# 1. Descriptive Statistics
# 1.1. Measures of Central Tendency
Measures of central tendency are statistical measures that represent the center or average of a distribution. They provide a single value that summarizes the entire dataset. The three commonly used measures of central tendency are the mean, median, and mode.
The mean is the most commonly used measure of central tendency. It is calculated by summing up all the values in the dataset and dividing it by the total number of values. The mean is sensitive to extreme values and outliers.
The median is the middle value in a dataset when it is arranged in ascending or descending order. If there is an even number of values, the median is the average of the two middle values. The median is less affected by extreme values and outliers compared to the mean.
The mode is the value that appears most frequently in a dataset. It is useful for categorical or discrete data.
Suppose we have a dataset of exam scores: 85, 90, 92, 78, 85, 90, 95.
The mean can be calculated as:
$$\text{Mean} = \frac{85 + 90 + 92 + 78 + 85 + 90 + 95}{7} = \frac{615}{7} \approx 87.86$$
The median can be calculated by arranging the values in ascending order:
78, 85, 85, 90, 90, 92, 95
Since there are 7 values, the median is the 4th value, which is 90.
This dataset is bimodal: both 85 and 90 appear twice, which is more often than any other value.
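These calculations can be reproduced with Python's standard library. Below is a minimal sketch using the `statistics` module; the list of scores simply repeats the example data above.

```python
from statistics import mean, median, multimode

scores = [85, 90, 92, 78, 85, 90, 95]

print(mean(scores))       # 87.857... (the mean)
print(median(scores))     # 90 (the middle value of the sorted data)
print(multimode(scores))  # [85, 90] (both values appear twice)
```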
## Exercise
Calculate the mean, median, and mode for the following dataset: 10, 15, 20, 25, 30, 35, 40, 45, 50.
### Solution
Mean:
$$\text{Mean} = \frac{10 + 15 + 20 + 25 + 30 + 35 + 40 + 45 + 50}{9} = 30$$
Median:
Arrange the values in ascending order: 10, 15, 20, 25, 30, 35, 40, 45, 50
Since there are 9 values, the median is the 5th value, which is 30.
Mode:
There is no mode in this dataset as all the values appear only once.
# 1.2. Measures of Variability
Measures of variability, also known as measures of dispersion, are statistical measures that describe the spread or dispersion of a dataset. They provide information about how the values in a dataset are spread out around the measures of central tendency. The three commonly used measures of variability are the range, variance, and standard deviation.
The range is the simplest measure of variability. It is calculated by subtracting the minimum value from the maximum value in a dataset. The range provides an indication of the spread of the data, but it is sensitive to extreme values and outliers.
The variance uses every value in the dataset and therefore gives a fuller picture of variability than the range. It measures the average squared deviation from the mean. To calculate the variance, subtract the mean from each value, square the result, and then calculate the average of the squared deviations.
The standard deviation is the square root of the variance. It provides a measure of the average distance between each data point and the mean. The standard deviation is widely used because it is in the same units as the original data.
Suppose we have a dataset of exam scores: 85, 90, 92, 78, 85, 90, 95.
To calculate the range, subtract the minimum value (78) from the maximum value (95):
Range = 95 - 78 = 17
To calculate the variance, first calculate the mean:
Mean = (85 + 90 + 92 + 78 + 85 + 90 + 95) / 7 = 87.86
Then, subtract the mean from each value, square the result, and calculate the average of the squared deviations:
Variance = ((85 - 87.86)^2 + (90 - 87.86)^2 + (92 - 87.86)^2 + (78 - 87.86)^2 + (85 - 87.86)^2 + (90 - 87.86)^2 + (95 - 87.86)^2) / 7 = 27.27
Finally, calculate the standard deviation by taking the square root of the variance:
Standard Deviation = sqrt(27.27) = 5.22
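The same measures can be computed programmatically. This is a minimal sketch using Python's `statistics` module; `pvariance` and `pstdev` divide by n, matching the formulas used above.

```python
from statistics import pvariance, pstdev

scores = [85, 90, 92, 78, 85, 90, 95]

data_range = max(scores) - min(scores)  # 17
variance = pvariance(scores)            # ~27.27 (average squared deviation from the mean)
std_dev = pstdev(scores)                # ~5.22 (square root of the variance)

print(data_range, variance, std_dev)
```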
## Exercise
Calculate the range, variance, and standard deviation for the following dataset: 10, 15, 20, 25, 30, 35, 40, 45, 50.
### Solution
Range:
Range = 50 - 10 = 40
Variance:
Calculate the mean:
Mean = (10 + 15 + 20 + 25 + 30 + 35 + 40 + 45 + 50) / 9 = 30
Then, subtract the mean from each value, square the result, and calculate the average of the squared deviations:
Variance = ((10 - 30)^2 + (15 - 30)^2 + (20 - 30)^2 + (25 - 30)^2 + (30 - 30)^2 + (35 - 30)^2 + (40 - 30)^2 + (45 - 30)^2 + (50 - 30)^2) / 9 = 166.67
Standard Deviation:
Standard Deviation = sqrt(166.67) = 12.91
# 1.3. Graphical Representations
Graphical representations are visual tools that help us understand and interpret data. They provide a way to present data in a clear and concise manner, making it easier to identify patterns, trends, and relationships.
There are several types of graphical representations commonly used in data analysis, including:
1. Bar charts: Bar charts are used to compare the frequency or distribution of categorical data. They consist of bars of different heights, where the height represents the frequency or proportion of each category.
2. Histograms: Histograms are similar to bar charts, but they are used to represent the distribution of continuous data. The x-axis represents the range of values, divided into intervals or bins, and the y-axis represents the frequency or proportion of values within each bin.
3. Line charts: Line charts are used to show the trend or change in a variable over time or another continuous variable. They consist of points connected by lines, where the x-axis represents the independent variable and the y-axis represents the dependent variable.
4. Scatter plots: Scatter plots are used to visualize the relationship between two continuous variables. Each point on the plot represents the values of the two variables for a single observation.
5. Box plots: Box plots, also known as box-and-whisker plots, are used to display the distribution of a continuous variable. They provide information about the median, quartiles, and any outliers in the data.
6. Pie charts: Pie charts are used to represent the proportion or percentage of different categories in a dataset. The entire pie represents 100%, and each slice represents a different category.
Suppose we have a dataset of monthly sales for a company over the past year. We want to visualize the trend in sales over time.
A line chart would be an appropriate graphical representation for this data. The x-axis would represent the months, and the y-axis would represent the sales amount. Each point on the line chart would represent the sales amount for a specific month, and the line would connect these points to show the trend in sales over time.
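As an illustration, here is a small sketch of how such a line chart might be drawn with matplotlib; the monthly sales figures below are invented purely for the example.

```python
import matplotlib.pyplot as plt

# Hypothetical monthly sales figures (thousands of dollars) for one year
months = ["Jan", "Feb", "Mar", "Apr", "May", "Jun",
          "Jul", "Aug", "Sep", "Oct", "Nov", "Dec"]
sales = [120, 135, 150, 145, 160, 175, 190, 185, 170, 165, 180, 200]

plt.plot(months, sales, marker="o")  # points connected by a line show the trend
plt.xlabel("Month")
plt.ylabel("Sales (thousands of dollars)")
plt.title("Monthly Sales Over the Past Year")
plt.show()
```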
## Exercise
Choose the appropriate graphical representation for the following scenarios:
1. Comparing the popularity of different programming languages among developers.
2. Showing the distribution of ages in a population.
3. Visualizing the relationship between hours studied and test scores.
4. Displaying the distribution of income levels in a city.
5. Representing the proportion of different types of vehicles on the road.
### Solution
1. Bar chart
2. Histogram
3. Scatter plot
4. Box plot
5. Pie chart
# 2. Probability and Distributions
Probability is a fundamental concept in statistics and data analysis. It is a measure of the likelihood of an event occurring. In data analysis, probability is used to make predictions, estimate unknown quantities, and make decisions based on uncertain information.
# 2.1. Basic Concepts of Probability
Probability is defined as a number between 0 and 1, where 0 represents an impossible event and 1 represents a certain event. The probability of an event occurring is denoted by P(event).
There are two types of events in probability:
1. Independent events: Independent events are events that are not influenced by each other. The occurrence of one event does not affect the probability of the other event occurring. For example, flipping a coin twice and getting heads on the first flip does not affect the probability of getting heads on the second flip.
2. Dependent events: Dependent events are events that are influenced by each other. The occurrence of one event affects the probability of the other event occurring. For example, drawing a card from a deck and not replacing it affects the probability of drawing a certain card on the next draw.
Suppose we have a standard deck of 52 playing cards. What is the probability of drawing a heart and then drawing a spade, without replacement?
The probability of drawing a heart on the first draw is 13/52, since there are 13 hearts in the deck. After drawing a heart, there are 51 cards left in the deck, and all 13 spades are still among them. Therefore, the probability of drawing a spade on the second draw, without replacement, is 13/51.
The probability of drawing a heart and then drawing a spade, without replacement, is the product of the individual probabilities:
P(heart and spade) = (13/52) * (13/51) = 13/204
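The same calculation can be checked with exact fractions in Python; this short sketch uses the standard `fractions` module.

```python
from fractions import Fraction

p_heart = Fraction(13, 52)        # 13 hearts out of 52 cards
p_spade_after = Fraction(13, 51)  # all 13 spades remain among the 51 cards left

p_both = p_heart * p_spade_after
print(p_both)         # 13/204
print(float(p_both))  # ~0.0637
```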
## Exercise
1. Calculate the probability of rolling a 6 on a fair six-sided die.
2. Calculate the probability of flipping a coin three times and getting heads on all three flips.
3. Calculate the probability of drawing a red card and then drawing a black card, without replacement, from a standard deck of 52 playing cards.
### Solution
1. P(rolling a 6) = 1/6
2. P(heads on first flip) = 1/2
P(heads on second flip) = 1/2
P(heads on third flip) = 1/2
P(heads on all three flips) = (1/2) * (1/2) * (1/2) = 1/8
3. P(drawing a red card) = 26/52 = 1/2
P(drawing a black card after drawing a red card) = 26/51
P(drawing a red card and then drawing a black card) = (1/2) * (26/51) = 13/51
# 2.2. Types of Probability Distributions
A probability distribution is a mathematical function that describes the likelihood of different outcomes in a random experiment or process. Probability distributions are used to model and analyze data, make predictions, and estimate unknown quantities.
There are several types of probability distributions commonly used in statistics and data analysis:
1. Discrete probability distributions: Discrete probability distributions are used to model random variables that can only take on a finite or countable number of values. Examples of discrete probability distributions include the binomial distribution, the Poisson distribution, and the geometric distribution.
2. Continuous probability distributions: Continuous probability distributions are used to model random variables that can take on any value within a certain range. Examples of continuous probability distributions include the normal distribution, the exponential distribution, and the uniform distribution.
3. Joint probability distributions: Joint probability distributions are used to model the probability of multiple events occurring simultaneously. They are often used in multivariate analysis and can be represented by probability density functions or probability mass functions.
The binomial distribution is a discrete probability distribution that models the number of successes in a fixed number of independent Bernoulli trials. It is characterized by two parameters: the number of trials (n) and the probability of success (p) in each trial.
Suppose we have a fair coin and we flip it 10 times. What is the probability of getting exactly 5 heads?
The probability of getting exactly 5 heads in 10 coin flips can be calculated using the binomial distribution formula:
P(X = k) = (n choose k) * p^k * (1 - p)^(n - k)
where n is the number of trials, k is the number of successes, p is the probability of success, and (n choose k) is the binomial coefficient.
In this case, n = 10, k = 5, and p = 0.5 (since the coin is fair). Plugging these values into the formula, we get:
P(X = 5) = (10 choose 5) * (0.5)^5 * (1 - 0.5)^(10 - 5)
= 252 * 0.5^5 * 0.5^5
= 252 * 0.5^10
= 252 * 0.0009765625
= 0.24609375
Therefore, the probability of getting exactly 5 heads in 10 coin flips is approximately 0.2461.
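The binomial formula translates directly into code. Below is a minimal Python sketch using `math.comb` for the binomial coefficient; it reproduces the hand calculation above.

```python
from math import comb

def binomial_pmf(k, n, p):
    """P(X = k) for a binomial random variable with n trials and success probability p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

print(binomial_pmf(5, 10, 0.5))  # 0.24609375, matching the hand calculation
```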
## Exercise
1. Calculate the probability of getting at most 2 heads in 5 coin flips, assuming the coin is fair.
2. Calculate the probability of getting at least 4 tails in 8 coin flips, assuming the coin is fair.
3. Calculate the probability of rolling a sum of 7 on two fair six-sided dice.
### Solution
1. P(X <= 2) = P(X = 0) + P(X = 1) + P(X = 2)
= (5 choose 0) * (0.5)^0 * (1 - 0.5)^(5 - 0) + (5 choose 1) * (0.5)^1 * (1 - 0.5)^(5 - 1) + (5 choose 2) * (0.5)^2 * (1 - 0.5)^(5 - 2)
= 1 * 1 * 0.5^5 + 5 * 0.5 * 0.5^4 + 10 * 0.5^2 * 0.5^3
= 0.03125 + 0.15625 + 0.3125
= 0.5
2. P(X >= 4) = P(X = 4) + P(X = 5) + P(X = 6) + P(X = 7) + P(X = 8)
= (8 choose 4) * (0.5)^4 * (1 - 0.5)^(8 - 4) + (8 choose 5) * (0.5)^5 * (1 - 0.5)^(8 - 5) + (8 choose 6) * (0.5)^6 * (1 - 0.5)^(8 - 6) + (8 choose 7) * (0.5)^7 * (1 - 0.5)^(8 - 7) + (8 choose 8) * (0.5)^8 * (1 - 0.5)^(8 - 8)
= 70 * 0.5^4 * 0.5^4 + 56 * 0.5^5 * 0.5^3 + 28 * 0.5^6 * 0.5^2 + 8 * 0.5^7 * 0.5^1 + 1 * 0.5^8 * 0.5^0
= 0.2734375 + 0.21875 + 0.109375 + 0.03125 + 0.00390625
= 0.63671875
3. P(sum = 7) = P(1, 6) + P(2, 5) + P(3, 4) + P(4, 3) + P(5, 2) + P(6, 1)
= 1/36 + 1/36 + 1/36 + 1/36 + 1/36 + 1/36
= 1/6
# 2.2. Types of Probability Distributions
In addition to discrete and continuous probability distributions, there are other types of probability distributions that are commonly used in statistics and data analysis:
4. Uniform distribution: The uniform distribution is a continuous probability distribution where all outcomes have equal probabilities. In other words, it is a flat distribution where every value within a certain range is equally likely to occur.
5. Exponential distribution: The exponential distribution is a continuous probability distribution that models the time between events in a Poisson process. It is often used to model the time until the next occurrence of an event.
6. Poisson distribution: The Poisson distribution is a discrete probability distribution that models the number of events that occur in a fixed interval of time or space. It is often used to model rare events that occur independently of each other.
7. Normal distribution: The normal distribution, also known as the Gaussian distribution, is a continuous probability distribution that is symmetric and bell-shaped. It is widely used in statistics and data analysis due to its mathematical properties and its ability to approximate many natural phenomena.
8. Log-normal distribution: The log-normal distribution is a continuous probability distribution that is derived from the logarithm of a normal distribution. It is often used to model data that is skewed to the right, such as income or stock prices.
9. Gamma distribution: The gamma distribution is a continuous probability distribution that is often used to model the waiting time until a specified number of events occur in a Poisson process. It is also used to model the distribution of the sum of exponentially distributed random variables.
10. Beta distribution: The beta distribution is a continuous probability distribution that is often used to model random variables that have values between 0 and 1. It is commonly used in Bayesian statistics and in modeling proportions or probabilities.
11. Chi-square distribution: The chi-square distribution is a continuous probability distribution that is derived from the sum of squared standard normal random variables. It is often used in hypothesis testing and in constructing confidence intervals for the variance of a normal distribution.
12. Student's t-distribution: The t-distribution is a continuous probability distribution that is used when the sample size is small and the population standard deviation is unknown. It is often used in hypothesis testing and in constructing confidence intervals for the mean of a normal distribution.
Let's consider an example of the normal distribution. The normal distribution is characterized by two parameters: the mean (μ) and the standard deviation (σ). The mean determines the center of the distribution, while the standard deviation determines the spread.
Suppose we have a population of adult heights that follows a normal distribution with a mean of 170 cm and a standard deviation of 5 cm. We can use the normal distribution to answer questions about the probability of certain height ranges.
For example, what is the probability that a randomly selected adult is taller than 180 cm? To answer this question, we can standardize the value using the z-score formula:
z = (x - μ) / σ
where x is the value of interest, μ is the mean, and σ is the standard deviation.
In this case, x = 180, μ = 170, and σ = 5. Plugging these values into the formula, we get:
z = (180 - 170) / 5
= 10 / 5
= 2
We can then use a standard normal distribution table or statistical software to find the probability of a z-score greater than 2. In this case, the probability is approximately 0.0228, or 2.28%.
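Rather than reading a table, the same tail probability can be computed in Python. This sketch uses the standard library's `statistics.NormalDist` with the assumed mean of 170 cm and standard deviation of 5 cm.

```python
from statistics import NormalDist

heights = NormalDist(mu=170, sigma=5)

# P(height > 180) is the upper tail beyond a z-score of 2
p_taller_than_180 = 1 - heights.cdf(180)
print(round(p_taller_than_180, 4))  # ~0.0228
```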
## Exercise
1. Calculate the z-score for a height of 165 cm in the population of adult heights described above.
2. Calculate the probability that a randomly selected adult is shorter than 160 cm.
3. Calculate the probability that a randomly selected adult is between 165 cm and 175 cm.
### Solution
1. z = (165 - 170) / 5
= -1
2. P(X < 160) = P(Z < (160 - 170) / 5)
= P(Z < -2)
= 0.0228 (approximately)
3. P(165 < X < 175) = P((165 - 170) / 5 < Z < (175 - 170) / 5)
= P(-1 < Z < 1)
= P(Z < 1) - P(Z < -1)
= 0.8413 - 0.1587
= 0.6826 (approximately)
# 2.3. Normal Distribution and Z-Scores
The normal distribution, also known as the Gaussian distribution, is a continuous probability distribution that is widely used in statistics and data analysis. It is characterized by its bell-shaped curve, which is symmetric and centered around the mean.
The normal distribution is defined by two parameters: the mean (μ) and the standard deviation (σ). The mean determines the center of the distribution, while the standard deviation determines the spread or variability.
The probability density function (PDF) of the normal distribution is given by the formula:
$$f(x) = \frac{1}{\sqrt{2\pi\sigma^2}}e^{-\frac{(x-\mu)^2}{2\sigma^2}}$$
where x is the value of interest, μ is the mean, and σ is the standard deviation.
Z-scores, also known as standard scores, are used to standardize values from a normal distribution. A z-score represents the number of standard deviations a value is from the mean. The formula for calculating the z-score is:
$$z = \frac{x - \mu}{\sigma}$$
where x is the value, μ is the mean, and σ is the standard deviation.
Z-scores are useful for comparing values from different normal distributions or for finding probabilities associated with specific values.
Suppose we have a population of test scores that follows a normal distribution with a mean of 80 and a standard deviation of 10. We can use the normal distribution and z-scores to answer questions about the probability of certain test scores.
For example, what is the probability that a randomly selected student scores above 90? To answer this question, we can calculate the z-score for a score of 90:
$$z = \frac{90 - 80}{10} = 1$$
Using a standard normal distribution table or statistical software, we find that P(Z < 1) is approximately 0.8413. Since we want the probability of scoring above 90, we take the upper tail: 1 - 0.8413 = 0.1587, or about 15.87%.
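The same probability can be computed directly; this sketch again uses `statistics.NormalDist`, with the mean of 80 and standard deviation of 10 from the example.

```python
from statistics import NormalDist

scores = NormalDist(mu=80, sigma=10)

# P(score > 90) = 1 - P(Z < 1)
p_above_90 = 1 - scores.cdf(90)
print(round(p_above_90, 4))  # ~0.1587
```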
## Exercise
1. Calculate the z-score for a test score of 75 in the population of test scores described above.
2. Calculate the probability that a randomly selected student scores below 70.
3. Calculate the probability that a randomly selected student scores between 85 and 95.
### Solution
1. $$z = \frac{75 - 80}{10} = -0.5$$
2. $$P(X < 70) = P(Z < \frac{70 - 80}{10}) = P(Z < -1) = 0.1587$$
3. $$P(85 < X < 95) = P(\frac{85 - 80}{10} < Z < \frac{95 - 80}{10}) = P(-0.5 < Z < 1.5) = P(Z < 1.5) - P(Z < -0.5) = 0.9332 - 0.3085 = 0.6247$$
# 3. Sampling and Sampling Distributions
Sampling is the process of selecting a subset of individuals or items from a larger population. It is a common practice in statistics and data analysis because it allows us to make inferences about the population based on the information collected from the sample.
There are several types of sampling methods, including random sampling, stratified sampling, and cluster sampling. Each method has its own advantages and disadvantages, and the choice of sampling method depends on the research question and the characteristics of the population.
Random sampling is a commonly used sampling method in which each individual or item in the population has an equal chance of being selected for the sample. This method ensures that the sample is representative of the population and reduces the potential for bias.
Stratified sampling is a sampling method in which the population is divided into homogeneous groups called strata, and a random sample is selected from each stratum. This method ensures that each stratum is represented in the sample and allows for more precise estimates for subgroups of the population.
Cluster sampling is a sampling method in which the population is divided into clusters, and a random sample of clusters is selected. Then, all individuals or items within the selected clusters are included in the sample. This method is useful when it is difficult or expensive to obtain a complete list of individuals or items in the population.
Suppose we want to estimate the average height of students in a university. The population consists of all students in the university. We could use random sampling to select a sample of students, measure their heights, and calculate the average height. This estimate would then be used to make inferences about the average height of all students in the university.
## Exercise
1. What is the advantage of random sampling?
2. When would you use stratified sampling?
3. When would you use cluster sampling?
### Solution
1. The advantage of random sampling is that it ensures that each individual or item in the population has an equal chance of being selected for the sample, which reduces the potential for bias and ensures that the sample is representative of the population.
2. Stratified sampling is used when the population can be divided into homogeneous groups called strata, and we want to ensure that each stratum is represented in the sample. This allows for more precise estimates for subgroups of the population.
3. Cluster sampling is used when it is difficult or expensive to obtain a complete list of individuals or items in the population. It involves dividing the population into clusters and selecting a random sample of clusters, then including all individuals or items within the selected clusters in the sample.
# 3.1. Types of Sampling Methods
There are several types of sampling methods that researchers use to select a subset of individuals or items from a larger population. Each method has its own advantages and disadvantages, and the choice of sampling method depends on the research question and the characteristics of the population.
1. Random Sampling: In random sampling, each individual or item in the population has an equal chance of being selected for the sample. This method ensures that the sample is representative of the population and reduces the potential for bias. Random sampling is commonly used in research studies and surveys.
2. Stratified Sampling: Stratified sampling involves dividing the population into homogeneous groups called strata, and then selecting a random sample from each stratum. This method ensures that each stratum is represented in the sample and allows for more precise estimates for subgroups of the population. Stratified sampling is useful when there are distinct subgroups within the population that may have different characteristics.
3. Cluster Sampling: Cluster sampling involves dividing the population into clusters, and then selecting a random sample of clusters. All individuals or items within the selected clusters are included in the sample. This method is useful when it is difficult or expensive to obtain a complete list of individuals or items in the population. Cluster sampling is commonly used in studies that involve geographical areas or organizations.
4. Systematic Sampling: Systematic sampling involves selecting individuals or items from a population at regular intervals. For example, if a researcher wants to select a sample of 100 students from a school with 1000 students, they could select every 10th student from a list of all students. Systematic sampling is a simple and efficient method, but it may introduce bias if there is a pattern in the population.
5. Convenience Sampling: Convenience sampling involves selecting individuals or items that are easily accessible or convenient for the researcher. This method is often used in pilot studies or exploratory research, but it may introduce bias because the sample may not be representative of the population.
It is important for researchers to carefully consider the sampling method they use to ensure that their sample is representative of the population and that their findings can be generalized.
# 3.2. Sampling Bias and Random Sampling
Sampling bias refers to a systematic error in the sampling process that results in a sample that is not representative of the population. This can occur when certain individuals or items in the population are more likely to be included in the sample than others. Sampling bias can lead to inaccurate and misleading results.
Random sampling is a method that helps to reduce sampling bias by ensuring that each individual or item in the population has an equal chance of being selected for the sample. This method helps to minimize the potential for bias and increase the generalizability of the findings.
Random sampling can be achieved through various techniques, such as using random number generators or random sampling tables. Researchers can also use computer software to generate random samples. The key idea is that each individual or item in the population should have an equal probability of being selected.
By using random sampling, researchers can increase the likelihood that their sample is representative of the population and that their findings can be generalized. This is important for drawing accurate conclusions and making valid inferences about the population.
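For example, a simple random sample can be drawn in Python with the standard `random` module. The sketch below uses a made-up list of student IDs as the sampling frame.

```python
import random

# Hypothetical sampling frame: IDs of all 1000 students in the population
student_ids = list(range(1, 1001))

random.seed(42)                            # fixed seed so the draw is reproducible
sample = random.sample(student_ids, k=50)  # each student has an equal chance of selection
print(sample[:10])
```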
Let's say a researcher wants to study the eating habits of college students. They decide to recruit participants by standing outside the campus cafeteria and asking students if they would like to participate in the study. This convenience sampling method may introduce sampling bias because it only includes students who eat at the cafeteria and are available at that specific time. The sample may not be representative of all college students and may not accurately reflect their eating habits.
To reduce sampling bias, the researcher could use random sampling instead. They could obtain a list of all college students and use a random number generator to select a sample of participants. This would ensure that each student has an equal chance of being selected and increase the likelihood of a representative sample.
## Exercise
Which sampling method would be most appropriate in the following scenarios? Explain your reasoning.
1. A researcher wants to study the average income of households in a city. The city has distinct neighborhoods with different socioeconomic characteristics.
2. A marketing company wants to conduct a survey to gather feedback on a new product. They plan to recruit participants from a shopping mall.
3. A researcher wants to study the prevalence of a rare disease in a population. The disease is known to be more common in certain geographical areas.
### Solution
1. Stratified sampling would be most appropriate in this scenario. By dividing the city into neighborhoods and selecting a random sample from each neighborhood, the researcher can ensure that each socioeconomic group is represented in the sample.
2. Convenience sampling is being used in this scenario. The sample may not be representative of the target population because it only includes individuals who happen to be at the shopping mall. A better approach would be to use random sampling to select participants from a larger population.
3. Cluster sampling would be most appropriate in this scenario. By dividing the population into geographical areas and selecting a random sample of areas, the researcher can include individuals from areas where the disease is more prevalent.
# 3.3. Sampling Distributions and Central Limit Theorem
Sampling distributions are probability distributions that describe the behavior of a statistic (such as the mean or proportion) calculated from multiple samples of the same size taken from a population. They provide important information about the variability and distribution of the statistic.
The Central Limit Theorem (CLT) is a fundamental concept in statistics that states that, regardless of the shape of the population distribution, the sampling distribution of the mean approaches a normal distribution as the sample size increases. This is true even if the population distribution is not normally distributed.
The CLT has several key implications. First, it allows us to make inferences about the population mean based on the sample mean. Second, it provides a basis for hypothesis testing and confidence interval estimation. Finally, it allows us to use parametric statistical methods (such as t-tests and ANOVA) that assume normality.
Let's say we want to estimate the average height of adult males in a certain country. We take multiple random samples of 100 adult males each and calculate the mean height for each sample. If we plot the distribution of these sample means, we would expect it to be approximately normally distributed, even if the heights in the population are not normally distributed.
This is because the CLT tells us that as the sample size increases, the distribution of sample means becomes more and more normal. This allows us to use the properties of the normal distribution to make inferences about the population mean.
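A small simulation illustrates this. The sketch below draws repeated samples of size 100 from a deliberately skewed (non-normal) population of made-up values and records the sample means; their distribution is approximately normal, as the CLT predicts.

```python
import random
from statistics import mean, stdev

random.seed(1)

# A deliberately non-normal (right-skewed) population of made-up values
population = [random.expovariate(1 / 170) for _ in range(100_000)]

# Draw many samples of size 100 and record each sample mean
sample_means = [mean(random.sample(population, 100)) for _ in range(2_000)]

print(round(mean(sample_means), 1))   # close to the population mean (~170)
print(round(stdev(sample_means), 1))  # close to the population sd divided by sqrt(100)
# A histogram of sample_means would look approximately normal, as the CLT predicts.
```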
## Exercise
1. True or False: The Central Limit Theorem applies to any population distribution.
2. True or False: The Central Limit Theorem guarantees that the sample mean will be exactly equal to the population mean.
3. True or False: The Central Limit Theorem applies to sample proportions as well as sample means.
### Solution
1. True. The Central Limit Theorem applies to any population distribution, regardless of its shape.
2. False. The Central Limit Theorem states that as the sample size increases, the distribution of sample means approaches a normal distribution centered around the population mean. It does not guarantee that the sample mean will be exactly equal to the population mean.
3. True. The Central Limit Theorem applies to any statistic that is calculated from multiple samples of the same size, including sample proportions.
# 4. Hypothesis Testing
Hypothesis testing is a fundamental concept in statistics that allows us to make decisions or draw conclusions about a population based on sample data. It involves setting up a null hypothesis and an alternative hypothesis, collecting data, and using statistical tests to determine whether the data provides enough evidence to reject the null hypothesis in favor of the alternative hypothesis.
The null hypothesis (H0) represents the status quo or the default assumption. It states that there is no significant difference or relationship between variables in the population. The alternative hypothesis (Ha) represents the researcher's claim or the hypothesis that contradicts the null hypothesis. It states that there is a significant difference or relationship between variables in the population.
The goal of hypothesis testing is to assess the strength of evidence against the null hypothesis and make a decision based on that evidence. This decision is typically made by calculating a test statistic and comparing it to a critical value or by calculating a p-value.
Let's say a researcher wants to test whether a new drug is effective in reducing blood pressure. The null hypothesis would be that the drug has no effect on blood pressure, while the alternative hypothesis would be that the drug does have an effect.
The researcher would then collect data by randomly assigning participants to either a control group (receiving a placebo) or an experimental group (receiving the new drug). After a certain period of time, the researcher would measure the participants' blood pressure and compare the means of the two groups.
If the difference in means is large enough, the researcher would calculate a test statistic (such as a t-statistic) and compare it to a critical value from a statistical table. If the test statistic is greater than the critical value, the researcher would reject the null hypothesis and conclude that there is evidence to support the alternative hypothesis.
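In practice this comparison is often run as an independent samples t-test with statistical software. The sketch below uses SciPy on made-up blood pressure readings for the two groups; the numbers are purely illustrative.

```python
from scipy import stats

# Hypothetical systolic blood pressure readings after the trial period
placebo = [142, 138, 150, 145, 139, 147, 141, 144, 148, 143]
drug    = [135, 130, 138, 133, 129, 136, 131, 134, 137, 132]

t_stat, p_value = stats.ttest_ind(drug, placebo)
print(t_stat, p_value)
# If p_value is below the chosen significance level (e.g. 0.05),
# we reject the null hypothesis of no difference in mean blood pressure.
```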
## Exercise
1. What is the purpose of hypothesis testing?
2. What does the null hypothesis represent?
3. How is the alternative hypothesis different from the null hypothesis?
### Solution
1. The purpose of hypothesis testing is to make decisions or draw conclusions about a population based on sample data.
2. The null hypothesis represents the status quo or the default assumption. It states that there is no significant difference or relationship between variables in the population.
3. The alternative hypothesis represents the researcher's claim or the hypothesis that contradicts the null hypothesis. It states that there is a significant difference or relationship between variables in the population.
# 4.1. Null and Alternative Hypotheses
In hypothesis testing, the null hypothesis (H0) is a statement of no effect or no difference. It represents the status quo or the default assumption. The null hypothesis assumes that there is no significant difference or relationship between variables in the population.
The alternative hypothesis (Ha), on the other hand, is a statement that contradicts the null hypothesis. It represents the researcher's claim or the hypothesis that there is a significant difference or relationship between variables in the population.
The null and alternative hypotheses are mutually exclusive and exhaustive. This means that they cover all possible outcomes or explanations for the observed data. The goal of hypothesis testing is to gather evidence to either reject the null hypothesis in favor of the alternative hypothesis or fail to reject the null hypothesis.
Let's consider an example to better understand null and alternative hypotheses. Suppose a researcher wants to test whether a new teaching method improves students' test scores. The null hypothesis would be that there is no significant difference in test scores between students taught with the new method and students taught with the traditional method. The alternative hypothesis would be that there is a significant difference in test scores between the two groups.
The null hypothesis (H0): The new teaching method has no effect on students' test scores.
The alternative hypothesis (Ha): The new teaching method has a significant effect on students' test scores.
The researcher would collect data by randomly assigning students to either the new teaching method group or the traditional teaching method group. After a certain period of time, the researcher would compare the mean test scores of the two groups and use statistical tests to determine whether there is enough evidence to reject the null hypothesis in favor of the alternative hypothesis.
## Exercise
1. What is the purpose of the null hypothesis?
2. What is the purpose of the alternative hypothesis?
3. Can both the null hypothesis and the alternative hypothesis be true at the same time?
### Solution
1. The purpose of the null hypothesis is to represent the status quo or the default assumption. It assumes that there is no significant difference or relationship between variables in the population.
2. The purpose of the alternative hypothesis is to represent the researcher's claim or the hypothesis that contradicts the null hypothesis. It states that there is a significant difference or relationship between variables in the population.
3. No, the null hypothesis and the alternative hypothesis are mutually exclusive. They cover all possible outcomes or explanations for the observed data. Only one of them can be true.
# 4.2. Type I and Type II Errors
In hypothesis testing, there are two types of errors that can occur: Type I error and Type II error.
A Type I error occurs when the null hypothesis is rejected, but it is actually true. In other words, it is a false positive. This means that the researcher concludes that there is a significant difference or relationship between variables when there is actually none in the population. The probability of making a Type I error is denoted by α (alpha) and is called the significance level.
A Type II error occurs when the null hypothesis is not rejected, but it is actually false. In other words, it is a false negative. This means that the researcher fails to detect a significant difference or relationship between variables when there is actually one in the population. The probability of making a Type II error is denoted by β (beta).
Let's continue with the example of testing a new teaching method. Suppose the researcher conducted a hypothesis test and rejected the null hypothesis, concluding that the new teaching method has a significant effect on students' test scores. However, in reality, the new teaching method has no effect on test scores (null hypothesis is true). This would be a Type I error.
On the other hand, if the researcher failed to reject the null hypothesis, concluding that there is no significant difference in test scores between the two groups, when in fact the new teaching method does have a significant effect (alternative hypothesis is true), this would be a Type II error.
## Exercise
1. What is a Type I error?
2. What is a Type II error?
3. Which type of error is more serious in your opinion? Why?
### Solution
1. A Type I error is a false positive. It occurs when the null hypothesis is rejected, but it is actually true.
2. A Type II error is a false negative. It occurs when the null hypothesis is not rejected, but it is actually false.
3. The seriousness of Type I and Type II errors depends on the context and consequences of the decision. In some cases, a Type I error may be more serious because it leads to false conclusions and potentially wrong actions. In other cases, a Type II error may be more serious because it means failing to detect a real effect or relationship.
# 4.3. One-Sample and Two-Sample Tests
In hypothesis testing, there are different types of tests depending on the number of samples being compared. The two main types are one-sample tests and two-sample tests.
A one-sample test is used when comparing a sample mean or proportion to a known population mean or proportion. It is used to determine if the sample is significantly different from the population.
A two-sample test is used when comparing the means or proportions of two independent samples. It is used to determine if there is a significant difference between the two groups.
Let's say a researcher wants to test if a new medication is effective in reducing blood pressure. They collect a sample of patients and measure their blood pressure before and after taking the medication. The researcher can use a one-sample test to compare the average blood pressure before and after taking the medication to see if there is a significant difference.
On the other hand, if the researcher wants to compare the blood pressure of patients who took the medication to those who did not take the medication, they would use a two-sample test to determine if there is a significant difference between the two groups.
## Exercise
1. When would you use a one-sample test?
2. When would you use a two-sample test?
### Solution
1. A one-sample test is used when comparing a sample mean or proportion to a known population mean or proportion. It is used to determine if the sample is significantly different from the population.
2. A two-sample test is used when comparing the means or proportions of two independent samples. It is used to determine if there is a significant difference between the two groups.
# 4.4. Paired and Independent Samples
In hypothesis testing, there are two types of two-sample tests: paired samples and independent samples.
Paired samples are used when the two samples being compared are related or matched in some way. For example, if we want to compare the effectiveness of two different study techniques, we can use a paired samples test by having each participant try both techniques and measuring their performance.
Independent samples, on the other hand, are used when the two samples being compared are unrelated or not matched. For example, if we want to compare the average heights of men and women, we can use an independent samples test by collecting separate samples of men and women and comparing their heights.
Let's say a researcher wants to compare the effectiveness of two different diets in reducing weight. They recruit a group of participants and randomly assign them to either Diet A or Diet B. After a certain period of time, they measure the participants' weight and compare the average weight loss between the two groups. In this case, an independent samples test would be appropriate because the two groups (Diet A and Diet B) are unrelated.
On the other hand, if the researcher wants to compare the effectiveness of a weight loss program before and after a specific intervention, they can use a paired samples test. They would measure the participants' weight before the intervention, then implement the intervention, and finally measure their weight again. The paired samples test would compare the average weight loss within each participant.
## Exercise
1. When would you use a paired samples test?
2. When would you use an independent samples test?
### Solution
1. A paired samples test is used when the two samples being compared are related or matched in some way. It is used to determine if there is a significant difference within the pairs.
2. An independent samples test is used when the two samples being compared are unrelated or not matched. It is used to determine if there is a significant difference between the two groups.
# 5. t-Tests and ANOVA
t-tests and ANOVA (Analysis of Variance) are statistical tests used to compare means between groups. They are commonly used in hypothesis testing to determine if there is a significant difference between the means of two or more groups.
t-tests are used when comparing the means of two groups. There are different types of t-tests depending on the characteristics of the data and the research question. Some common types of t-tests include the independent samples t-test, paired samples t-test, and one-sample t-test.
ANOVA, on the other hand, is used when comparing the means of three or more groups. It allows us to determine whether there is a significant difference among the group means; identifying which specific groups differ requires follow-up (post-hoc) tests. ANOVA works by comparing the variation between groups to the variation within groups.
Let's say a researcher wants to compare the effectiveness of three different teaching methods on student performance. They randomly assign students to one of the three teaching methods and measure their test scores at the end of the semester. To analyze the data, the researcher can use ANOVA to determine if there is a significant difference in the mean test scores between the three groups.
On the other hand, if the researcher wants to compare the average heights of men and women, they can use an independent samples t-test. They would collect separate samples of men and women and compare their heights using the t-test.
## Exercise
1. When would you use a t-test?
2. When would you use ANOVA?
### Solution
1. A t-test is used when comparing the means of two groups. It is used to determine if there is a significant difference between the means of the two groups.
2. ANOVA is used when comparing the means of three or more groups. It is used to determine if there is a significant difference among the group means; post-hoc tests are then used to identify which specific groups differ from each other.
# 5.1. One-Sample and Two-Sample t-Tests
There are different types of t-tests that can be used depending on the characteristics of the data and the research question. Two common types of t-tests are the one-sample t-test and the two-sample t-test.
The one-sample t-test is used when you want to compare the mean of a single sample to a known value or theoretical expectation. For example, let's say you want to test if the average height of a certain population is significantly different from the national average height. You would collect a sample of heights from the population and compare it to the known national average height using a one-sample t-test.
The two-sample t-test, on the other hand, is used when you want to compare the means of two independent samples. For example, let's say you want to test if there is a significant difference in the average test scores of students who received tutoring and those who did not. You would collect two separate samples, one from each group, and compare their means using a two-sample t-test.
Both the one-sample t-test and the two-sample t-test calculate a t-value and a p-value. The t-value measures the difference between the means relative to the variability within the groups, while the p-value indicates the probability of observing such a difference by chance alone.
Suppose you are a researcher studying the effectiveness of a new teaching method on student performance. You collect test scores from a random sample of 30 students who were taught using the new method and find that the average test score is 85. You want to determine if this average is significantly different from the population mean test score of 80.
To test this, you can use a one-sample t-test. The null hypothesis would be that there is no difference between the sample mean and the population mean (i.e., the true mean is equal to 80). The alternative hypothesis would be that there is a significant difference (i.e., the true mean is not equal to 80).
By performing the one-sample t-test, you calculate a t-value of 2.5 and a p-value of 0.015. Since the p-value is less than the commonly used significance level of 0.05, you reject the null hypothesis and conclude that there is a significant difference between the sample mean and the population mean.
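Such a test is typically run with software. The sketch below uses SciPy's one-sample t-test on a made-up list of 30 scores; the data are illustrative only and will not reproduce the exact t-value and p-value quoted above.

```python
from scipy import stats

# Hypothetical test scores for 30 students taught with the new method
scores = [85, 92, 78, 88, 95, 80, 83, 90, 87, 86,
          91, 79, 84, 89, 93, 82, 88, 85, 90, 76,
          94, 81, 87, 86, 92, 83, 89, 85, 88, 84]

t_stat, p_value = stats.ttest_1samp(scores, popmean=80)
print(t_stat, p_value)
# A p-value below 0.05 leads us to reject the null hypothesis that the true mean is 80.
```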
## Exercise
1. When would you use a one-sample t-test?
2. When would you use a two-sample t-test?
### Solution
1. A one-sample t-test is used when you want to compare the mean of a single sample to a known value or theoretical expectation.
2. A two-sample t-test is used when you want to compare the means of two independent samples.
# 5.2. ANOVA and Post-Hoc Tests
Analysis of Variance (ANOVA) is a statistical test used to compare the means of three or more groups. It is an extension of the two-sample t-test and allows for the comparison of multiple groups simultaneously.
ANOVA works by partitioning the total variation in the data into two components: the variation between groups and the variation within groups. The between-group variation measures the differences in means between the groups, while the within-group variation measures the variability within each group.
The ANOVA test calculates an F-statistic and a p-value. The F-statistic compares the between-group variation to the within-group variation, and the p-value indicates the probability of observing such a difference by chance alone.
If the p-value is below a predetermined significance level (usually 0.05), it suggests that there is a significant difference between at least one pair of groups. However, ANOVA does not tell us which specific groups are different from each other. To determine this, post-hoc tests are performed.
Post-hoc tests are used to make pairwise comparisons between groups after a significant result is obtained from the ANOVA test. These tests help identify which specific groups are significantly different from each other.
Suppose you are a researcher studying the effects of three different diets on weight loss. You randomly assign participants to one of three groups: Group A follows Diet 1, Group B follows Diet 2, and Group C follows Diet 3. After a specified period, you measure the weight loss for each participant.
To analyze the data, you perform an ANOVA test to compare the means of the three groups. The null hypothesis would be that there is no difference in weight loss between the three diets (i.e., the true means are equal). The alternative hypothesis would be that there is a significant difference in weight loss between at least one pair of diets (i.e., the true means are not all equal).
If the ANOVA test yields a significant result (p-value < 0.05), you can proceed with post-hoc tests to determine which specific diets are significantly different from each other. Common post-hoc tests include Tukey's Honestly Significant Difference (HSD) test, the Bonferroni correction, and the Scheffe test.
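The one-way ANOVA itself can be run with SciPy, as sketched below on made-up weight-loss values for the three diets; the post-hoc comparisons mentioned above would be a separate step.

```python
from scipy import stats

# Hypothetical weight loss (kg) for participants on each diet
diet_1 = [3.1, 2.8, 4.0, 3.5, 2.9, 3.7, 3.3]
diet_2 = [4.5, 5.1, 4.8, 5.3, 4.2, 4.9, 5.0]
diet_3 = [2.0, 2.4, 1.8, 2.6, 2.2, 1.9, 2.3]

f_stat, p_value = stats.f_oneway(diet_1, diet_2, diet_3)
print(f_stat, p_value)
# A p-value below 0.05 suggests at least one pair of diets differs in mean weight loss;
# post-hoc tests (e.g. Tukey's HSD) would then identify which pairs differ.
```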
## Exercise
1. What does ANOVA stand for?
2. What does the F-statistic compare in ANOVA?
3. When would you use post-hoc tests?
### Solution
1. ANOVA stands for Analysis of Variance.
2. The F-statistic compares the between-group variation to the within-group variation in ANOVA.
3. Post-hoc tests are used to make pairwise comparisons between groups after a significant result is obtained from the ANOVA test.
# 5.3. Assumptions and Interpretation
When conducting an ANOVA test, there are several assumptions that need to be met in order for the results to be valid. Violating these assumptions can lead to inaccurate conclusions.
The assumptions of ANOVA include:
1. Independence: The observations within each group are independent of each other.
2. Normality: The distribution of the residuals (the differences between the observed values and the group means) is approximately normal.
3. Homogeneity of variances: The variances of the residuals are equal across all groups.
If these assumptions are not met, alternative tests or transformations of the data may be necessary.
Interpreting the results of an ANOVA test involves examining the F-statistic and the p-value. The F-statistic measures the ratio of between-group variation to within-group variation. A larger F-statistic indicates a larger difference between the group means relative to the variability within each group.
The p-value indicates the probability of observing such a difference by chance alone. If the p-value is below the significance level (usually 0.05), it suggests that there is a significant difference between at least one pair of groups.
However, ANOVA does not tell us which specific groups are different from each other. To determine this, post-hoc tests are performed.
Suppose you conducted an ANOVA test to compare the mean scores of three different teaching methods: Method A, Method B, and Method C. The null hypothesis would be that there is no difference in mean scores between the three methods (i.e., the true means are equal). The alternative hypothesis would be that there is a significant difference in mean scores between at least one pair of methods (i.e., the true means are not all equal).
If the ANOVA test yields a significant result (p-value < 0.05), you can proceed with post-hoc tests to determine which specific methods are significantly different from each other. For example, you might find that Method A and Method B have significantly higher mean scores compared to Method C.
Interpreting the results of an ANOVA test also involves considering the effect size. Common effect size measures for ANOVA include eta-squared (η²) and partial eta-squared (η²p). These measures quantify the proportion of variability in the dependent variable that can be attributed to the independent variable(s).
## Exercise
1. What are the assumptions of ANOVA?
2. What does the F-statistic measure in ANOVA?
3. What does a significant p-value indicate in ANOVA?
### Solution
1. The assumptions of ANOVA include independence, normality, and homogeneity of variances.
2. The F-statistic measures the ratio of between-group variation to within-group variation in ANOVA.
3. A significant p-value in ANOVA indicates that there is a significant difference between at least one pair of groups.
# 6. Regression Analysis
Regression analysis is a statistical method used to model the relationship between a dependent variable and one or more independent variables. It is commonly used to predict or estimate the value of the dependent variable based on the values of the independent variables.
There are different types of regression analysis, but the most common type is simple linear regression. Simple linear regression involves fitting a straight line to a scatter plot of data points. The equation of the line is given by:
$$y = \beta_0 + \beta_1x$$
where $y$ is the dependent variable, $x$ is the independent variable, $\beta_0$ is the y-intercept, and $\beta_1$ is the slope of the line.
The goal of regression analysis is to estimate the values of the coefficients $\beta_0$ and $\beta_1$ that minimize the sum of the squared differences between the observed values of the dependent variable and the values predicted by the regression equation.
Multiple linear regression is an extension of simple linear regression that allows for more than one independent variable. The equation for multiple linear regression is given by:
$$y = \beta_0 + \beta_1x_1 + \beta_2x_2 + \ldots + \beta_nx_n$$
where $y$ is the dependent variable, $x_1, x_2, \ldots, x_n$ are the independent variables, and $\beta_0, \beta_1, \beta_2, \ldots, \beta_n$ are the coefficients.
The coefficients in multiple linear regression represent the change in the dependent variable for a one-unit change in the corresponding independent variable, holding all other independent variables constant.
Suppose we want to predict a student's final exam score based on their study time and the number of practice tests they took. We collect data from 50 students and perform a multiple linear regression analysis.
The regression equation we obtain is:
$$\text{Final Exam Score} = 50 + 5(\text{Study Time}) + 3(\text{Number of Practice Tests})$$
This equation tells us that, on average, for every additional hour of study time, a student's final exam score is expected to increase by 5 points, holding the number of practice tests constant. Similarly, for every additional practice test, a student's final exam score is expected to increase by 3 points, holding study time constant.
## Exercise
1. What is the equation for simple linear regression?
2. What is the equation for multiple linear regression?
3. How do you interpret the coefficients in multiple linear regression?
### Solution
1. The equation for simple linear regression is $y = \beta_0 + \beta_1x$, where $y$ is the dependent variable, $x$ is the independent variable, $\beta_0$ is the y-intercept, and $\beta_1$ is the slope of the line.
2. The equation for multiple linear regression is $y = \beta_0 + \beta_1x_1 + \beta_2x_2 + \ldots + \beta_nx_n$, where $y$ is the dependent variable, $x_1, x_2, \ldots, x_n$ are the independent variables, and $\beta_0, \beta_1, \beta_2, \ldots, \beta_n$ are the coefficients.
3. The coefficients in multiple linear regression represent the change in the dependent variable for a one-unit change in the corresponding independent variable, holding all other independent variables constant.
# 6.1. Simple Linear Regression
Simple linear regression is a statistical method used to model the relationship between two variables: a dependent variable and an independent variable. It assumes that there is a linear relationship between the two variables.
The equation for simple linear regression is:
$$y = \beta_0 + \beta_1x$$
where $y$ is the dependent variable, $x$ is the independent variable, $\beta_0$ is the y-intercept, and $\beta_1$ is the slope of the line.
The goal of simple linear regression is to estimate the values of $\beta_0$ and $\beta_1$ that minimize the sum of the squared differences between the observed values of the dependent variable and the values predicted by the regression equation.
To estimate the values of $\beta_0$ and $\beta_1$, we use a method called least squares estimation. This method calculates the values of $\beta_0$ and $\beta_1$ that minimize the sum of the squared differences between the observed values of the dependent variable and the values predicted by the regression equation.
Once we have estimated the values of $\beta_0$ and $\beta_1$, we can use the regression equation to make predictions. For example, if we want to predict the value of the dependent variable for a given value of the independent variable, we can plug that value into the regression equation.
Suppose we want to model the relationship between the number of hours studied and the score on a math test. We collect data from 20 students and perform a simple linear regression analysis.
The regression equation we obtain is:
$$\text{Test Score} = 60 + 5(\text{Hours Studied})$$
This equation tells us that, on average, for every additional hour studied, a student's test score is expected to increase by 5 points.
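Here is a minimal sketch of least squares estimation for simple linear regression using the closed-form formulas; the hours and scores are hypothetical and will not reproduce the exact coefficients in the example above.

```python
import numpy as np

# Hypothetical data: hours studied and test scores
hours = np.array([1, 2, 3, 4, 5, 6, 7, 8])
scores = np.array([65, 70, 74, 80, 85, 91, 94, 100])

# Closed-form least squares estimates of the slope and intercept
beta1 = np.sum((hours - hours.mean()) * (scores - scores.mean())) / np.sum((hours - hours.mean()) ** 2)
beta0 = scores.mean() - beta1 * hours.mean()

print('Intercept (beta0):', round(beta0, 2))
print('Slope (beta1):', round(beta1, 2))

# Predicted score for a student who studies 5.5 hours
print('Predicted score:', round(beta0 + beta1 * 5.5, 1))
```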
## Exercise
1. What is the equation for simple linear regression?
2. What does the slope of the regression line represent?
3. How can we use the regression equation to make predictions?
### Solution
1. The equation for simple linear regression is $y = \beta_0 + \beta_1x$, where $y$ is the dependent variable, $x$ is the independent variable, $\beta_0$ is the y-intercept, and $\beta_1$ is the slope of the line.
2. The slope of the regression line represents the change in the dependent variable for a one-unit change in the independent variable.
3. We can use the regression equation to make predictions by plugging a value of the independent variable into the equation and solving for the dependent variable.
# 6.2. Multiple Linear Regression
Multiple linear regression is an extension of simple linear regression that allows for the modeling of the relationship between a dependent variable and multiple independent variables. It assumes that there is a linear relationship between the dependent variable and each of the independent variables.
The equation for multiple linear regression is:
$$y = \beta_0 + \beta_1x_1 + \beta_2x_2 + ... + \beta_nx_n$$
where $y$ is the dependent variable, $x_1, x_2, ..., x_n$ are the independent variables, and $\beta_0, \beta_1, \beta_2, ..., \beta_n$ are the coefficients that represent the effect of each independent variable on the dependent variable.
The goal of multiple linear regression is to estimate the values of $\beta_0, \beta_1, \beta_2, ..., \beta_n$ that minimize the sum of the squared differences between the observed values of the dependent variable and the values predicted by the regression equation.
To estimate the values of $\beta_0, \beta_1, \beta_2, ..., \beta_n$, we again use the method of least squares estimation. This method calculates the values of the coefficients that minimize the sum of the squared differences between the observed values of the dependent variable and the values predicted by the regression equation.
Once we have estimated the values of $\beta_0, \beta_1, \beta_2, ..., \beta_n$, we can use the regression equation to make predictions. For example, if we want to predict the value of the dependent variable for a given set of values for the independent variables, we can plug those values into the regression equation.
Suppose we want to model the relationship between a person's height, weight, and age, and their blood pressure. We collect data from 100 individuals and perform a multiple linear regression analysis.
The regression equation we obtain is:
$$\text{Blood Pressure} = 80 + 0.5(\text{Height}) + 0.2(\text{Weight}) - 1(\text{Age})$$
This equation tells us that, on average, for every additional inch in height, a person's blood pressure is expected to increase by 0.5 units. Similarly, for every additional pound in weight, a person's blood pressure is expected to increase by 0.2 units. However, for every additional year in age, a person's blood pressure is expected to decrease by 1 unit.
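Here is a minimal sketch of estimating multiple regression coefficients with ordinary least squares via numpy; the six observations of height, weight, age, and blood pressure are hypothetical and will not reproduce the coefficients in the example above.

```python
import numpy as np

# Hypothetical data: height (in), weight (lb), age (yr), and blood pressure
X = np.array([
    [66, 150, 30],
    [70, 180, 45],
    [64, 140, 50],
    [72, 200, 35],
    [68, 165, 60],
    [69, 175, 40],
])
y = np.array([118, 130, 122, 135, 128, 127])

# Add a column of ones for the intercept and solve the least squares problem
X_design = np.column_stack([np.ones(len(X)), X])
coeffs, residuals, rank, _ = np.linalg.lstsq(X_design, y, rcond=None)

print('Intercept:', round(coeffs[0], 2))
print('Height, weight, age coefficients:', np.round(coeffs[1:], 3))
```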
## Exercise
1. What is the equation for multiple linear regression?
2. What do the coefficients in the regression equation represent?
3. How can we use the regression equation to make predictions?
### Solution
1. The equation for multiple linear regression is $y = \beta_0 + \beta_1x_1 + \beta_2x_2 + ... + \beta_nx_n$, where $y$ is the dependent variable, $x_1, x_2, ..., x_n$ are the independent variables, and $\beta_0, \beta_1, \beta_2, ..., \beta_n$ are the coefficients.
2. The coefficients in the regression equation represent the effect of each independent variable on the dependent variable. They tell us how much the dependent variable is expected to change for a one-unit change in each independent variable, holding all other variables constant.
3. We can use the regression equation to make predictions by plugging values of the independent variables into the equation and solving for the dependent variable.
# 6.3. Assumptions and Model Evaluation
When performing multiple linear regression, there are several assumptions that must be met for the model to be valid. Violation of these assumptions can lead to inaccurate or misleading results.
1. Linearity: The relationship between the dependent variable and each independent variable is linear. This means that the effect of each independent variable on the dependent variable is constant across different values of the independent variable.
2. Independence: The observations are independent of each other. This means that the value of the dependent variable for one observation does not depend on the value of the dependent variable for another observation.
3. Homoscedasticity: The variance of the errors is constant across different values of the independent variables. This means that the spread of the residuals (the differences between the observed values of the dependent variable and the predicted values) is the same across different values of the independent variables.
4. Normality: The errors are normally distributed. This means that the distribution of the residuals follows a bell-shaped curve.
To assess whether these assumptions are met, we can perform various diagnostic tests and evaluate the model's performance.
One common diagnostic test is the residual plot, which plots the residuals against the predicted values of the dependent variable. If the assumptions are met, the residuals should be randomly scattered around zero with no discernible pattern. Any patterns in the residual plot may indicate violations of the assumptions.
Another test is the normality test, which assesses whether the residuals follow a normal distribution. This can be done using statistical tests such as the Shapiro-Wilk test or by visually inspecting a histogram or a Q-Q plot of the residuals.
Let's say we have performed a multiple linear regression analysis to model the relationship between a person's income and their education level, work experience, and age. We have obtained the following regression equation:
$$\text{Income} = 25000 + 5000(\text{Education Level}) + 1000(\text{Work Experience}) - 200(\text{Age})$$
To evaluate the model, we can examine the residual plot. If the assumptions are met, the residuals should be randomly scattered around zero with no discernible pattern. A curved pattern, such as a U-shape, suggests a violation of the linearity assumption, while a funnel shape (residuals that spread out as the fitted values increase) suggests a violation of the homoscedasticity assumption.
We can also perform a normality test on the residuals. If the residuals follow a normal distribution, the p-value of the normality test should be greater than a specified significance level (e.g., 0.05). If the p-value is less than the significance level, it suggests a violation of the normality assumption.
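Here is a minimal sketch of both diagnostics; the fitted values and residuals are simulated stand-ins rather than output from a real model, and the Shapiro-Wilk test comes from scipy.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import shapiro

# Simulated fitted values and residuals standing in for a real regression model
rng = np.random.default_rng(0)
fitted = np.linspace(20000, 80000, 100)
residuals = rng.normal(0, 5000, size=100)

# Residual plot: look for random scatter around zero with no pattern
plt.scatter(fitted, residuals)
plt.axhline(0, color='red', linestyle='--')
plt.xlabel('Fitted values')
plt.ylabel('Residuals')
plt.title('Residual Plot')
plt.show()

# Shapiro-Wilk normality test on the residuals
statistic, p_value = shapiro(residuals)
print('Shapiro-Wilk statistic:', round(statistic, 3))
print('p-value:', round(p_value, 3))
```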
## Exercise
1. What are the assumptions of multiple linear regression?
2. How can we assess whether these assumptions are met?
3. What does a residual plot tell us about the model's performance?
### Solution
1. The assumptions of multiple linear regression are linearity, independence, homoscedasticity, and normality.
2. We can assess whether these assumptions are met by performing diagnostic tests such as residual plots and normality tests.
3. A residual plot tells us about the model's performance by showing the pattern of the residuals. If the residuals are randomly scattered around zero with no discernible pattern, it suggests that the assumptions are met. Any patterns in the residual plot may indicate violations of the assumptions.
# 7. Categorical Data Analysis
Categorical data analysis is a branch of statistics that deals with the analysis of data that can be divided into categories or groups. It is used when the dependent variable is categorical or when the independent variable is categorical and the dependent variable is continuous.
In this section, we will explore various techniques for analyzing categorical data, including the chi-square test, contingency tables, and logistic regression.
# 7.1. Chi-Square Test
The chi-square test is a statistical test used to determine whether there is a significant association between two categorical variables. It compares the observed frequencies in each category with the expected frequencies under the assumption of independence.
The null hypothesis of the chi-square test is that there is no association between the two variables, while the alternative hypothesis is that there is an association.
To perform a chi-square test, we calculate the chi-square statistic and compare it to the critical value from the chi-square distribution. If the calculated chi-square statistic is greater than the critical value, we reject the null hypothesis and conclude that there is a significant association between the variables.
Let's say we want to determine whether there is an association between gender and smoking status. We collect data from a sample of 200 individuals and obtain the following contingency table:
| | Non-Smoker | Smoker |
|-------|------------|--------|
| Male | 80 | 40 |
| Female| 60 | 20 |
To perform a chi-square test, we first calculate the expected frequencies under the assumption of independence. The expected frequency for each cell is calculated by multiplying the row total and column total and dividing by the grand total.
The expected frequencies for the above contingency table are:
| | Non-Smoker | Smoker |
|-------|------------|--------|
| Male | 84 | 36 |
| Female| 56 | 24 |
For example, the expected frequency for male non-smokers is $(120 \times 140)/200 = 84$.
Next, we calculate the chi-square statistic using the formula:
$$\chi^2 = \sum \frac{(O - E)^2}{E}$$
where O is the observed frequency and E is the expected frequency.
For our example, the chi-square statistic is calculated as:
$$\chi^2 = \frac{(80-84)^2}{84} + \frac{(40-36)^2}{36} + \frac{(60-56)^2}{56} + \frac{(20-24)^2}{24} \approx 1.59$$
Finally, we compare the calculated chi-square statistic to the critical value from the chi-square distribution with (r-1)(c-1) degrees of freedom, where r is the number of rows and c is the number of columns. Here there is $(2-1)(2-1) = 1$ degree of freedom, and the critical value at the 0.05 significance level is about 3.84. Since 1.59 is less than 3.84, we fail to reject the null hypothesis: this sample does not provide evidence of a significant association between gender and smoking status.
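In practice, the entire calculation can be done with scipy's `chi2_contingency` function. Here is a minimal sketch using the observed counts from this example; the continuity correction is turned off so the result matches the hand calculation above (about 1.59).

```python
import numpy as np
from scipy.stats import chi2_contingency

# Observed counts: rows are gender (male, female), columns are (non-smoker, smoker)
observed = np.array([[80, 40],
                     [60, 20]])

# correction=False disables Yates' continuity correction for this 2x2 table
statistic, p_value, dof, expected = chi2_contingency(observed, correction=False)
print('Chi-square statistic:', round(statistic, 2))
print('p-value:', round(p_value, 3))
print('Degrees of freedom:', dof)
print('Expected frequencies:\n', expected)
```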
## Exercise
1. What is the null hypothesis of the chi-square test?
2. How do we calculate the expected frequencies in a contingency table?
3. How do we interpret the results of a chi-square test?
### Solution
1. The null hypothesis of the chi-square test is that there is no association between the two categorical variables.
2. We calculate the expected frequencies in a contingency table by multiplying the row total and column total and dividing by the grand total.
3. We interpret the results of a chi-square test by comparing the calculated chi-square statistic to the critical value from the chi-square distribution. If the calculated chi-square statistic is greater than the critical value, we reject the null hypothesis and conclude that there is a significant association between the variables.
# 7.2. Contingency Tables
Contingency tables, also known as cross-tabulations or crosstabs, are used to display the relationship between two or more categorical variables. They provide a way to summarize and analyze data by showing the frequency distribution of each combination of categories.
Contingency tables are especially useful when we want to examine the relationship between two categorical variables and determine if there is an association or dependency between them. They allow us to compare the observed frequencies in each combination of categories with the expected frequencies under the assumption of independence.
To create a contingency table, we list the categories of one variable in the rows and the categories of the other variable in the columns. We then count the number of observations that fall into each combination of categories and fill in the corresponding cells.
For example, let's say we want to examine the relationship between gender and political affiliation. We collect data from a sample of 500 individuals and obtain the following information:
| | Democrat | Republican | Independent |
|-------|----------|------------|-------------|
| Male | 100 | 80 | 40 |
| Female| 120 | 60 | 100 |
In this contingency table, we can see the frequency distribution of each combination of gender and political affiliation. For example, there are 100 males who identify as Democrats, 80 males who identify as Republicans, and so on.
Contingency tables can also be used to calculate percentages or proportions within each category. This allows us to compare the distribution of one variable across the categories of another variable.
Using the same example, let's calculate the percentages of each gender within each political affiliation:
| | Democrat | Republican | Independent |
|-------|----------|------------|-------------|
| Male | 45.5% | 57.1% | 28.6% |
| Female| 54.5% | 42.9% | 71.4% |
These percentages provide additional insights into the relationship between gender and political affiliation. For example, females make up a larger share of Democrats (54.5%) and Independents (71.4%), while males make up a larger share of Republicans (57.1%).
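In Python, both the counts and the percentages can be produced with pandas' `crosstab` function. Here is a minimal sketch; the individual-level records are reconstructed to match the counts in the table above.

```python
import pandas as pd

# Reconstructed individual-level records matching the counts above
df = pd.DataFrame({
    'gender': ['Male'] * 220 + ['Female'] * 280,
    'affiliation': (['Democrat'] * 100 + ['Republican'] * 80 + ['Independent'] * 40 +
                    ['Democrat'] * 120 + ['Republican'] * 60 + ['Independent'] * 100),
})

# Contingency table of counts
counts = pd.crosstab(df['gender'], df['affiliation'])
print(counts)

# Column percentages: the share of each gender within each affiliation
col_pct = pd.crosstab(df['gender'], df['affiliation'], normalize='columns') * 100
print(col_pct.round(1))
```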
## Exercise
1. What are contingency tables used for?
2. How do we create a contingency table?
3. How can we calculate percentages within each category in a contingency table?
### Solution
1. Contingency tables are used to display the relationship between two or more categorical variables and determine if there is an association or dependency between them.
2. To create a contingency table, we list the categories of one variable in the rows and the categories of the other variable in the columns. We then count the number of observations that fall into each combination of categories and fill in the corresponding cells.
3. We can calculate percentages within each category in a contingency table by dividing the frequency in each cell by the total frequency in that category and multiplying by 100. This allows us to compare the distribution of one variable across the categories of another variable.
# 7.3. Logistic Regression
Logistic regression is a statistical method used to model the relationship between a binary dependent variable and one or more independent variables. It is commonly used when the dependent variable is categorical and has only two possible outcomes, such as "yes" or "no", "success" or "failure", or "0" or "1".
The goal of logistic regression is to estimate the probability of the dependent variable belonging to a particular category based on the values of the independent variables. It allows us to assess the impact of the independent variables on the likelihood of the outcome occurring.
In logistic regression, the dependent variable is modeled using a logistic function, which is an S-shaped curve that maps any real-valued number to a value between 0 and 1. This curve represents the probability of the dependent variable being in one of the categories.
The logistic function is defined as:
$$
p(x) = \frac{1}{1 + e^{-z}}
$$
where $p(x)$ is the probability of the dependent variable being in one of the categories, $e$ is the base of the natural logarithm, and $z$ is a linear combination of the independent variables:
$$
z = \beta_0 + \beta_1x_1 + \beta_2x_2 + \ldots + \beta_nx_n
$$
Here, $\beta_0$ is the intercept, and $\beta_1, \beta_2, \ldots, \beta_n$ are the coefficients that represent the effect of each independent variable on the log-odds of the dependent variable.
Let's say we want to predict whether a student will pass or fail an exam based on their study hours and attendance. We collect data from a sample of 100 students; the first eight observations are shown below:
| Study Hours | Attendance | Pass/Fail |
|-------------|------------|-----------|
| 5 | Yes | Pass |
| 3 | No | Fail |
| 4 | Yes | Pass |
| 6 | Yes | Pass |
| 2 | No | Fail |
| 7 | Yes | Pass |
| 4 | No | Fail |
| 5 | Yes | Pass |
To perform logistic regression, we would use the study hours and attendance as independent variables and the pass/fail outcome as the dependent variable. The logistic regression model would estimate the coefficients $\beta_0, \beta_1,$ and $\beta_2$ to predict the probability of passing the exam based on the values of the study hours and attendance.
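Here is a minimal sketch of fitting such a model with scikit-learn (assuming it is installed); attendance is coded as 1 for Yes and 0 for No, and the tiny dataset mirrors the table above. Note that scikit-learn applies L2 regularization by default, so the coefficients are shrunk slightly compared with an unpenalized fit.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Data from the table above; attendance coded as 1 = Yes, 0 = No, outcome as 1 = Pass, 0 = Fail
study_hours = [5, 3, 4, 6, 2, 7, 4, 5]
attendance = [1, 0, 1, 1, 0, 1, 0, 1]
passed = [1, 0, 1, 1, 0, 1, 0, 1]

X = np.column_stack([study_hours, attendance])
y = np.array(passed)

model = LogisticRegression()
model.fit(X, y)

print('Intercept:', model.intercept_)
print('Coefficients (study hours, attendance):', model.coef_)

# Predicted probability of passing for a student with 5 study hours who attended
print('P(pass):', model.predict_proba([[5, 1]])[0, 1])
```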
## Exercise
1. What is the goal of logistic regression?
2. How is the dependent variable modeled in logistic regression?
3. What are the coefficients in logistic regression?
### Solution
1. The goal of logistic regression is to estimate the probability of the dependent variable belonging to a particular category based on the values of the independent variables.
2. The dependent variable is modeled using a logistic function, which is an S-shaped curve that represents the probability of the dependent variable being in one of the categories.
3. The coefficients in logistic regression represent the effect of each independent variable on the log-odds of the dependent variable.
# 8. Time Series Analysis
Time series analysis is a statistical method used to analyze and predict patterns in data that is collected over time. It is commonly used in various fields such as economics, finance, weather forecasting, and stock market analysis.
The goal of time series analysis is to understand the underlying structure and behavior of the data, identify any trends or patterns, and make predictions about future values based on historical data.
In time series analysis, the data is typically represented as a sequence of observations taken at regular intervals. These observations can be measurements of a single variable, such as temperature or stock prices, or multiple variables, such as sales data for different products.
There are several key components of time series data that need to be considered:
- Trend: The long-term movement or direction of the data. It can be increasing, decreasing, or stable.
- Seasonality: The regular and predictable patterns that occur at fixed intervals, such as daily, weekly, or yearly.
- Cyclical variations: The irregular patterns that occur over longer periods of time, usually more than a year.
- Random fluctuations: The unpredictable and random variations that cannot be explained by the trend, seasonality, or cyclical patterns.
Let's consider an example of time series data: the monthly sales of a retail store over a period of two years. Here are the sales figures:
| Month | Sales |
|-------|-------|
| Jan | 100 |
| Feb | 120 |
| Mar | 110 |
| Apr | 130 |
| May | 140 |
| Jun | 160 |
| Jul | 180 |
| Aug | 200 |
| Sep | 190 |
| Oct | 210 |
| Nov | 220 |
| Dec | 240 |
| Jan | 110 |
| Feb | 130 |
| Mar | 120 |
| Apr | 140 |
| May | 150 |
| Jun | 170 |
| Jul | 190 |
| Aug | 210 |
| Sep | 200 |
| Oct | 220 |
| Nov | 230 |
| Dec | 250 |
By analyzing this data, we can identify the following components:
- Trend: Comparing the same month across the two years, sales are consistently about 10 units higher in the second year, indicating a gradual increasing trend.
- Seasonality: Within each year there is a repeating pattern in which sales build from a low in January to a peak in December, and the same shape recurs in both years.
- Random fluctuations: The small month-to-month dips (for example in March and September) sit on top of the trend and seasonal pattern; in real sales data these irregular movements would typically be noisier than in this constructed example.

The decomposition sketch below shows one way to separate these components programmatically.
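This is a minimal sketch using statsmodels' `seasonal_decompose` function (assuming statsmodels is installed); the period of 12 reflects the yearly cycle in the monthly data, and the sales values are copied from the table above.

```python
import pandas as pd
from statsmodels.tsa.seasonal import seasonal_decompose

# Two years of monthly sales from the example above
sales = [100, 120, 110, 130, 140, 160, 180, 200, 190, 210, 220, 240,
         110, 130, 120, 140, 150, 170, 190, 210, 200, 220, 230, 250]
series = pd.Series(sales)

# Additive decomposition with a yearly (12-month) seasonal period
result = seasonal_decompose(series, model='additive', period=12)
print(result.trend)     # estimated trend component (NaN at the edges)
print(result.seasonal)  # estimated seasonal component
print(result.resid)     # remainder / irregular component
```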
## Exercise
1. What is the goal of time series analysis?
2. What are the key components of time series data?
3. Can you identify the trend, seasonality, and random fluctuations in the example sales data?
### Solution
1. The goal of time series analysis is to understand the underlying structure and behavior of the data, identify any trends or patterns, and make predictions about future values based on historical data.
2. The key components of time series data are trend, seasonality, cyclical variations, and random fluctuations.
3. In the example sales data, there is a gradual year-over-year increase (trend), a repeating within-year pattern in which sales build from a January low to a December peak (seasonality), and small month-to-month dips around that pattern (fluctuations).
# 8.1. Trends and Seasonality
In time series analysis, trends and seasonality are two important components that help us understand the patterns and behavior of the data.
A trend refers to the long-term movement or direction of the data. It can be increasing, decreasing, or stable. Trends can provide valuable insights into the overall behavior of the data and help us make predictions about future values.
Seasonality, on the other hand, refers to the regular and predictable patterns that occur at fixed intervals, such as daily, weekly, or yearly. These patterns can be influenced by various factors, such as weather, holidays, or economic cycles. Seasonality is often observed in data that exhibits repetitive patterns over time.
To identify trends and seasonality in time series data, we can use various techniques, such as visual inspection, statistical tests, or mathematical models.
Visual inspection involves plotting the data over time and looking for any noticeable patterns or trends. This can be done using line plots, scatter plots, or other types of graphs. By visually examining the data, we can get a sense of whether there is a trend or seasonality present.
Statistical tests, such as the Mann-Kendall test or Sen's slope estimator, can also be used to detect trends in time series data. These tests analyze the statistical significance of the observed trends and provide quantitative measures of the trend strength.
Mathematical models, such as exponential smoothing or autoregressive integrated moving average (ARIMA), can be used to estimate and forecast trends and seasonality in time series data. These models take into account the historical patterns and relationships in the data to make predictions about future values.
Let's continue with the example of monthly sales data for a retail store. We can plot the sales figures over time to visually inspect for any trends or seasonality.
```python
import matplotlib.pyplot as plt

# Two years of monthly sales (24 observations), matching the table above
sales = [100, 120, 110, 130, 140, 160, 180, 200, 190, 210, 220, 240,
         110, 130, 120, 140, 150, 170, 190, 210, 200, 220, 230, 250]
months = range(1, len(sales) + 1)  # month number, 1-24

plt.plot(months, sales)
plt.xlabel('Month (1 = January of year 1)')
plt.ylabel('Sales')
plt.title('Monthly Sales Data')
plt.show()
```
By plotting the sales data, we can observe a gradual increase from one year to the next, as well as a repeating within-year pattern in which sales build from a January low to a December peak. This indicates the presence of both trend and seasonality in the data.
## Exercise
1. What is the difference between trend and seasonality in time series data?
2. How can you visually inspect for trends and seasonality in time series data?
3. Based on the example sales data, do you observe any trend or seasonality?
### Solution
1. A trend refers to the long-term movement or direction of the data, while seasonality refers to the regular and predictable patterns that occur at fixed intervals.
2. Visual inspection involves plotting the data over time and looking for noticeable patterns or trends. By visually examining the data, we can get a sense of whether there is a trend or seasonality present.
3. Yes, based on the example sales data, we observe a gradual year-over-year increase and a repeating within-year pattern in which sales build from a January low to a December peak. This indicates the presence of both trend and seasonality in the data.
# 8.2. Autocorrelation and Moving Averages
Autocorrelation is another important concept in time series analysis. It measures the correlation between a time series and a lagged version of itself. In other words, it quantifies the relationship between a data point and its previous values.
Autocorrelation can help us identify patterns and dependencies in the data. A positive autocorrelation indicates that high values are followed by high values and low values are followed by low values. A negative autocorrelation indicates an inverse relationship, where high values are followed by low values and vice versa. A zero autocorrelation indicates no relationship between the data points.
Moving averages are commonly used to smooth out the fluctuations in time series data and identify underlying trends. A moving average calculates the average of a fixed number of consecutive data points. It helps to reduce the noise and highlight the overall pattern in the data.
To calculate the autocorrelation of a time series, we can use the autocorrelation function (ACF). The ACF measures the correlation between a data point and its lagged values at different time intervals. It produces a plot called the autocorrelation plot, which shows the correlation coefficients at different lags.
To calculate the moving average of a time series, we can use the rolling mean function. The rolling mean calculates the average of a fixed window of consecutive data points. The size of the window determines the level of smoothing. A larger window size results in a smoother curve, while a smaller window size captures more detail and fluctuations.
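For example, here is a minimal sketch of a moving average computed with pandas' rolling mean; the window size of 3 is an arbitrary choice and the values are hypothetical.

```python
import pandas as pd

sales = pd.Series([100, 120, 110, 130, 140, 160, 180, 200, 190, 210, 220, 240])

# 3-month moving average; the first two values are NaN because the window is incomplete
moving_avg = sales.rolling(window=3).mean()
print(moving_avg)
```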
Let's continue with the example of monthly sales data for a retail store. We can calculate the autocorrelation and plot the autocorrelation plot to identify any significant lagged relationships.
```python
import pandas as pd
import matplotlib.pyplot as plt

months = ['Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun', 'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec']
sales = [100, 120, 110, 130, 140, 160, 180, 200, 190, 210, 220, 240,
         110, 130, 120, 140, 150, 170, 190, 210, 200, 220, 230, 250]

# The data covers two years, so the month labels are repeated to match the 24 observations
df = pd.DataFrame({'Sales': sales}, index=months * 2)

df['Sales'].plot()
plt.xlabel('Month')
plt.ylabel('Sales')
plt.title('Monthly Sales Data')
plt.show()

pd.plotting.autocorrelation_plot(df['Sales'])
plt.xlabel('Lag')
plt.ylabel('Autocorrelation')
plt.title('Autocorrelation Plot')
plt.show()
```
By plotting the sales data, we can observe an increasing trend over time. The autocorrelation plot shows a significant positive autocorrelation at lag 1, indicating that high sales in one month are followed by high sales in the next month.
## Exercise
1. What does autocorrelation measure in time series data?
2. How can you calculate the autocorrelation of a time series?
3. What does a positive autocorrelation at lag 1 indicate?
4. Based on the example sales data, do you observe any significant lagged relationships?
### Solution
1. Autocorrelation measures the correlation between a time series and a lagged version of itself.
2. Autocorrelation can be calculated using the autocorrelation function (ACF), which measures the correlation between a data point and its lagged values at different time intervals.
3. A positive autocorrelation at lag 1 indicates that high sales in one month are followed by high sales in the next month.
4. Yes, based on the example sales data, we observe a significant positive autocorrelation at lag 1, indicating a lagged relationship between consecutive months' sales.
# 8.3. Forecasting Methods
Forecasting is an important aspect of time series analysis. It involves predicting future values of a time series based on its historical patterns and trends. Forecasting can be useful in various fields, such as finance, economics, and sales, to make informed decisions and plan for the future.
There are several methods for forecasting time series data. In this section, we will discuss two commonly used methods: the moving average method and the exponential smoothing method.
The moving average method calculates the average of a fixed number of consecutive data points and uses it as the forecast for the next time period. The size of the moving average window determines the level of smoothing. A larger window size results in a smoother forecast, while a smaller window size captures more detail and fluctuations.
The exponential smoothing method, on the other hand, assigns weights to the historical data points based on their recency. The weights decrease exponentially as the data points become older. The forecast is then calculated as a weighted average of the historical data points, with more weight given to the recent data.
Let's continue with the example of monthly sales data for a retail store. We can use the moving average method and the exponential smoothing method to forecast the sales for the next few months.
```python
import pandas as pd
from statsmodels.tsa.api import SimpleExpSmoothing

months = ['Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun', 'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec']
sales = [100, 120, 110, 130, 140, 160, 180, 200, 190, 210, 220, 240,
         110, 130, 120, 140, 150, 170, 190, 210, 200, 220, 230, 250]

# The data covers two years, so the month labels are repeated to match the 24 observations
df = pd.DataFrame({'Month': months * 2, 'Sales': sales})

# Moving Average Method
window_size = 3
df['Moving Average'] = df['Sales'].rolling(window=window_size).mean()

# Exponential Smoothing Method (alpha fixed at 0.2 rather than estimated from the data)
model = SimpleExpSmoothing(df['Sales'])
model_fit = model.fit(smoothing_level=0.2, optimized=False)
df['Exponential Smoothing'] = model_fit.fittedvalues

print(df)
```
The output will show the original sales data alongside the smoothed values from the moving average method and the exponential smoothing method; in each case, the smoothed value for a given month serves as a simple forecast for the next period.
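To project beyond the observed data, the fitted exponential smoothing model can generate forecasts directly. This minimal sketch continues from the variables defined in the code above.

```python
# Forecast the next 3 periods with the fitted exponential smoothing model
print(model_fit.forecast(3))

# A simple moving-average forecast for the next period is the mean of the last window
print(df['Sales'].tail(window_size).mean())
```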
## Exercise
1. What is the moving average method used for in time series forecasting?
2. How does the size of the moving average window affect the forecast?
3. What is the exponential smoothing method used for in time series forecasting?
4. How does the exponential smoothing method assign weights to the historical data points?
### Solution
1. The moving average method is used to smooth out the fluctuations in time series data and make forecasts based on the average of a fixed number of consecutive data points.
2. The size of the moving average window determines the level of smoothing. A larger window size results in a smoother forecast, while a smaller window size captures more detail and fluctuations.
3. The exponential smoothing method is used to make forecasts based on a weighted average of the historical data points, with more weight given to the recent data.
4. The exponential smoothing method assigns weights to the historical data points based on their recency. The weights decrease exponentially as the data points become older.
# 9. Nonparametric Methods
Nonparametric methods are statistical techniques that do not make assumptions about the underlying probability distribution of the data. These methods are often used when the data does not meet the assumptions of parametric methods, such as normality or equal variances.
In this section, we will discuss three commonly used nonparametric methods: the Wilcoxon rank-sum test, the Kruskal-Wallis test, and the Mann-Whitney U test.
The Wilcoxon rank-sum test, also known as the Mann-Whitney U test, is used to compare the distributions of two independent samples. It tests the null hypothesis that the two samples come from the same population, without assuming any specific distribution. The test is based on the ranks of the observations in the combined sample.
The Kruskal-Wallis test is an extension of the Wilcoxon rank-sum test and is used to compare the distributions of three or more independent samples. It tests the null hypothesis that all the samples come from the same population, without assuming any specific distribution. The test is based on the ranks of the observations in the combined sample.
Let's consider an example to illustrate the use of nonparametric methods. Suppose we want to compare the effectiveness of three different pain relief treatments: Treatment A, Treatment B, and Treatment C. We collect pain relief scores from a sample of patients who received each treatment.
```python
import pandas as pd
from scipy.stats import ranksums, kruskal
treatment_a = [5, 7, 6, 8, 9]
treatment_b = [4, 6, 5, 7, 8]
treatment_c = [3, 5, 4, 6, 7]
df = pd.DataFrame({'Treatment A': treatment_a, 'Treatment B': treatment_b, 'Treatment C': treatment_c})
# Wilcoxon Rank-Sum Test
statistic, p_value = ranksums(treatment_a, treatment_b)
print('Wilcoxon Rank-Sum Test:')
print('Statistic:', statistic)
print('p-value:', p_value)
# Kruskal-Wallis Test
statistic, p_value = kruskal(treatment_a, treatment_b, treatment_c)
print('Kruskal-Wallis Test:')
print('Statistic:', statistic)
print('p-value:', p_value)
```
The output will show the results of the Wilcoxon rank-sum test and the Kruskal-Wallis test, including the test statistic and the p-value.
## Exercise
1. What is the Wilcoxon rank-sum test used for?
2. What is the Kruskal-Wallis test used for?
3. What are the null hypotheses tested by the Wilcoxon rank-sum test and the Kruskal-Wallis test?
4. What is the difference between the Wilcoxon rank-sum test and the Mann-Whitney U test?
### Solution
1. The Wilcoxon rank-sum test is used to compare the distributions of two independent samples, without assuming any specific distribution.
2. The Kruskal-Wallis test is used to compare the distributions of three or more independent samples, without assuming any specific distribution.
3. The null hypothesis of the Wilcoxon rank-sum test is that the two samples come from the same population. The null hypothesis of the Kruskal-Wallis test is that all the samples come from the same population.
4. The Wilcoxon rank-sum test and the Mann-Whitney U test are different names for the same test. They are used interchangeably to compare the distributions of two independent samples.
# 9.1. Wilcoxon Rank-Sum Test
The Wilcoxon rank-sum test, also known as the Mann-Whitney U test, is a nonparametric test used to compare the distributions of two independent samples. It is often used when the data does not meet the assumptions of parametric tests, such as normality or equal variances.
The test works by assigning ranks to the observations in the combined sample, without assuming any specific distribution. It then calculates the sum of ranks for each group and compares them to determine if there is a significant difference between the two samples.
The null hypothesis of the Wilcoxon rank-sum test is that the two samples come from the same population. If the p-value associated with the test is below a predetermined significance level (e.g., 0.05), we reject the null hypothesis and conclude that there is a significant difference between the two samples.
Let's consider an example to illustrate the use of the Wilcoxon rank-sum test. Suppose we want to compare the heights of male and female students in a school. We collect the heights of a sample of male students and a sample of female students.
```python
import pandas as pd
from scipy.stats import ranksums
male_heights = [170, 175, 180, 185, 190]
female_heights = [160, 165, 170, 175, 180]
df = pd.DataFrame({'Male Heights': male_heights, 'Female Heights': female_heights})
# Wilcoxon Rank-Sum Test
statistic, p_value = ranksums(male_heights, female_heights)
print('Wilcoxon Rank-Sum Test:')
print('Statistic:', statistic)
print('p-value:', p_value)
```
The output will show the results of the Wilcoxon rank-sum test, including the test statistic and the p-value.
## Exercise
1. When should you use the Wilcoxon rank-sum test?
2. What is the null hypothesis of the Wilcoxon rank-sum test?
3. What does it mean if the p-value associated with the test is below the significance level?
### Solution
1. You should use the Wilcoxon rank-sum test when you want to compare the distributions of two independent samples and the data does not meet the assumptions of parametric tests.
2. The null hypothesis of the Wilcoxon rank-sum test is that the two samples come from the same population.
3. If the p-value associated with the test is below the significance level, it means that there is a significant difference between the two samples.
# 9.2. Kruskal-Wallis Test
The Kruskal-Wallis test is a nonparametric test used to compare the distributions of three or more independent samples. It is an extension of the Wilcoxon rank-sum test and is often used when the data does not meet the assumptions of parametric tests, such as normality or equal variances.
The test works by assigning ranks to the observations in the combined sample, without assuming any specific distribution. It then calculates the sum of ranks for each group and compares them to determine if there is a significant difference between the samples.
The null hypothesis of the Kruskal-Wallis test is that the samples come from the same population. If the p-value associated with the test is below a predetermined significance level (e.g., 0.05), we reject the null hypothesis and conclude that there is a significant difference among the samples.
Let's consider an example to illustrate the use of the Kruskal-Wallis test. Suppose we want to compare the test scores of students from three different schools. We collect the test scores of a sample of students from each school.
```python
import pandas as pd
from scipy.stats import kruskal
school1_scores = [80, 85, 90, 95, 100]
school2_scores = [70, 75, 80, 85, 90]
school3_scores = [60, 65, 70, 75, 80]
df = pd.DataFrame({'School 1 Scores': school1_scores, 'School 2 Scores': school2_scores, 'School 3 Scores': school3_scores})
# Kruskal-Wallis Test
statistic, p_value = kruskal(school1_scores, school2_scores, school3_scores)
print('Kruskal-Wallis Test:')
print('Statistic:', statistic)
print('p-value:', p_value)
```
The output will show the results of the Kruskal-Wallis test, including the test statistic and the p-value.
## Exercise
1. When should you use the Kruskal-Wallis test?
2. What is the null hypothesis of the Kruskal-Wallis test?
3. What does it mean if the p-value associated with the test is below the significance level?
### Solution
1. You should use the Kruskal-Wallis test when you want to compare the distributions of three or more independent samples and the data does not meet the assumptions of parametric tests.
2. The null hypothesis of the Kruskal-Wallis test is that the samples come from the same population.
3. If the p-value associated with the test is below the significance level, it means that there is a significant difference among the samples.
# 9.3. Mann-Whitney U Test
The Mann-Whitney U test, also known as the Wilcoxon rank-sum test, is a nonparametric test used to compare the distributions of two independent samples. It is often used when the data does not meet the assumptions of parametric tests, such as normality or equal variances.
The test works by assigning ranks to the observations in the combined sample, without assuming any specific distribution. It then calculates the sum of ranks for each group and compares them to determine if there is a significant difference between the samples.
The null hypothesis of the Mann-Whitney U test is that the samples come from the same population. If the p-value associated with the test is below a predetermined significance level (e.g., 0.05), we reject the null hypothesis and conclude that there is a significant difference between the samples.
Let's consider an example to illustrate the use of the Mann-Whitney U test. Suppose we want to compare the test scores of students from two different schools. We collect the test scores of a sample of students from each school.
```python
import pandas as pd
from scipy.stats import mannwhitneyu
school1_scores = [80, 85, 90, 95, 100]
school2_scores = [70, 75, 80, 85, 90]
df = pd.DataFrame({'School 1 Scores': school1_scores, 'School 2 Scores': school2_scores})
# Mann-Whitney U Test
statistic, p_value = mannwhitneyu(school1_scores, school2_scores)
print('Mann-Whitney U Test:')
print('Statistic:', statistic)
print('p-value:', p_value)
```
The output will show the results of the Mann-Whitney U test, including the test statistic and the p-value.
## Exercise
1. When should you use the Mann-Whitney U test?
2. What is the null hypothesis of the Mann-Whitney U test?
3. What does it mean if the p-value associated with the test is below the significance level?
### Solution
1. You should use the Mann-Whitney U test when you want to compare the distributions of two independent samples and the data does not meet the assumptions of parametric tests.
2. The null hypothesis of the Mann-Whitney U test is that the samples come from the same population.
3. If the p-value associated with the test is below the significance level, it means that there is a significant difference between the samples.
# 10. Data Visualization and Communication
Data visualization is an essential tool for communicating insights and findings from data analysis. It allows us to present complex information in a clear and concise manner, making it easier for others to understand and interpret the data.
# 10.1. Principles of Effective Data Visualization
Effective data visualization follows certain principles that help to convey the intended message accurately. These principles include:
1. Simplify: Remove unnecessary clutter and focus on the most important information. Use clear and concise labels, titles, and legends.
2. Use appropriate graphs: Choose the right type of graph that best represents the data and the relationships you want to show. For example, use bar charts for comparing categories, line charts for showing trends over time, and scatter plots for displaying relationships between variables.
3. Highlight key findings: Use color, size, or annotations to draw attention to important findings or patterns in the data. This helps the audience quickly grasp the main message.
4. Provide context: Include relevant context and background information to help the audience understand the data and its implications. This can be done through captions, annotations, or additional text.
5. Use consistent scales and axes: Ensure that the scales and axes in your visualizations are consistent and clearly labeled. This helps the audience accurately interpret the data and make comparisons.
Let's say you have conducted a survey to understand customer satisfaction with a new product. You have collected data on various aspects of the product and want to present the findings to your team. Here's an example of how you can visualize the data:
```python
import matplotlib.pyplot as plt
# Data
categories = ['Price', 'Quality', 'Ease of Use', 'Customer Service']
scores = [4.2, 4.5, 3.8, 4.0]
# Bar chart
plt.bar(categories, scores)
plt.xlabel('Aspects')
plt.ylabel('Average Score')
plt.title('Customer Satisfaction with New Product')
plt.show()
```
In this example, we use a bar chart to compare the average scores for different aspects of the product. The x-axis represents the aspects, and the y-axis represents the average score. The chart provides a clear visual representation of the customer satisfaction levels for each aspect.
## Exercise
1. What are some principles of effective data visualization?
2. Why is it important to choose the right type of graph for your data?
3. How can you highlight key findings in a data visualization?
### Solution
1. Some principles of effective data visualization include simplifying the visual, using appropriate graphs, highlighting key findings, providing context, and using consistent scales and axes.
2. It is important to choose the right type of graph for your data because different types of graphs are suited for different types of data and relationships. Using the right graph ensures that the data is accurately represented and the intended message is conveyed.
3. Key findings can be highlighted in a data visualization using color, size, or annotations. By drawing attention to important findings, the audience can quickly grasp the main message of the visualization.
# 10.2. Choosing the Right Graph for the Data
Choosing the right type of graph for your data is essential for effectively communicating your message. Different types of graphs are suited for different types of data and relationships. Here are some common types of graphs and when to use them:
1. Bar chart: Use a bar chart to compare categories or show discrete data. Each category is represented by a bar, and the height of the bar represents the value.
2. Line chart: Use a line chart to show trends over time or continuous data. Each data point is connected by a line, allowing you to see the overall pattern.
3. Scatter plot: Use a scatter plot to display the relationship between two continuous variables. Each data point is represented by a dot, and the position of the dot shows the values of the variables.
4. Pie chart: Use a pie chart to show the proportion of different categories in a whole. Each category is represented by a slice of the pie, and the size of the slice represents the proportion.
5. Histogram: Use a histogram to show the distribution of continuous data. The data is divided into bins, and the height of each bar represents the frequency or proportion of data in that bin.
6. Box plot: Use a box plot to show the distribution of continuous data and identify outliers. The box represents the interquartile range, the line inside the box represents the median, and the whiskers show the range of the data.
Let's say you have collected data on the sales of different products over a year. You want to visualize the sales trends for each product. Here's an example of how you can choose the right graph:
```python
import matplotlib.pyplot as plt
# Data
months = ['Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun']
product1_sales = [100, 120, 150, 130, 140, 160]
product2_sales = [80, 90, 100, 110, 120, 130]
# Line chart
plt.plot(months, product1_sales, label='Product 1')
plt.plot(months, product2_sales, label='Product 2')
plt.xlabel('Months')
plt.ylabel('Sales')
plt.title('Sales Trends')
plt.legend()
plt.show()
```
In this example, we use a line chart to show the sales trends for two products over the months. Each product is represented by a line, and the x-axis represents the months while the y-axis represents the sales.
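A line chart suits trends over time; to show the distribution of a single continuous variable, a histogram or box plot is the better choice. Here is a minimal sketch using made-up exam scores.

```python
import matplotlib.pyplot as plt

# Made-up exam scores for illustrating distribution plots
scores = [55, 62, 68, 70, 72, 73, 75, 76, 78, 80, 81, 83, 85, 88, 90, 95]

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))

# Histogram: how the scores are distributed across bins
ax1.hist(scores, bins=5)
ax1.set_xlabel('Score')
ax1.set_ylabel('Frequency')
ax1.set_title('Histogram of Exam Scores')

# Box plot: median, interquartile range, and potential outliers
ax2.boxplot(scores)
ax2.set_ylabel('Score')
ax2.set_title('Box Plot of Exam Scores')

plt.tight_layout()
plt.show()
```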
## Exercise
1. When should you use a bar chart?
2. What is a scatter plot used for?
3. How can you show the distribution of continuous data using a graph?
### Solution
1. A bar chart should be used to compare categories or show discrete data.
2. A scatter plot is used to display the relationship between two continuous variables.
3. The distribution of continuous data can be shown using a histogram or a box plot.
# 10.3. Presenting and Communicating Results
Presenting and communicating your data analysis results is an important step in the data analysis process. It allows you to effectively convey your findings and insights to your audience. Here are some tips for presenting and communicating your results:
1. Know your audience: Understand who your audience is and tailor your presentation to their level of knowledge and expertise. Use language and visuals that are appropriate for your audience.
2. Keep it clear and concise: Present your results in a clear and concise manner. Avoid jargon and unnecessary technical details. Focus on the key findings and insights that are relevant to your audience.
3. Use visuals: Visuals such as graphs, charts, and tables can help convey your results more effectively. Choose the right type of visual for your data and use it to highlight key points. Make sure your visuals are easy to understand and interpret.
4. Provide context: Provide context for your results by explaining the background and objectives of your analysis. Help your audience understand why your findings are important and how they relate to the broader context.
5. Tell a story: Structure your presentation in a way that tells a story. Start with an introduction to set the stage, present your findings and insights, and conclude with a summary and key takeaways. Use a narrative arc to engage your audience and keep them interested.
6. Be prepared for questions: Anticipate questions that your audience may have and be prepared to answer them. Make sure you understand your analysis and results thoroughly so that you can provide clear and confident responses.
Let's say you conducted a survey to analyze customer satisfaction with a new product. After analyzing the data, you found that 80% of customers were satisfied with the product. Here's an example of how you can present and communicate this result:
```
Title: Customer Satisfaction Survey Results
Introduction:
- Briefly explain the purpose of the survey and the importance of customer satisfaction.
Key Finding:
- 80% of customers surveyed reported being satisfied with the new product.
Visual:
- Include a bar chart or pie chart showing the distribution of customer satisfaction ratings.
Explanation:
- Provide additional details about the survey methodology and sample size.
- Explain any limitations or biases in the survey data.
Implications:
- Discuss the implications of the high customer satisfaction rating for the product and the company.
- Highlight the potential impact on customer loyalty and future sales.
Conclusion:
- Summarize the key finding and its implications.
- Encourage further discussion and questions from the audience.
```
By following this structure, you can effectively present and communicate your result to your audience, providing them with a clear understanding of the customer satisfaction level with the new product.
## Exercise
Imagine you conducted a study to compare the effectiveness of two different advertising campaigns. After analyzing the data, you found that Campaign A resulted in a 10% increase in sales, while Campaign B resulted in a 5% increase in sales. How would you present and communicate these results to your audience?
### Solution
Title: Advertising Campaign Effectiveness Study Results
Introduction:
- Explain the objective of the study and the importance of measuring the effectiveness of advertising campaigns.
Key Findings:
- Campaign A resulted in a 10% increase in sales.
- Campaign B resulted in a 5% increase in sales.
Visual:
- Include a bar chart or line chart showing the sales increase for each campaign.
Explanation:
- Provide details about the study design, sample size, and statistical analysis used.
- Explain any potential biases or limitations in the data.
Implications:
- Discuss the implications of the higher sales increase for Campaign A compared to Campaign B.
- Highlight the potential impact on the company's marketing strategy and future advertising campaigns.
Conclusion:
- Summarize the key findings and their implications.
- Encourage further discussion and questions from the audience.
By presenting and communicating the results in this way, you can effectively convey the differences in the effectiveness of the two advertising campaigns to your audience.
# 11. Conclusion and Next Steps
Congratulations! You have completed this textbook on ModernDive. Throughout this course, you have learned a wide range of topics in depth, from summarizing data with descriptive statistics to presenting and communicating the results of a full analysis.
You have engaged with challenging material, using specific and practical examples to deepen your understanding. You have explored concepts like measures of central tendency, hypothesis testing, regression analysis, and data visualization.
Now that you have completed this textbook, you may be wondering what's next. The field of data analysis and statistics is vast, and there is always more to learn. Here are some next steps you can take:
1. Advanced Topics in Data Analysis: Dive deeper into specific areas of data analysis that interest you. Explore topics like machine learning, time series analysis, or spatial analysis. There are many resources available, including books, online courses, and tutorials.
2. Applications of Data Analysis in Different Fields: Apply your data analysis skills to real-world problems in various fields. Whether it's healthcare, finance, marketing, or social sciences, data analysis is a valuable tool for making informed decisions and solving complex problems.
3. Resources for Further Learning: Expand your knowledge by exploring additional resources. There are many books, websites, and online communities dedicated to data analysis and statistics. Join forums, attend webinars, and participate in data analysis challenges to continue learning and growing.
Remember, learning is a lifelong journey. The more you practice and apply your skills, the more confident and proficient you will become. Keep exploring, asking questions, and seeking opportunities to apply your knowledge. Good luck on your data analysis journey!
# 11.1. Advanced Topics in Data Analysis
Now that you have a solid foundation in data analysis, you may be interested in exploring more advanced topics. Here are a few areas you can delve into:
1. Machine Learning: Machine learning is a branch of artificial intelligence that focuses on developing algorithms and statistical models that enable computers to learn and make predictions or decisions without being explicitly programmed. It is widely used in various fields, including finance, healthcare, marketing, and more. Dive into the world of machine learning to understand concepts like supervised and unsupervised learning, regression, classification, and clustering.
2. Time Series Analysis: Time series analysis is a statistical technique that deals with data that is collected over time. It is used to identify patterns, trends, and seasonality in data and make predictions about future values. Explore methods like autocorrelation, moving averages, and forecasting to analyze and interpret time series data.
3. Spatial Analysis: Spatial analysis involves analyzing and interpreting data that is tied to specific geographic locations. It is used in fields like urban planning, environmental science, and epidemiology. Learn about spatial data structures, spatial statistics, and geographic information systems (GIS) to analyze and visualize spatial data effectively.
4. Big Data Analytics: With the exponential growth of data in today's world, analyzing and extracting insights from large datasets has become crucial. Big data analytics involves processing, analyzing, and interpreting massive volumes of data to uncover patterns, trends, and insights. Explore tools like Hadoop, Spark, and NoSQL databases to handle big data efficiently.
These are just a few examples of advanced topics in data analysis. The field is constantly evolving, and there are always new techniques and tools to explore. Keep learning, experimenting, and applying your knowledge to stay at the forefront of data analysis.
# 11.2. Applications of Data Analysis in Different Fields
Data analysis has applications in a wide range of fields, helping professionals make informed decisions and solve complex problems. Here are a few examples of how data analysis is used in different industries:
1. Healthcare: Data analysis plays a crucial role in healthcare, from analyzing patient data to predicting disease outbreaks. It helps healthcare professionals identify patterns, assess treatment effectiveness, and improve patient outcomes. Data analysis techniques like regression analysis, survival analysis, and machine learning are used to extract insights from medical data.
2. Finance: The finance industry relies heavily on data analysis for risk assessment, investment strategies, and fraud detection. Financial analysts use statistical models, time series analysis, and machine learning algorithms to analyze market trends, predict stock prices, and make informed investment decisions.
3. Marketing: Data analysis is essential for understanding consumer behavior, optimizing marketing campaigns, and measuring the effectiveness of marketing strategies. Marketers use techniques like customer segmentation, A/B testing, and predictive modeling to target the right audience, personalize marketing messages, and drive business growth.
4. Social Sciences: Data analysis is widely used in social sciences to study human behavior, demographics, and societal trends. Researchers analyze survey data, conduct experiments, and use statistical methods to draw conclusions and make evidence-based recommendations.
These are just a few examples, and data analysis is applicable in many other fields, including education, environmental science, sports analytics, and more. As a data analyst, you have the opportunity to apply your skills and make a meaningful impact in various industries.
# 11.3. Resources for Further Learning
As you continue your data analysis journey, it's important to have access to resources that can support your learning and help you stay updated with the latest trends and techniques. Here are some resources you can explore:
1. Books: There are many excellent books on data analysis and statistics that can deepen your understanding and provide practical insights. Some popular titles include "The Elements of Statistical Learning" by Trevor Hastie, Robert Tibshirani, and Jerome Friedman, "Python for Data Analysis" by Wes McKinney, and "Data Science for Business" by Foster Provost and Tom Fawcett.
2. Online Courses: Online learning platforms like Coursera, edX, and Udemy offer a wide range of data analysis courses taught by experts in the field. These courses provide structured learning experiences and often include hands-on projects to apply your skills.
3. Data Visualization Tools: Tools like Tableau, Power BI, and Python libraries like Matplotlib and Seaborn can help you create compelling visualizations and communicate your findings effectively. Explore tutorials and documentation to learn how to use these tools efficiently.
4. Online Communities: Joining online communities and forums dedicated to data analysis and statistics can provide opportunities for networking, collaboration, and learning from others. Platforms like Kaggle, Stack Overflow, and Reddit have active communities where you can ask questions, share insights, and participate in data analysis challenges.
5. Webinars and Conferences: Stay updated with the latest trends and advancements in data analysis by attending webinars and conferences. Many organizations and industry leaders host webinars on specific topics, and conferences provide opportunities to learn from experts, network with peers, and discover new tools and techniques.
Remember, learning is a continuous process, and staying curious and open to new ideas is key. Embrace challenges, seek out opportunities to apply your skills, and never stop exploring. Good luck on your data analysis journey! | Textbooks |
Heteroskedasticity-consistent standard errors
The topic of heteroskedasticity-consistent (HC) standard errors arises in statistics and econometrics in the context of linear regression and time series analysis. These are also known as heteroskedasticity-robust standard errors (or simply robust standard errors), or as Eicker–Huber–White standard errors (also Huber–White standard errors or White standard errors),[1] names that recognize the contributions of Friedhelm Eicker,[2] Peter J. Huber,[3] and Halbert White.[4]
In regression and time-series modelling, basic forms of models make use of the assumption that the errors or disturbances ui have the same variance across all observation points. When this is not the case, the errors are said to be heteroskedastic, or to have heteroskedasticity, and this behaviour will be reflected in the residuals ${\widehat {u}}_{i}$ estimated from a fitted model. Heteroskedasticity-consistent standard errors are used to allow the fitting of a model that does contain heteroskedastic residuals. The first such approach was proposed by Huber (1967), and further improved procedures have been produced since for cross-sectional data, time-series data and GARCH estimation.
Heteroskedasticity-consistent standard errors that differ from classical standard errors may indicate model misspecification. Substituting heteroskedasticity-consistent standard errors does not resolve this misspecification, which may lead to bias in the coefficients. In most situations, the problem should be found and fixed.[5] Other types of standard error adjustments, such as clustered standard errors or HAC standard errors, may be considered as extensions to HC standard errors.
History
Heteroskedasticity-consistent standard errors are introduced by Friedhelm Eicker,[6][7] and popularized in econometrics by Halbert White.
Problem
Consider the linear regression model for the scalar $y$.
$y=\mathbf {x} ^{\top }{\boldsymbol {\beta }}+\varepsilon ,\,$
where $\mathbf {x} $ is a k × 1 column vector of explanatory variables (features), ${\boldsymbol {\beta }}$ is a k × 1 column vector of parameters to be estimated, and $\varepsilon $ is the residual error.
The ordinary least squares (OLS) estimator is
${\widehat {\boldsymbol {\beta }}}_{\mathrm {OLS} }=(\mathbf {X} ^{\top }\mathbf {X} )^{-1}\mathbf {X} ^{\top }\mathbf {y} .\,$
where $\mathbf {y} $ is a vector of observations $y_{i}$, and $\mathbf {X} $ denotes the matrix of stacked $\mathbf {x} _{i}$ values observed in the data.
If the sample errors have equal variance $\sigma ^{2}$ and are uncorrelated, then the least-squares estimate of ${\boldsymbol {\beta }}$ is BLUE (best linear unbiased estimator), and its variance is estimated with
${\hat {\mathbb {V} }}\left[{\widehat {\boldsymbol {\beta }}}_{\mathrm {OLS} }\right]=s^{2}(\mathbf {X} ^{\top }\mathbf {X} )^{-1},\quad s^{2}={\frac {\sum _{i}{\widehat {\varepsilon }}_{i}^{2}}{n-k}}$
where ${\widehat {\varepsilon }}_{i}=y_{i}-\mathbf {x} _{i}^{\top }{\widehat {\boldsymbol {\beta }}}_{\mathrm {OLS} }$ are the regression residuals.
When the error terms do not have constant variance (i.e., the assumption of $\mathbb {E} [\mathbf {u} \mathbf {u} ^{\top }]=\sigma ^{2}\mathbf {I} _{n}$ is untrue), the OLS estimator loses its desirable properties. The formula for variance now cannot be simplified:
$\mathbb {V} \left[{\widehat {\boldsymbol {\beta }}}_{\mathrm {OLS} }\right]=\mathbb {V} {\big [}(\mathbf {X} ^{\top }\mathbf {X} )^{-1}\mathbf {X} ^{\top }\mathbf {y} {\big ]}=(\mathbf {X} ^{\top }\mathbf {X} )^{-1}\mathbf {X} ^{\top }\mathbf {\Sigma } \mathbf {X} (\mathbf {X} ^{\top }\mathbf {X} )^{-1}$
where $\mathbf {\Sigma } =\mathbb {V} [\mathbf {u} ].$
While the OLS point estimator remains unbiased, it is not "best" in the sense of having minimum mean square error, and the OLS variance estimator ${\hat {\mathbb {V} }}\left[{\widehat {\boldsymbol {\beta }}}_{\mathrm {OLS} }\right]$ does not provide a consistent estimate of the variance of the OLS estimates.
For any non-linear model (for instance logit and probit models), however, heteroskedasticity has more severe consequences: the maximum likelihood estimates of the parameters will be biased (in an unknown direction), as well as inconsistent (unless the likelihood function is modified to correctly take into account the precise form of heteroskedasticity).[8][9] As pointed out by Greene, “simply computing a robust covariance matrix for an otherwise inconsistent estimator does not give it redemption.”[10]
Solution
If the regression errors $\varepsilon _{i}$ are independent, but have distinct variances $\sigma _{i}^{2}$, then $\mathbf {\Sigma } =\operatorname {diag} (\sigma _{1}^{2},\ldots ,\sigma _{n}^{2})$ which can be estimated with ${\widehat {\sigma }}_{i}^{2}={\widehat {\varepsilon }}_{i}^{2}$. This provides White's (1980) estimator, often referred to as HCE (heteroskedasticity-consistent estimator):
${\begin{aligned}{\hat {\mathbb {V} }}_{\text{HCE}}{\big [}{\widehat {\boldsymbol {\beta }}}_{\text{OLS}}{\big ]}&={\frac {1}{n}}{\bigg (}{\frac {1}{n}}\sum _{i}\mathbf {x} _{i}\mathbf {x} _{i}^{\top }{\bigg )}^{-1}{\bigg (}{\frac {1}{n}}\sum _{i}\mathbf {x} _{i}\mathbf {x} _{i}^{\top }{\widehat {\varepsilon }}_{i}^{2}{\bigg )}{\bigg (}{\frac {1}{n}}\sum _{i}\mathbf {x} _{i}\mathbf {x} _{i}^{\top }{\bigg )}^{-1}\\&=(\mathbf {X} ^{\top }\mathbf {X} )^{-1}(\mathbf {X} ^{\top }\operatorname {diag} ({\widehat {\varepsilon }}_{1}^{2},\ldots ,{\widehat {\varepsilon }}_{n}^{2})\mathbf {X} )(\mathbf {X} ^{\top }\mathbf {X} )^{-1},\end{aligned}}$
where as above $\mathbf {X} $ denotes the matrix of stacked $\mathbf {x} _{i}^{\top }$ values from the data. The estimator can be derived in terms of the generalized method of moments (GMM).
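As a rough illustration (not part of the original article), the following NumPy sketch computes both the classical OLS covariance estimate and White's HC0 estimator for a design matrix X and response y; all names are illustrative.

```python
import numpy as np

def ols_covariances(X, y):
    """Classical and HC0 (White) covariance estimates for the OLS coefficients."""
    n, k = X.shape
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ X.T @ y
    resid = y - X @ beta

    # Classical estimator: s^2 (X'X)^{-1}
    s2 = resid @ resid / (n - k)
    cov_classical = s2 * XtX_inv

    # White's HC0 estimator: (X'X)^{-1} X' diag(e_i^2) X (X'X)^{-1}
    meat = X.T @ (X * resid[:, None] ** 2)
    cov_hc0 = XtX_inv @ meat @ XtX_inv
    return beta, cov_classical, cov_hc0
```

Standard errors are the square roots of the diagonal entries of either covariance matrix.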
Also often discussed in the literature (including White's paper) is the covariance matrix ${\widehat {\mathbf {\Omega } }}_{n}$ of the ${\sqrt {n}}$-consistent limiting distribution:
${\sqrt {n}}({\widehat {\boldsymbol {\beta }}}_{n}-{\boldsymbol {\beta }})\,\xrightarrow {d} \,{\mathcal {N}}(\mathbf {0} ,\mathbf {\Omega } ),$
where
$\mathbf {\Omega } =\mathbb {E} [\mathbf {X} \mathbf {X} ^{\top }]^{-1}\mathbb {V} [\mathbf {X} {\boldsymbol {\varepsilon }}]\operatorname {\mathbb {E} } [\mathbf {X} \mathbf {X} ^{\top }]^{-1},$
and
${\begin{aligned}{\widehat {\mathbf {\Omega } }}_{n}&={\bigg (}{\frac {1}{n}}\sum _{i}\mathbf {x} _{i}\mathbf {x} _{i}^{\top }{\bigg )}^{-1}{\bigg (}{\frac {1}{n}}\sum _{i}\mathbf {x} _{i}\mathbf {x} _{i}^{\top }{\widehat {\varepsilon }}_{i}^{2}{\bigg )}{\bigg (}{\frac {1}{n}}\sum _{i}\mathbf {x} _{i}\mathbf {x} _{i}^{\top }{\bigg )}^{-1}\\&=n(\mathbf {X} ^{\top }\mathbf {X} )^{-1}(\mathbf {X} ^{\top }\operatorname {diag} ({\widehat {\varepsilon }}_{1}^{2},\ldots ,{\widehat {\varepsilon }}_{n}^{2})\mathbf {X} )(\mathbf {X} ^{\top }\mathbf {X} )^{-1}\end{aligned}}$
Thus,
${\widehat {\mathbf {\Omega } }}_{n}=n\cdot {\hat {\mathbb {V} }}_{\text{HCE}}[{\widehat {\boldsymbol {\beta }}}_{\text{OLS}}]$
and
${\widehat {\mathbb {V} }}[\mathbf {X} {\boldsymbol {\varepsilon }}]={\frac {1}{n}}\sum _{i}\mathbf {x} _{i}\mathbf {x} _{i}^{\top }{\widehat {\varepsilon }}_{i}^{2}={\frac {1}{n}}\mathbf {X} ^{\top }\operatorname {diag} ({\widehat {\varepsilon }}_{1}^{2},\ldots ,{\widehat {\varepsilon }}_{n}^{2})\mathbf {X} .$
Precisely which covariance matrix is of concern is a matter of context.
Alternative estimators have been proposed in MacKinnon & White (1985) that correct for unequal variances of regression residuals due to different leverage.[11] Unlike the asymptotic White's estimator, their estimators are unbiased when the data are homoscedastic.
Of the four widely available different options, often denoted as HC0-HC3, the HC3 specification appears to work best, with tests relying on the HC3 estimator featuring better power and closer proximity to the targeted size, especially in small samples. The larger the sample, the smaller the difference between the different estimators.[12]
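For illustration only, the HC3 variant mentioned above can be obtained from the same ingredients by rescaling each squared residual by its leverage; a minimal sketch, assuming X and the OLS residuals are already available, is:

```python
import numpy as np

def hc3_covariance(X, resid):
    """MacKinnon-White HC3 estimator: squared residuals rescaled by (1 - h_ii)^2."""
    XtX_inv = np.linalg.inv(X.T @ X)
    h = np.einsum("ij,jk,ik->i", X, XtX_inv, X)   # leverages h_ii of the hat matrix
    scaled = (resid / (1.0 - h)) ** 2
    meat = X.T @ (X * scaled[:, None])
    return XtX_inv @ meat @ XtX_inv
```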
An alternative to explicitly modelling the heteroskedasticity is using a resampling method such as the Wild Bootstrap. Given that the studentized Bootstrap, which standardizes the resampled statistic by its standard error, yields an asymptotic refinement,[13] heteroskedasticity-robust standard errors remain nevertheless useful.
Instead of accounting for the heteroskedastic errors, most linear models can be transformed to feature homoskedastic error terms (unless the error term is heteroskedastic by construction, e.g. in a Linear probability model). One way to do this is using Weighted least squares, which also features improved efficiency properties.
See also
• Delta method
• Generalized least squares
• Generalized estimating equations
• Weighted least squares, an alternative formulation
• White test — a test for whether heteroskedasticity is present.
• Newey–West estimator
• Quasi-maximum likelihood estimate
Software
• EViews: EViews version 8 offers three different methods for robust least squares: M-estimation (Huber, 1973), S-estimation (Rousseeuw and Yohai, 1984), and MM-estimation (Yohai 1987).[14]
• Julia: the CovarianceMatrices package offers several methods for heteroskedastic robust variance covariance matrices.[15]
• MATLAB: See the hac function in the Econometrics toolbox.[16]
• Python: The Statsmodel package offers various robust standard error estimates, see statsmodels.regression.linear_model.RegressionResults for further descriptions
• R: the vcovHC() command from the sandwich package.[17][18]
• RATS: robusterrors option is available in many of the regression and optimization commands (linreg, nlls, etc.).
• Stata: robust option applicable in many pseudo-likelihood based procedures.[19]
• Gretl: the option --robust to several estimation commands (such as ols) in the context of a cross-sectional dataset produces robust standard errors.[20]
References
1. Kleiber, C.; Zeileis, A. (2006). "Applied Econometrics with R" (PDF). UseR-2006 conference. Archived from the original (PDF) on April 22, 2007.
2. Eicker, Friedhelm (1967). "Limit Theorems for Regression with Unequal and Dependent Errors". Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and Probability. Vol. 5. pp. 59–82. MR 0214223. Zbl 0217.51201.
3. Huber, Peter J. (1967). "The behavior of maximum likelihood estimates under nonstandard conditions". Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and Probability. Vol. 5. pp. 221–233. MR 0216620. Zbl 0212.21504.
4. White, Halbert (1980). "A Heteroskedasticity-Consistent Covariance Matrix Estimator and a Direct Test for Heteroskedasticity". Econometrica. 48 (4): 817–838. CiteSeerX 10.1.1.11.7646. doi:10.2307/1912934. JSTOR 1912934. MR 0575027.
5. King, Gary; Roberts, Margaret E. (2015). "How Robust Standard Errors Expose Methodological Problems They Do Not Fix, and What to Do About It". Political Analysis. 23 (2): 159–179. doi:10.1093/pan/mpu015. ISSN 1047-1987.
6. Eicker, F. (1963). "Asymptotic Normality and Consistency of the Least Squares Estimators for Families of Linear Regressions". The Annals of Mathematical Statistics. 34 (2): 447–456. doi:10.1214/aoms/1177704156.
7. Eicker, Friedhelm (January 1967). "Limit theorems for regressions with unequal and dependent errors". Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and Probability, Volume 1: Statistics. 5 (1): 59–83.
8. Giles, Dave (May 8, 2013). "Robust Standard Errors for Nonlinear Models". Econometrics Beat.
9. Guggisberg, Michael (2019). "Misspecified Discrete Choice Models and Huber-White Standard Errors". Journal of Econometric Methods. 8 (1). doi:10.1515/jem-2016-0002.
10. Greene, William H. (2012). Econometric Analysis (Seventh ed.). Boston: Pearson Education. pp. 692–693. ISBN 978-0-273-75356-8.
11. MacKinnon, James G.; White, Halbert (1985). "Some Heteroskedastic-Consistent Covariance Matrix Estimators with Improved Finite Sample Properties". Journal of Econometrics. 29 (3): 305–325. doi:10.1016/0304-4076(85)90158-7. hdl:10419/189084.
12. Long, J. Scott; Ervin, Laurie H. (2000). "Using Heteroscedasticity Consistent Standard Errors in the Linear Regression Model". The American Statistician. 54 (3): 217–224. doi:10.2307/2685594. ISSN 0003-1305.
13. Davison, Anthony C. (2010). Bootstrap Methods and their Application. Cambridge Univ. Press. ISBN 978-0-521-57391-7. OCLC 740960962.
14. "EViews 8 Robust Regression".
15. CovarianceMatrices: Robust Covariance Matrix Estimators
16. "Heteroskedasticity and autocorrelation consistent covariance estimators". Econometrics Toolbox.
17. sandwich: Robust Covariance Matrix Estimators
18. Kleiber, Christian; Zeileis, Achim (2008). Applied Econometrics with R. New York: Springer. pp. 106–110. ISBN 978-0-387-77316-2.
19. See online help for _robust option and regress command.
20. "Robust covariance matrix estimation" (PDF). Gretl User's Guide, chapter 19.
Further reading
• Freedman, David A. (2006). "On The So-Called 'Huber Sandwich Estimator' and 'Robust Standard Errors'". The American Statistician. 60 (4): 299–302. doi:10.1198/000313006X152207. S2CID 6222876.
• Hardin, James W. (2003). "The Sandwich Estimate of Variance". In Fomby, Thomas B.; Hill, R. Carter (eds.). Maximum Likelihood Estimation of Misspecified Models: Twenty Years Later. Amsterdam: Elsevier. pp. 45–74. ISBN 0-7623-1075-8.
• Hayes, Andrew F.; Cai, Li (2007). "Using heteroskedasticity-consistent standard error estimators in OLS regression: An introduction and software implementation". Behavior Research Methods. 39 (4): 709–722. doi:10.3758/BF03192961. PMID 18183883.
• King, Gary; Roberts, Margaret E. (2015). "How Robust Standard Errors Expose Methodological Problems They Do Not Fix, and What to Do About It". Political Analysis. 23 (2): 159–179. doi:10.1093/pan/mpu015.
• Wooldridge, Jeffrey M. (2009). "Heteroskedasticity-Robust Inference after OLS Estimation". Introductory Econometrics : A Modern Approach (Fourth ed.). Mason: South-Western. pp. 265–271. ISBN 978-0-324-66054-8.
• Buja, Andreas, et al. "Models as approximations - a conspiracy of random regressors and model deviations against classical inference in regression." Statistical Science (2015): 1.
Volume 19 Supplement 6
Proceedings of the 15th Annual Research in Computational Molecular Biology (RECOMB) Comparative Genomics Satellite Workshop: bioinformatics
Research | Open | Published: 08 May 2018
Computing the family-free DCJ similarity
Diego P. Rubert, Edna A. Hoshino, Marília D. V. Braga, Jens Stoye and Fábio V. Martinez
BMC Bioinformatics volume 19, Article number: 152 (2018)
The genomic similarity is a large-scale measure for comparing two given genomes. In this work we study the (NP-hard) problem of computing the genomic similarity under the DCJ model in a setting that does not assume that the genes of the compared genomes are grouped into gene families. This problem is called family-free DCJ similarity.
We propose an exact ILP algorithm to solve the family-free DCJ similarity problem, then we show its APX-hardness and present four combinatorial heuristics with computational experiments comparing their results to the ILP.
We show that the family-free DCJ similarity can be computed in reasonable time, although for larger genomes it is necessary to resort to heuristics. This provides a basis for further studies on the applicability and model refinement of family-free whole genome similarity measures.
A central question in comparative genomics is the elucidation of similarities and differences between genomes. Local and global measures can be employed. A popular set of global measures is based on the number of genome rearrangements necessary to transform one genome into another one [1]. Genome rearrangements are large scale mutations, changing the number of chromosomes and/or the positions and orientations of DNA segments. Examples of such rearrangements are inversions, translocations, fusions, and fissions.
As a first step before such a comparison can be performed, some preprocessing is required. The most common method, adopted for about 20 years [1, 2], is to base the analysis on the order of conserved syntenic DNA segments across different genomes and group homologous segments into families. This setting is said to be family-based. Without duplicate segments, i.e., with the additional restriction that at most one representative of each family occurs in any genome, several polynomial time algorithms have been proposed to compute genomic distances and similarities [3–7]. However, when duplicates are allowed, problems become more intricate and many presented approaches are NP-hard [2, 8–13].
Although family information can be obtained by accessing public databases or by direct computing, data can be incorrect, and inaccurate families could be providing support to erroneous assumptions of homology between segments [14]. Thus, it is not always possible to classify each segment unambiguously into a single family, and an alternative to the family-based setting was proposed recently [15]. It consists of studying genome rearrangements without prior family assignment, by directly accessing the pairwise similarities between DNA segments of the compared genomes. This approach is said to be family-free (FF).
The double cut and join (DCJ) operation, which consists of cutting a genome in two distinct positions and joining the four resultant open ends in a different way, subsumes most large-scale rearrangements that modify genomes [5]. In this work we are interested in the problem of computing the overall similarity of two given genomes in a family-free setting under the DCJ model. This problem is called FFDCJ similarity. In some contexts a similarity may be more powerful than a distance measure: the parsimony assumption underlying distances is known to hold only for closely related genomes [16], while a well-designed similarity measure may allow more flexibility. As shown in [17], computing the FFDCJ similarity is NP-hard, while the FFDCJ distance was already proven to be APX-hard. In the remainder of this paper, after preliminaries and a formal definition of the FFDCJ similarity problem, we first present an exact ILP algorithm to solve it. We then show the APX-hardness of the FFDCJ similarity problem and present four combinatorial heuristics, with computational experiments comparing their results to the ILP for datasets simulated by a framework for genome evolution.
A preliminary version of this paper appeared in the Proceedings of the 15th RECOMB Satellite Workshop on Comparative Genomics (RECOMB-CG 2017) [18].
Each segment (often called gene) g of a genome is an oriented DNA fragment and its two distinct extremities are called tail and head, denoted by gt and gh, respectively. A genome is composed of a set of chromosomes, each of which can be circular or linear and is a sequence of genes. Each one of the two extremities of a linear chromosome is called a telomere, represented by the symbol ∘. An adjacency in a chromosome is then either the extremity of a gene that is adjacent to a telomere, or a pair of consecutive gene extremities. As an example, observe that the adjacencies 5h, 5t2t, 2h4t, 4h3t, 3h6t, 6h1h and 1t can define a linear chromosome. Another representation of the same linear chromosome, flanked by parentheses for the sake of clarity, would be (∘ −5 2 4 3 6 −1 ∘), in which the genes preceded by the minus sign (−) have reverse orientation.
A double cut and join or DCJ operation applied to a genome A is the operation that cuts two adjacencies of A and joins the separated extremities in a different way, creating two new adjacencies. For example, a DCJ acting on two adjacencies pq and rs would create either the adjacencies pr and qs, or the adjacencies ps and qr (this could correspond to an inversion, a reciprocal translocation between two linear chromosomes, a fusion of two circular chromosomes, or an excision of a circular chromosome). In the same way, a DCJ acting on two adjacencies pq and r would create either pr and q, or p and qr (in this case, the operation could correspond to an inversion, a translocation, or a fusion of a circular and a linear chromosome). For the cases described so far we can notice that for each pair of cuts there are two possibilities of joining. There are two special cases of a DCJ operation, in which there is only one possibility of joining. The first is a DCJ acting on two adjacencies p and q, that would create only one new adjacency pq (that could represent a circularization of one or a fusion of two linear chromosomes). Conversely, a DCJ can act on only one adjacency pq and create the two adjacencies p and q (representing a linearization of a circular or a fission of a linear chromosome).
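As a toy illustration (not taken from the article), the sketch below applies a DCJ to two two-element adjacencies represented as frozensets of extremity labels; the special cases involving telomeric adjacencies described above are not handled.

```python
def dcj(adjacencies, pq, rs, new1, new2):
    """Cut adjacencies pq and rs and rejoin their open ends as new1 and new2."""
    assert pq in adjacencies and rs in adjacencies
    assert new1 | new2 == pq | rs        # the four open extremities are preserved
    return (adjacencies - {pq, rs}) | {new1, new2}

# On the chromosome (o -5 2 4 3 6 -1 o), cutting 2h4t and 3h6t and rejoining
# as 2h3h and 4t6t inverts the segment "4 3", yielding (o -5 2 -3 -4 6 -1 o).
adjs = {frozenset({"2h", "4t"}), frozenset({"3h", "6t"})}
adjs = dcj(adjs,
           frozenset({"2h", "4t"}), frozenset({"3h", "6t"}),
           frozenset({"2h", "3h"}), frozenset({"4t", "6t"}))
```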
In the remainder of this section we extend the notation introduced in [17]. In general we consider the comparison of two distinct genomes, that will be denoted by A and B. Respectively, we denote by $\mathcal {A}$ the set of genes in genome A, and by $\mathcal {B}$ the set of genes in genome B.
Adjacency graph and family-based DCJ similarity
In most versions of the family-based setting the two genomes A and B have the same content, that is, $\mathcal {A} = \mathcal {B}$. When in addition there are no duplicates, that is, when there is exactly one representative of each family in each genome, we can easily build the adjacency graph of genomes A and B, denoted by AG(A,B) [6]. It is a bipartite multigraph such that each partition corresponds to the set of adjacencies of one of the two input genomes, and an edge connects the same extremities of genes in both genomes. In other words, there is a one-to-one correspondence between the set of edges in AG(A,B) and the set of gene extremities. Since the graph is bipartite and vertices have degree one or two, the adjacency graph is a collection of paths and even cycles. An example of an adjacency graph is presented in Fig. 1.
The adjacency graph for the genomes $A =\left \{\left (\circ \;{-5}\;2\;4\;3\;6\;{-1}\;\circ \right)\right \}$ and $B =\left \{\left (\circ \;1\;2\;4\;{-3}\;6\;5\;\circ \right)\right \}$
It is well known that a DCJ operation that modifies AG(A,B) by increasing either the number of even cycles by one or the number of odd paths by two decreases the DCJ distance between genomes A and B [6]. This type of DCJ operation is said to be optimal. Conversely, if we are interested in a DCJ similarity measure between A and B, rather than a distance measure, then it should be increased by such an optimal DCJ operation. This suggests that a formula for a DCJ similarity between two genomes should correlate to the number of connected components (in the following just components) of the corresponding adjacency graph.
When the genomes A and B are identical, their corresponding adjacency graph is a collection of c 2-cycles and b 1-paths [6], so that $c + \frac{b}{2} = |\mathcal{A}|=|\mathcal{B}|$. This should be the upper bound of our DCJ similarity measure, and the contribution of each component in the formula should be upper bounded by 1.
We know that an optimal operation can always be applied to adjacencies that belong to one of the two genomes and to one single component of AG(A,B), until the graph becomes a collection of 2-cycles and 1-paths. In other words, each component of the graph can be sorted, that is, converted into a collection of 2-cycles and 1-paths independently of the other components. Furthermore, it is known that each of the following components – an even cycle with 2d+2 edges, or an odd path with 2d+1 edges, or an even path with 2d edges – can be sorted with exactly d optimal DCJ operations. Therefore, for the same d, components with more edges should actually have higher contributions in the DCJ similarity formula.
With all these considerations, the contribution of each component C in the formula is then defined to be its normalized length $\widehat{\ell}(C)$:
$$\begin{array}{*{20}l} \widehat{\ell}(C) =\left\{ \begin{array}{ll} \frac{|C|}{|C|} = 1\:, & \text{if}\; C\; \text{is a cycle}, \\ \frac{|C|}{|C|+1}\:, & \text{if}\; C\; \text{is an odd path}, \\ \frac{|C|}{|C|+2}\:, & \text{if}\; C\; \text{is an even path}. \end{array}\right. \end{array} $$
Let $\mathcal{C}$ be the set of all components in AG(A,B). The formula for the family-based DCJ similarity is the sum of their normalized lengths:
$$ \mathrm{s}_{\textup{\textsc{dcj}}}(A,B) = \sum_{C \in \mathcal{C}}\widehat{\ell}(C). \tag{1} $$
Observe that s DCJ (A,B) is a positive value, indeed upper bounded by $|\protect \mathcal {A}|$ (or, equivalently, by $|\protect \mathcal {B}|$). In Fig. 1 the DCJ similarity is $\text {s}_{\text {\textsc {dcj}}}(A, B) = 2\cdot \frac {1}{2} + 3\cdot 1=4$. The formula of Eq. 1 is the family-based version of the family-free DCJ similarity defined in [17], as we will see in the following subsections.
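A direct transcription of Eq. 1 into Python could look as follows; representing each component only by its type and number of edges is an assumption made for this sketch, not the authors' implementation.

```python
def normalized_length(kind, edges):
    """Normalized length of a component of the adjacency graph (Eq. 1)."""
    if kind == "cycle":
        return 1.0
    if kind == "odd_path":
        return edges / (edges + 1)
    return edges / (edges + 2)                 # even path

def s_dcj(components):
    return sum(normalized_length(kind, edges) for kind, edges in components)

# Fig. 1 has three cycles (each contributing 1) and two single-edge odd paths:
print(s_dcj([("cycle", 2), ("cycle", 4), ("cycle", 4),
             ("odd_path", 1), ("odd_path", 1)]))        # 4.0
```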
Gene similarity graph
In the family-free setting, each gene in each genome is represented by a unique (signed) symbol, thus $\mathcal{A} \cap \mathcal{B} = \emptyset$ and the cardinalities $|\mathcal{A}|$ and $|\mathcal{B}|$ may be distinct. Let a be a gene in A and b be a gene in B, then their normalized gene similarity is given by some value σ(a,b) such that 0≤σ(a,b)≤1.
We can represent the gene similarities between the genes of genome A and the genes of genome B with respect to σ in the so-called gene similarity graph [15], denoted by GS σ (A,B). This is a weighted bipartite graph whose partitions $\mathcal{A}$ and $\mathcal{B}$ are the sets of (signed) genes in genomes A and B, respectively. Furthermore, for each pair of genes (a,b) such that $a \in \mathcal{A}$ and $b \in \mathcal{B}$, if σ(a,b)>0 then there is an edge e connecting a and b in GS σ (A,B) whose weight is σ(e):=σ(a,b).
Representation of a gene similarity graph GS σ (A,B) for two unichromosomal linear genomes $A =\left \{\left (\circ \;1\;2\;3\;4\;5\;6\;\circ \right)\right \}$ and $B =\left \{\left (\circ \;7\;8\;{-9}\;{-10}\;11\;{-12}\;{-13}\;14\;\circ \right)\right \}$
Weighted adjacency graph
The weighted adjacency graph AG σ (A,B) of two genomes A and B has a vertex for each adjacency in A and a vertex for each adjacency in B. For a gene a in A and a gene b in B with gene similarity σ(a,b)>0 there is one edge eh connecting the vertices containing the two heads ah and bh and one edge et connecting the vertices containing the two tails at and bt. The weight of each of these edges is $\sigma \left (e^{h}\right) = \sigma \left (e^{t}\right) = \sigma (a,b)$. Differently from the simple adjacency graph, the weighted adjacency graph cannot be easily decomposed into cycles and paths, since its vertices can have degree greater than 2. As an example, the weighted adjacency graph corresponding to the gene similarity graph of Fig. 2 is given in Fig. 3.
The weighted adjacency graph AG σ (A,B) for two unichromosomal linear genomes $A =\left \{\left (\circ \;1\;2\;3\;4\;5\;6\;\circ \right)\right \}$ and $B =\left \{\left (\circ \;7\;8\;{-9}\;{-10}\;11\;{-12}\;{-13}\;14\;\circ \right)\right \}$
We denote by w(G) the weight of a graph or subgraph G, that is given by the sum of the weights of all its edges, that is, $w(G) = \sum _{e \in G} \sigma (e)$. Observe that, for each edge e∈GS σ (A,B), we have two edges of weight σ(e) in AG σ (A,B), thus, the total weight of the weighted adjacency graph is $w\left (AG_{\sigma }(A,B)\right) = 2\,w\left (GS_{\sigma }(A,B)\right)$.
Reduced genomes
Let A and B be two genomes and let GS σ (A,B) be their gene similarity graph. Now let $M=\{e_{1},e_{2},\ldots,e_{n}\}$ be a matching in GS σ (A,B) and denote by $w(M) = \sum_{e_{i} \in M} \sigma(e_{i})$ the weight of M, that is, the sum of its edge weights. Since the endpoints of each edge $e_{i}=(a,b)$ in M are not saturated by any other edge of M, we can unambiguously define the function $\ell^{M}(a)=\ell^{M}(b)=i$ to relabel each vertex in A and B [17]. The reduced genome $A^{M}$ is obtained by deleting from A all genes not saturated by M, and renaming each saturated gene a to $\ell^{M}(a)$, preserving its orientation (sign). Similarly, the reduced genome $B^{M}$ is obtained by deleting from B all genes that are not saturated by M, and renaming each saturated gene b to $\ell^{M}(b)$, preserving its orientation. Observe that the set of genes in $A^{M}$ and in $B^{M}$ is $\mathcal{G}(M) = \left\{ \ell^{M}(g) : g \text{ is saturated by the matching } M \right\} = \{1,2,\ldots,n\}$.
Weighted adjacency graph of reduced genomes
Let $A^{M}$ and $B^{M}$ be the reduced genomes for a given matching M of GS σ (A,B). The weighted adjacency graph $AG_{\sigma}\left(A^{M},B^{M}\right)$ can be obtained from AG σ (A,B) by deleting all edges that are not elements of M and relabeling the adjacencies according to $\ell^{M}$. Vertices that have no connections are then also deleted from the graph. Another way to obtain the same graph is building the adjacency graph of $A^{M}$ and $B^{M}$ and adding weights to the edges as follows. For each gene i in $\mathcal{G}(M)$, both edges $i^{t}i^{t}$ and $i^{h}i^{h}$ inherit the weight of edge $e_{i}$ in M, that is, $\sigma\left(i^{t}i^{t}\right) = \sigma\left(i^{h}i^{h}\right) = \sigma(e_{i})$. Consequently, the graph $AG_{\sigma}\left(A^{M},B^{M}\right)$ is also a collection of paths and even cycles and differs from $AG\left(A^{M}, B^{M}\right)$ only by the edge weights.
For each edge e∈M, we have two edges of weight σ(e) in $AG_{\sigma }\left (A^{M},B^{M}\right)$, therefore $w\left (AG_{\sigma }\left (A^{M}, B^{M}\right)\right) = 2\,w(M)$. Examples of weighted adjacency graphs of reduced genomes are shown in Fig. 4.
Considering, as in Fig. 2, the genomes $A =\left \{\left (\circ \;1\;2\;3\;4\;5\;6\;\circ \right)\right \}$ and $B =\left \{\left (\circ \;7\;8\;{-9}\;{-10}\;11\;{-12}\;{-13}\;14\;\circ \right)\right \}$, let M1 (dashed edges) and M2 (dotted edges) be two distinct maximal matchings in GS σ (A,B), shown in the upper part. The two resulting weighted adjacency graphs $AG_{\sigma }\left (A^{M_{1}},B^{M_{1}}\right)$, that has two cycles and two even paths, and $AG_{\sigma }\left (A^{M_{2}},B^{M_{2}}\right)$, that has two odd paths, are shown in the lower part
The family-free DCJ similarity
For a given matching M in GS σ (A,B), a first formula for the weighted DCJ (wDCJ) similarity s σ of the reduced genomes AM and BM was proposed in [15] only considering the cycles of $AG_{\sigma }\left (A^{M},B^{M}\right)$. After that, this definition was modified and extended in [17], in order to consider all components of the weighted adjacency graph.
First, let the normalized weight $\widehat{w}(C)$ of a component C of $AG_{\sigma}\left(A^{M},B^{M}\right)$ be:
$$\begin{array}{*{20}l} \widehat{w}(C) =\left\{ \begin{array}{ll} \frac{w(C)}{|C|}\:, & \text{if}\; C\; \text{is a cycle}\:, \\ \frac{w(C)}{|C|+1}\:, & \text{if}\; C\; \text{is an odd path}\:, \\ \frac{w(C)}{|C|+2}\:, & \text{if}\; C\; \text{is an even path}\:. \end{array}\right. \end{array} $$
Let $\mathcal{C}$ be the set of all components in $AG_{\sigma}\left(A^{M},B^{M}\right)$. Then the wDCJ similarity $s_{\sigma}$ is given by the following formula [17]:
$$ s_{\sigma}\left(A^{M},B^{M}\right) = \sum_{C \in \mathcal{C}}\widehat{w}(C)\:. \tag{2} $$
Observe that, when the weights of all edges in M are equal to 1, this formula is equivalent to the one in Eq. 1.
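The weighted counterpart follows Eq. 2 in the same way; each component is now described by its type, number of edges and total weight (again an illustrative representation rather than the authors' code).

```python
def normalized_weight(kind, edges, weight):
    """Normalized weight of a component of AG_sigma(A^M, B^M) (Eq. 2)."""
    if kind == "cycle":
        return weight / edges
    if kind == "odd_path":
        return weight / (edges + 1)
    return weight / (edges + 2)                # even path

def s_sigma(components):
    return sum(normalized_weight(kind, edges, weight)
               for kind, edges, weight in components)
```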
The goal now is to compute the family-free DCJ similarity, i.e., to find a matching in GS σ (A,B) that maximizes s σ . However, although $s_{\sigma }\left (A^{M},B^{M}\right)$ is a positive value upper bounded by |M|, the behaviour of the wDCJ similarity does not correlate with the size of the matching, since smaller matchings, that possibly discard gene assignments, can lead to higher wDCJ similarities [17]. For this reason, the wDCJ similarity function is restricted to maximal matchings only, ensuring that no pair of genes with positive gene similarity score is simply discarded, even though it might decrease the overall wDCJ similarity. We then have the following optimization problem:
Problem FFDCJ-SIMILARITY(A,B): Given genomes A and B and their gene similarities σ, calculate their family-free DCJ similarity
$$ \textup{s}_{\textup{\textsc{ffdcj}}}(A, B) = \max_{M \in \mathbb{M}}\left\{ s_{\sigma}\left(A^{M},B^{M}\right) \right\}, \tag{3} $$
where $\mathbb {M}$ is the set of all maximal matchings in GS σ (A,B).
Problem FFDCJ-SIMILARITY is NP-hard [17]. Moreover, one can directly correlate the problem to the adjacency similarity problem, where the goal is to maximize the number of preserved adjacencies between two given genomes [11, 19]. However, since there the objective is to maximize the number of cycles of length 2, even an approximation for the adjacency similarity problem is not a good algorithm for the FFDCJ-SIMILARITY problem, where cycles of higher lengths are possible in the solution [20].
Capping telomeres
A very useful preprocessing to AG σ (A,B) is the capping of telomeres, a general technique for simplifying algorithms that handle genomes with linear chromosomes, commonly used in the context of family-based settings [4, 5, 21]. Given two genomes A and B with i and j linear chromosomes, respectively, for each vertex representing only one extremity we add a null extremity τ to it (e.g., 1t of Fig. 4 becomes τ1t). Furthermore, in order to add the same number of null extremities to both genomes, |j−i| null adjacencies ττ (composed of two null extremities) are added to genome A, if i<j, or to genome B, if j<i. Finally, for each null extremity of a vertex in A we add to AG σ (A,B) a null edge with weight 0 to each null extremity of vertices in B. Consequently, after capping of telomeres the graph AG σ (A,B) has no vertex of degree one. Notice that, if before the capping p was a path of weight w connecting telomeres in AG σ (A,B), then after the capping p will be part of a cycle closed by null extremities with normalized weight $\frac {w}{|p|+1}$ if p is an odd path, or of normalized weight $\frac {w}{|p|+2}$ if p is an even path. In any of the two cases, the normalized weight is consistent with the wDCJ similarity formula in Eq. 2.
An exact Algorithm
In order to exactly compute the family-free DCJ similarity between two given genomes, we propose an integer linear program (ILP) formulation that is similar to the one for the family-free DCJ distance given in [17]. It adopts the same notation and also uses an approach to solve the maximum cycle decomposition problem as in [13].
Let A and B be two genomes, let G=GS σ (A,B) be their gene similarity graph, and let X A and X B be the extremity sets (including null extremities) with respect to A and B for the capped adjacency graph AG σ (A,B), respectively. The weight w(e) of an edge e in G is also denoted by w e . For the ILP formulation, an extension H=(V H ,E H ) of the capped weighted adjacency graph AG σ (A,B) is defined such that $V_{H} = X_{A} \cup X_{B}$, and $E_{H} = E_{m} \cup E_{a} \cup E_{s}$ has three types of edges: (i) matching edges that connect two extremities in different extremity sets, one in X A and the other in X B , if they are null extremities or there exists an edge connecting these genes in G; the set of matching edges is denoted by E m ; (ii) adjacency edges that connect two extremities in the same extremity set if they form an adjacency; the set of adjacency edges is denoted by E a ; and (iii) self edges that connect two extremities of the same gene in an extremity set; the set of self edges is denoted by E s . Matching edges have weights defined by the normalized gene similarity σ, all adjacency and self edges have weight 0. Notice that any edge in G corresponds to two matching edges in H.
The description of the ILP follows. For each edge e in H, we create a binary variable x e to indicate whether e will be in the final solution. We require first that each adjacency edge be chosen:
$$x_{e} = 1, \qquad \forall~e \in E_{a}. $$
Now we rename each vertex in H such that V H ={v1,v2,…,v k } with k=|V H |. We require that each of these vertices be adjacent to exactly one matching or self edge:
$$\begin{aligned} \sum_{e = v_{r}v_{t} \in E_{m} \cup E_{s}} x_{e} = 1,& \forall~v_{r} \in X_{A}, \quad \text{and}\\ \sum_{e = v_{r}v_{t} \in E_{m} \cup E_{s}} x_{e} = 1,& \forall~v_{t} \in X_{B}. \end{aligned} $$
Then, we require that the final solution be valid, meaning that if one extremity of a gene in A is assigned to an extremity of a gene in B, then the other extremities of these two genes have to be assigned as well:
$$x_{a^{h}b^{h}} = x_{a^{t}b^{t}}, \qquad \forall~ab \in E_{G}. $$
We also require that the matching be maximal. This can easily be ensured if we guarantee that at least one of the vertices connected by an edge in the gene similarity graph be chosen, which is equivalent to not allowing both of the corresponding self edges in the weighted adjacency graph be chosen:
$$x_{a^{h}a^{t}} + x_{b^{h}b^{t}} \leq 1, \qquad \forall~ab \in E_{G}. $$
To count the number of cycles, we use the same strategy as described in [13]. For each vertex v i we define a variable y i that labels v i such that
$$0 \leq y_{i} \leq i, \qquad 1 \leq i \leq k. $$
We also require that adjacent vertices have the same label, forcing all vertices in the same cycle to have the same label:
$$\begin{array}{*{20}l} y_{i} \leq y_{j} + i \cdot (1 - x_{e}), & \qquad \forall~e = v_{i}v_{j} \in E_{H}, \\ y_{j} \leq y_{i} + j \cdot (1 - x_{e}), &\qquad \forall~e = v_{i}v_{j} \in E_{H}. \end{array} $$
We create a binary variable z i , for each vertex v i , to verify whether y i is equal to its upper bound i:
$$i\cdot z_{i} \leq y_{i}, \qquad 1 \leq i \leq k. $$
Since all variables y i in the same cycle have the same label but a different upper bound, only one of the y i can be equal to its upper bound i. This means that z i is 1 if the cycle with vertex i as representative is used in a solution.
Now, let L={2j:j=1,…,n} be the set of possible cycle lengths in H, where $n := \min (|A|, |B|)$. We create the binary variable x ei to indicate whether e is in i, for each e∈E H and each cycle i. We also create the binary variable $x_{ei}^{\ell }$ to indicate whether e belongs to i and the length of cycle i is ℓ, for each e∈E H , each cycle i, and each ℓ∈L.
We require that if an edge e belongs to a cycle i, then it can be true for only one length ℓ∈L. Thus,
$$ \sum_{\ell \in L} x_{ei}^{\ell} \leq x_{ei}, \qquad \forall~e \in E_{H}\ \text{and}\ 1 \leq i \leq k. \tag{4} $$
We create another binary variable $z_{i}^{\ell }$ to indicate whether cycle i has length ℓ. Then $\ell \cdot z_{i}^{\ell }$ is an upper bound for the total number of edges in cycle i of length ℓ:
$$\sum_{e \in E_{M}} x_{ei}^{\ell} \leq \ell \cdot z_{i}^{\ell}, \qquad \forall~\ell \in L\ \text{and}\ 1 \leq i \leq k. $$
The length of a cycle i is given by $\ell \cdot z_{i}^{\ell }$, for i=1,…,k and ℓ∈L. On the other hand, it is the total amount of matching edges e in cycle i. That is,
$$\sum_{\ell \in L} \ell \cdot z_{i}^{\ell} = \sum_{e \in E_{m}} x_{ei}, \qquad 1 \leq i \leq k. $$
We have to ensure that each cycle i must have just one length:
$$\sum_{\ell \in L} z_{i}^{\ell} = z_{i}, \qquad 1 \leq i \leq k. $$
Now we create the binary variable y ri to indicate whether the vertex v r is in cycle i. Thus, if x ei =1, i.e., if the edge e=v r v t in H is chosen in cycle i, then y ri =1=y ti (and x e =1 as well). Hence,
$$ \begin{aligned} \left. \begin{array}{rcl} x_{ei} & \leq & x_{e}, \\ x_{ei} & \leq & y_{ri}, \\ x_{ei} & \leq & y_{ti}, \\ x_{ei} & \geq & x_{e} + y_{ri} + y_{ti} - 2, \end{array} \right\} \quad \forall~e = v_{r}v_{t} \in E_{H} \text{ and}\ 1 \leq i \leq k. \end{aligned} \tag{5} $$
Since y r is an integer variable, we associate y r to the corresponding binary variable y ri , for any vertex v r belonging to cycle i:
$$y_{r} = \sum_{i = 1}^{r} i \cdot y_{ri}, \qquad \forall~v_{r} \in V_{H}. $$
Furthermore, we must ensure that each vertex v r may belong to at most one cycle:
$$\sum_{i = 1}^{r} y_{ri} \leq 1, \qquad \forall~v_{r} \in V_{H}. $$
Finally, we set the objective function as follows:
$$\text{maximize} \quad \sum_{i = 1}^{k} \sum_{\ell \in L} \sum_{e \in E_{m}} \frac{w_{e}x_{ei}^{\ell}}{\ell}. $$
Note that, with this formulation, we do not have any path as a component. Therefore, the objective function above is exactly the family-free DCJ similarity $\text{s}_{\text{\textsc{ffdcj}}}(A,B)$ as defined in Eqs. (2) and (3).
Notice that the ILP formulation has $O\left(N^{4}\right)$ variables and $O\left(N^{3}\right)$ constraints, where N=|A|+|B|. The number of variables is proportional to the number of variables $x_{ei}^{\ell}$, and the number of constraints is upper bounded by (4) and (5).
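As an illustration of how the first groups of constraints translate into code, the sketch below builds only the matching-related part of the model with PuLP: binary variables for the edges of H, forced adjacency edges, one matching or self edge per vertex, head/tail consistency, and maximality. The cycle-counting variables y, z and x_ei, the objective function, and all data structures used here are assumptions of this sketch, not the authors' implementation.

```python
import pulp

def partial_ffdcj_model(E_m, E_a, E_s, gs_edges, incident):
    """E_m, E_a, E_s: lists of matching, adjacency and self edges of H.
    gs_edges: one tuple per edge ab of GS_sigma, holding its head-head edge,
              tail-tail edge and the self edges of a and b in H.
    incident: for each vertex of H, the list of its matching and self edges."""
    model = pulp.LpProblem("ffdcj_partial", pulp.LpMaximize)
    x = {e: pulp.LpVariable(f"x_{i}", cat="Binary")
         for i, e in enumerate(E_m + E_a + E_s)}

    for e in E_a:                              # every adjacency edge is chosen
        model += x[e] == 1
    for v, edges in incident.items():          # one matching or self edge per vertex
        model += pulp.lpSum(x[e] for e in edges) == 1
    for e_hh, e_tt, self_a, self_b in gs_edges:
        model += x[e_hh] == x[e_tt]            # heads matched iff tails matched
        model += x[self_a] + x[self_b] <= 1    # the induced matching is maximal
    return model, x
```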
APX-hardness and heuristics
In this section we first state that problem FFDCJ-SIMILARITY is APX-hard and provide a lower bound for the approximation ratio.
Theorem 1
FFDCJ-SIMILARITY is APX-hard and cannot be approximated with approximation ratio better than 22/21=1.0476…, unless P = NP.
See Additional file 1. □
We now propose four heuristic algorithms to compute the family-free DCJ similarity of two given genomes: one that is directly derived from a maximum matching of the gene similarity graph GS σ and three greedy-like heuristics that, according to different criteria, select cycles from the weighted adjacency graph AG σ , such that the cycles selected by each heuristic induce a matching in GS σ .
Maximum matching
In the first heuristic, shown in Algorithm 1 (MAXIMUM-MATCHING), we find a maximum weighted bipartite matching M in GS σ by the Hungarian Method, also known as Kuhn-Munkres Algorithm [22–24]. Given the matching M, it is straightforward to obtain the reduced genomes AM and BM and return the similarity value $s_{\sigma }\left (A^{M},B^{M}\right)$.
For the implementation of this heuristic we cast similarity values (floating point edge weights in [0,1]) in GS σ (A,B) to integers by multiplying them by some power of ten, depending on the precision of similarity values. Given real or general simulated instances, and for a power of ten large enough, this operation has little impact on the optimality of the weighted matching M for the original weights in GS σ (A,B) obtained from the Kuhn-Munkres algorithm, i.e., the weight of M for the original weights in GS σ (A,B) is optimal or near-optimal since only the less significant digits are discarded.
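A compact way to realize this heuristic in Python is to feed the similarity matrix to SciPy's assignment solver, which handles rectangular matrices and floating-point weights directly, so the integer casting described above is not needed in this variant (this is a sketch, not the authors' implementation).

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def maximum_matching(sigma):
    """sigma[i, j]: normalized gene similarity between gene i of A and gene j of B,
    with 0 where GS_sigma has no edge.  Returns the matched index pairs."""
    cost = np.asarray(sigma, dtype=float)
    rows, cols = linear_sum_assignment(cost, maximize=True)
    # drop zero-weight pairs, which do not correspond to edges of GS_sigma
    return [(i, j) for i, j in zip(rows, cols) if cost[i, j] > 0.0]
```

The reduced genomes and the value $s_{\sigma}\left(A^{M},B^{M}\right)$ are then computed from the returned pairs as described above.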
Greedy heuristics
Before describing the greedy heuristics, we need to introduce the following concepts. We say that two edges in AG σ (A,B) are consistent if one connects the head and the other connects the tail of the same pair of genes, or if they connect extremities of distinct genes in both genomes. Otherwise they are inconsistent. A set of edges, in particular a cycle, is consistent if it has no pair of inconsistent edges. A set of cycles is consistent if the union of all of their edges is consistent. Observe that a consistent set of cycles in AG σ (A,B) induces a matching in GS σ (A,B).
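In code, this pairwise test depends only on the gene pairs that the two edges connect, since for a fixed pair of genes AG σ (A,B) contains only the head-head and the tail-tail edge; a small helper might look like this (the edge representation is an assumption of the sketch).

```python
def consistent(e1, e2):
    """Each edge is represented by the pair (gene of A, gene of B) it connects."""
    (a1, b1), (a2, b2) = e1, e2
    # consistent: same gene pair (its head-head and tail-tail edges), or
    # entirely distinct genes in both genomes
    return (a1 == a2 and b1 == b2) or (a1 != a2 and b1 != b2)
```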
Each one of the three greedy algorithms selects disjoint and consistent cycles in the capped AG σ (A,B). The consistent cycles are selected from the set of all cycles of AG σ (A,B), that is obtained in Step 4 of each heuristic (see Algorithms 2, 3 and 4 below), using a cycle enumeration algorithm by Hawick and James [25], which is based on Johnson's algorithm [26]. For this reason, the running time of our heuristics is potentially exponential in the number of vertices of AG σ (A,B).
In the three heuristics, after completing the cycle selection by iterating over the set of all cycles of AG σ (A,B), the induced matching M in GS σ (A,B) could still be non-maximal. Whenever this occurs, among the genes that are unsaturated by M, we can identify disposable genes by one of the two following conditions:
Any unsaturated gene in GS σ (A,B) that is connected only to saturated genes is a disposable gene;
For a given set of vertices $S \subseteq \mathcal{A}$ (or $S \subseteq \mathcal{B}$) in GS σ (A,B) such that, for the set of connected genes N(S), we have |S|>|N(S)| (Hall's theorem), any subset of size |S|−|N(S)| of unsaturated genes of S can be set as disposable genes. In our implementation we choose those |S|−|N(S)| unsaturated genes with the smallest labels. Such a set $S \subseteq \mathcal{A}$ can be found as follows. Let v be the set of vertices saturated by M, and let M′ be a maximum cardinality matching in GS σ (A,B)∖v. Consider the sets $\mathcal{A}' = \mathcal{A} \setminus v$ and $\mathcal{B}' = \mathcal{B} \setminus v$. Now let $GS'_{\sigma}(A,B)$ be a directed bipartite graph on the vertex set $\mathcal{A}' \cup \mathcal{B}'$, which includes the edges of M′ oriented from $\mathcal{B}'$ to $\mathcal{A}'$ and the remaining edges of GS σ (A,B)∖v oriented from $\mathcal{A}'$ to $\mathcal{B}'$, and let $U \subseteq \mathcal{A}'$ be the set of vertices of $\mathcal{A}'$ unsaturated by M′. $S \subseteq \mathcal{A}$ is the corresponding set of vertices reachable from U in $GS'_{\sigma}(A,B)$, if any. $S \subseteq \mathcal{B}$ can be found analogously.
If there is no consistent cycle to be selected and the matching M is still non-maximal, new consistent cycles appear in AG σ (A,B) after the deletion of all identified disposable genes (see Fig. 5). In order to delete a disposable gene g, we need to remove from AG σ (A,B) the edges corresponding to extremities gt or gh and "merge" the two vertices that represent these extremities. Every time disposable genes are deleted from AG σ (A,B), a new iteration of the algorithms starts from Step 4 (see again Algorithms 2, 3 and 4). This procedure assures that, in each one of the three algorithms, the final set of selected cycles defines a maximal matching M, such that $AG_{\sigma }\left (A^{M},B^{M}\right)$ is exactly the union of those selected cycles.
Consider genomes $A =\left \{\left (\circ \;1\;2\;3\;\circ \right)\right \}$ and $B =\left \{\left (\circ \;{-4}\;5\;6\;{-7}\;\circ \right)\right \}$ and their gene similarity graph GS σ (A,B). The selection of the dashed cycle in AG σ (A,B) adds to the matching M in GS σ (A,B) the edges connecting gene 1 to gene 4 and gene 2 to gene 5. After this selection, although the matching M is not yet maximal, there are no more consistent cycles in AG σ (A,B). Observe that in GS σ (A,B) gene 6 is unsaturated and its single neighbor - gene 2 - is already saturated. Since gene 6 can no longer be saturated by M, it is a disposable gene and is deleted from AG σ (A,B), resulting in $AG'_{\sigma}(A,B)$, where a new consistent cycle appears. The selection of this new cycle adds to the matching M the edge connecting gene 3 to gene 7. Both AG σ (A,B) and $AG'_{\sigma}(A,B)$ have a simplified representation, in which the edge weights, as well as two of the four null edges of the capping, are omitted. Furthermore, for the sake of clarity, in this simplified representation each edge has a label describing the extremities connected by it
Best density
The best density heuristic is shown in Algorithm 2 (GREEDY-DENSITY). The density of a cycle C is given by $\frac {w(C)}{|C|^{2}}$ (its weight divided by the square of its length). The cycles of AG σ (A,B) are arranged in decreasing order of their densities, and consistent cycles are selected following this order.
Since the number of cycles of any length may be exponential in the size of the input graph, in our implementation we add a heuristic in which initially the search is restricted to cycles of length up to ten. Then, as long as the obtained matching is not maximal, Steps 4 to 7 are repeated, while gradually increasing the allowed maximum cycle length in steps of ten.
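A condensed version of the cycle-selection loop of GREEDY-DENSITY is sketched below; each cycle is given by its weight, its number of edges and the gene pairs induced by its matching edges, consistency with previously selected cycles is tracked through the partial gene assignment, and the disposable-gene phase and the incremental length bound are omitted. This is an illustration, not the authors' implementation, and candidate cycles are assumed to be internally consistent.

```python
def greedy_density(cycles):
    """cycles: list of (weight, n_edges, gene_pairs), where gene_pairs is the set
    of (gene of A, gene of B) pairs induced by the cycle's matching edges."""
    cycles = sorted(cycles, key=lambda c: c[0] / c[1] ** 2, reverse=True)
    assign_a, assign_b, selected = {}, {}, []
    for weight, n_edges, pairs in cycles:
        # the cycle is selectable if it does not contradict assignments made so far
        if all(assign_a.get(a, b) == b and assign_b.get(b, a) == a for a, b in pairs):
            selected.append((weight, n_edges, pairs))
            for a, b in pairs:
                assign_a[a], assign_b[b] = b, a
    return selected
```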
Best length
The best length heuristic is shown in Algorithm 3 (GREEDY-LENGTH). The cycles of AG σ (A,B) are found in increasing order of their lengths, and ties are broken by the decreasing order of their weights. Here we first find and select cycles of length 2, then of length 4, and so on, for each fixed length iterating over the set of all cycles in decreasing order of their weights. Consistent cycles are selected following this procedure.
Best length with weighted maximum independent set
The best length heuristic with WMIS is shown in Algorithm 4 (GREEDY-WMIS) and is a variation of GREEDY-LENGTH. Instead of selecting cycles of greater weights for a fixed length, this algorithm selects the greatest number of cycles for a fixed length by a WMIS algorithm. The heuristic builds a cycle graph where each vertex is a cycle of $AG_{\sigma}(A,B)$, the weight of a vertex is the weight of the cycle it represents, and two vertices are adjacent if the cycles they represent are inconsistent. The heuristic then tries to find an independent set of greatest weight in the cycle graph. Since this graph is not d-claw-free for any fixed d, the WMIS algorithm [27] does not guarantee any fixed ratio.
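For orientation, a plain greedy stand-in for the WMIS step is sketched below; it simply scans vertices of the cycle graph in decreasing order of weight and keeps those not in conflict with previous picks. This simplification is ours and does not reproduce the d/2-approximation of Berman's algorithm [27].

```cpp
#include <algorithm>
#include <numeric>
#include <vector>

// Greedy weighted independent set on the cycle graph: weight[i] is the weight
// of cycle i, conflict[i][j] is true if cycles i and j are inconsistent.
std::vector<int> greedy_wmis(const std::vector<double>& weight,
                             const std::vector<std::vector<bool>>& conflict) {
    const int n = static_cast<int>(weight.size());
    std::vector<int> order(n);
    std::iota(order.begin(), order.end(), 0);
    std::sort(order.begin(), order.end(),
              [&](int a, int b) { return weight[a] > weight[b]; });
    std::vector<bool> blocked(n, false);
    std::vector<int> chosen;
    for (int v : order) {
        if (blocked[v]) continue;
        chosen.push_back(v);
        for (int u = 0; u < n; ++u)
            if (conflict[v][u]) blocked[u] = true;
    }
    return chosen;
}
```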
Experiments for the ILP and our heuristics were conducted on an Intel i7-4770 3.40GHz machine with 16 GB of memory. In order to do so, we produced simulated datasets by the Artificial Life Simulator (ALF) [28] and obtained real genome data from NCBI, using the FFGC tool [29] to obtain similarity scores between genomes. Gurobi Optimizer 7.0 was set to solve ILP instances with default parameters, time limit of 1800 s and 4 threads, and the heuristics were implemented in C++.
Simulated data
We generated datasets with 10 genome samples each, running pairwise comparisons between all genomes in the same dataset. Each dataset has genomes of sizes around 25, 50 or 1000 (the latter used only for running the heuristics), generated based on a sample from the tree of life with 10 leaf species and a PAM distance of 100 from the root to the deepest leaf. A gamma distribution with parameters k=3 and θ=133 was used for the gene length distribution. For amino acid evolution we used the WAG substitution model with default parameters and the preset of Zipfian indels with rate 0.00005. Regarding genome level events, we allowed gene duplications and gene losses with rate 0.002, and reversals and transpositions (which ALF refers to as translocations) with rate 0.0025, with at most 3 genes involved in each event. To test different proportions of genome level events, we also generated simulated datasets with 2- and 5-fold increases of the reversal and transposition rates.
Results are summarized in Table 1. Each dataset is composed of 10 genomes, totaling 45 pairwise comparisons per dataset. Rate r=1 means the default parameter set for genome level events, while r=2 and r=5 mean the 2- and 5-fold increases of the rates, respectively. For the ILP the table shows the average time for instances for which an optimal solution was found, the number of instances for which the optimizer did not find an optimal solution within the given time limit and, for the latter class of instances, the average relative gap between the best solution found and the upper bound found by the solver, calculated by $\left(\frac{\text{upper bound}}{\text{best solution}} - 1\right) \times 100$. For our heuristics, the running time for all instances of sizes 25 and 50 was negligible; therefore, the table shows only the average relative gap between the solution found and the upper bound given by the ILP solver (if any).
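As a purely hypothetical illustration of this gap measure, a best solution of value 100 against an upper bound of 105 gives a relative gap of $\left(\frac{105}{100} - 1\right) \times 100 = 5\%$.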
Table 1 Results of experiments for simulated genomes
Results clearly show that the average relative gap of the heuristics increases proportionally to the rate of reversals and transpositions. This is expected, as higher mutation rates often result in higher normalized weights on longer cycles, so that the association of genes with greater gene similarity scores becomes subject to the selection of longer cycles. Interestingly, for some larger instances the relative gap of the heuristics is very close to the values obtained by the ILP solver, suggesting that the use of heuristics may be a good alternative for some classes of instances or could help the solver find lower bounds quickly. It is worth noting that the GREEDY-DENSITY heuristic found solutions with a gap smaller than 1% for 38% of the instances with 25 genes.
In a single instance (25 genes, r=2), the gap between the best solution found and the upper bound was much higher, both for the ILP solver and for the heuristics. This particular instance is precisely the one with the largest number of edges in $GS_{\sigma}(A,B)$ in the dataset. This may indicate that a moderate increase in the degree of vertices (from 1.3 on average to 1.8 in this case) may result in much harder instances for the solver and the heuristics, as after half of the time limit the solver attained no significant improvement on the solutions found, and the heuristics returned solutions with an even higher gap.
We also simulated 10 genomes of sizes around 50, with PAM distance of 15 from the root to the deepest leaf, therefore evolutionarily "closer" to each other and for which higher similarity values are expected. For these genomes the default rates were multiplied by ten (10-fold) for Zipfian indels, gene duplications, gene losses, reversals and transpositions, since otherwise there would be no significant difference between them. The exact ILP algorithm found an optimal solution for only 4 of the 45 instances, taking 840.59 s on average. For the remaining instances, where the ILP did not finish within the time limit, the average gap is 329.53%. Regarding the heuristics (Table 2), which all run in negligible time, GREEDY-DENSITY outperforms the others, with an average gap of 163% compared to the best upper bound found by the ILP solver. Surprisingly, the values returned by the greedy heuristics are better than the values obtained by the ILP for these instances. Results again suggest that the ILP could benefit greatly from heuristics by using their results as initial lower bounds. Moreover, for some groups of instances even the heuristics alone can obtain excellent results.
Table 2 Results of experiments for 10 simulated genomes (45 pairwise comparisons) with smaller PAM distance
Although we have no upper bounds against which to compare the results of our heuristics for genome sizes around 1000, they are still very fast. For these genomes we analyze the MAXIMUM-MATCHING algorithm separately afterwards, taking into account for now only the other three heuristics. The average running times are 0.30 s, 15.11 s and 12.16 s for GREEDY-DENSITY, GREEDY-LENGTH and GREEDY-WMIS, respectively, with nevertheless little difference in the results.
However, in 25% of the instances with r=5, the difference from the best to the worst solutions provided by these heuristics varied between 10% and 24%, the best of which were given by GREEDY-DENSITY. That is probably because, instead of prioritizing shorter cycles, GREEDY-DENSITY attempts to balance both normalized weight and length of the selected cycles. The average running times for the instances with r=5 are 1.84 s, 76.02 s and 80.67 s for GREEDY-DENSITY, GREEDY-LENGTH and GREEDY-WMIS, respectively.
Still for genomes of size around 1000 and r=5, the MAXIMUM-MATCHING heuristic is the fastest, with an average running time of 1.70 s. Despite being the best heuristic for a few cases, the similarity value given by this heuristic is merely 27% of the value given by the best heuristic, on average. While the MAXIMUM-MATCHING heuristic is clearly not useful for calculating similarity values, these results show how significant it is to choose cycles with the best normalized weights instead of prioritizing edges with the best weights in the gene similarity graph for the FFDCJ-SIMILARITY problem. Since this property of MAXIMUM-MATCHING somehow reflects the strategy of family-based comparative genomics, this observation indicates an advantage of family-free analysis compared to family-based analysis.
To better understand how cycles scale, we generated 5-fold larger instances (up to 10000 genes), running the GREEDY-DENSITY heuristic. Results show that most of the cycles found are of short lengths compared to the genome sizes and in practice their number does not increase exponentially, providing some insight on why our heuristics are fast.
Finally, as expected, experiments for genomes simulated with different parameters indicate the FFDCJ similarity decreases as the PAM distance or the rates of genome level events increases (data not shown).
Real genome data
To show the applicability of our methods to real data, we obtained from NCBI the protein-coding genes of the X chromosomes of human (Homo sapiens, assembly GRCh38.p7), house mouse (Mus musculus, assembly GRCm38.p4 C57BL/6J), and Norway rat (Rattus norvegicus, assembly Rnor_6.0). In mammals, the set of genes on the X chromosome has been reasonably conserved throughout the last several million years [30], although their order has been disrupted many times.
Since protein sequences are used to obtain the similarity scores (with the help of the BLASTp tool) instead of nucleotide sequences, 76 genes from the rat genome were excluded because no protein sequence was available. In addition, when a gene has multiple isoforms, only the longest is kept. The numbers of genes in the resulting genomes were 822, 953 and 863 for human, mouse and rat, respectively, some of them removed from the pairwise genome comparison due to the pruning process of FFGC.
Table 3 shows, as expected, that the two rodent X chromosomes are more similar to each other than either of them is to the human X chromosome. The values returned by the greedy heuristics are very similar, with GREEDY-LENGTH being the fastest. MAXIMUM-MATCHING results are less than 5% distant from the results of the greedy heuristics, which indicates that the choice of cycles has some influence but does not dominate the similarity values obtained for these instances. Matching sizes are similar for all heuristics, showing that about 8% of the genes of the smaller genomes could not be matched to any gene of the other genome and had to be removed, that is, they are disposable genes.
Table 3 Results for heuristics on real genomes
In this paper we developed methods for computing the (NP-hard) family-free DCJ similarity, which is a large-scale rearrangement measure for comparing two given genomes. We presented an exact algorithm in the form of an integer linear program and extended our previous hardness result by showing that the problem is APX-hard and has a lower bound of 22/21 for its approximation ratio. We then developed four heuristic algorithms and showed that they perform well while having reasonable running times, also for genomes of realistic size.
Our initial experiment on real data can be considered a proof of concept. In general, the computational results of this paper can be used to study more systematically the applicability of the DCJ similarity measure in various contexts. One important point to be investigated is whether, differently from parsimonious distance measures that usually only hold for closely related genomes, a genomic similarity would allow good comparisons of more distant genomes as well. Fine-tuning of both the data preparation and the objective function may be necessary, though.
For example, one drawback of the function $s_{\mathrm{FFDCJ}}$ as defined in Eq. 3 is that distinct pairs of genomes might give family-free DCJ similarity values that cannot be compared easily, because the value of $s_{\mathrm{FFDCJ}}$ varies between 0 and $|M|$, where $M$ is the matching giving rise to $s_{\mathrm{FFDCJ}}$. Therefore some kind of normalization would be desirable. A simple approach could be to divide $s_{\mathrm{FFDCJ}}$ by the size of the smaller genome, because this is a trivial upper bound for $|M|$. Moreover, it can be applied as a simple postprocessing step, keeping all theoretical results of this paper valid. A better normalization, however, might be to divide by $|M|$ itself. An analytical treatment here seems more difficult, though. Therefore we leave this and the application to multiple genomes in a phylogenetic context as an open problem for future work.
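Written out, the simple normalization suggested above would read as follows, where $\mathcal{A}$ and $\mathcal{B}$ denote the gene sets of the two genomes and the symbol $\hat{s}_{\mathrm{FFDCJ}}$ is ours, introduced only for illustration:

$$\hat{s}_{\mathrm{FFDCJ}}(A,B) \;=\; \frac{s_{\mathrm{FFDCJ}}(A,B)}{\min\left(|\mathcal{A}|,\,|\mathcal{B}|\right)} \;\in\; [0,1].$$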
Other questions that can be studied in the future are the relationships between family-based and family-free genomic similarity measures in general.
Sankoff D. Edit distance for genome comparison based on non-local operations. In: Proc. of CPM 1992. LNCS, vol. 644. 1992. p. 121–35.
Sankoff D. Genome rearrangement with gene families. Bioinformatics. 1999; 15(11):909–17.
Bafna V, Pevzner P. Genome rearrangements and sorting by reversals. In: Proc. of FOCS 1993. Palo Alto: IEEE: 1993. p. 148–57.
Hannenhalli S, Pevzner P. Transforming men into mice (polynomial algorithm for genomic distance problem). In: Proc. of FOCS 1995. Milwaukee: IEEE: 1995. p. 581–92.
Yancopoulos S, Attie O, Friedberg R. Efficient sorting of genomic permutations by translocation, inversion and block interchanges. Bioinformatics. 2005; 21(16):3340–6.
Bergeron A, Mixtacki J, Stoye J. A unifying view of genome rearrangements In: Bucher P, Moret BME, editors. Proc. of WABI 2006. LNBI, vol. 4175. Zurich: Springer: 2006. p. 163–73.
Braga MDV, Willing E, Stoye J. Double cut and join with insertions and deletions. J Comput Biol. 2011; 18(9):1167–84.
Bryant D. The complexity of calculating exemplar distances In: Sankoff D, Nadeau JH, editors. Comparative Genomics. Dortrecht: Kluwer Academic Publishers: 2000. p. 207–11.
Bulteau L, Jiang M. Inapproximability of (1,2)-exemplar distance. IEEE/ACM Trans. Comput. Biol. Bioinf. 2013; 10(6):1384–90.
Angibaud S, Fertin G, Rusu I, Vialette S. A pseudo-boolean framework for computing rearrangement distances between genomes with duplicates. J Comput Biol. 2007; 14(4):379–93.
Angibaud S, Fertin G, Rusu I, Thévenin A, Vialette S. Efficient tools for computing the number of breakpoints and the number of adjacencies between two genomes with duplicate genes. J Comput Biol. 2008; 15(8):1093–115.
Angibaud S, Fertin G, Rusu I, Thévenin A, Vialette S. On the approximability of comparing genomes with duplicates. Journal of Graph Algorithms and Applications. 2009; 13(1):19–53.
Shao M, Lin Y, Moret B. An exact algorithm to compute the DCJ distance for genomes with duplicate genes. In: Proc. of RECOMB 2014. LNBI. Pittsburg: Springer: 2014. p. 280–292.
Doerr D, Thévenin A, Stoye J. Gene family assignment-free comparative genomics. BMC Bioinformatics. 2012; 13(Suppl 19):3.
Braga MDV, Chauve C, Doerr D, Jahn K, Stoye J, Thévenin A, Wittler R. The potential of family-free genome comparison In: Chauve C, El-Mabrouk N, Tannier E, editors. Models and Algorithms for Genome Evolution. London: Springer: 2013. p. 287–307. Chap. 13.
Durrett R, Nielsen R, York TL. Bayesian estimation of genomic distance. Genetics. 2004; 166(1):621–9.
Martinez FV, Feijão P, Braga MDV, Stoye J. On the family-free DCJ distance and similarity. Algoritm Mol Biol. 2015; 10:13.
Rubert DP, Medeiros GL, Hoshino EA, Braga MDV, Stoye J, Martinez FV. Algorithms for computing the family-free genomic similarity under DCJ. In: Proc. of RECOMB-CG 2017. LNBI. Barcelona: Springer International Publishing: 2017. p. 76–100.
Chen Z, Fu B, Xu J, Yang B, Zhao Z, Zhu B. Non-breaking similarity of genomes with gene repetitions. In: Proc. of Combinatorial Pattern Matching (CPM 2007). Heidelberg: Springer: 2007. p. 137–43.
Rubert DP, Feijão P, Braga MDV, Stoye J, Martinez FV. Approximating the DCJ distance of balanced genomes in linear time. Algoritm Mol Biol. 2017; 12:3.
Shao M, Lin Y. Approximating the edit distance for genomes with duplicate genes under DCJ, insertion and deletion. BMC Bioinformatics. 2012; 13(Suppl 19):13.
Munkres J. Algorithms for the assignment and transportation problems. J SIAM. 1957; 5(1):32–38.
Edmonds J, Karp RM. Theoretical improvements in algorithmic efficiency for network flow problems. J ACM. 1972; 19(2):248–64.
Tomizawa N. On some techniques useful for solution of transportation network problems. Networks. 1971; 1(2):173–94.
Hawick KA, James HA. Enumerating circuits and loops in graphs with self-arcs and multiple-arcs, Technical Report CSTN-013: Massey University; 2008.
Johnson D. Finding all the elementary circuits of a directed graph. SIAM J Comput. 1975; 4(1):77–84.
Berman P. A d/2 approximation for maximum weight independent set in d-claw free graphs In: Halldórsson MM, editor. Proc. of SWAT 2000. Bergen: Springer-Verlag Berlin Heidelberg: 2000. p. 214–9.
Dalquen DA, Anisimova M, Gonnet GH, Dessimoz C. Alf – a simulation framework for genome evolution. Mol Biol Evol. 2012; 29(4):1115.
Doerr D. Family Free Genome Comparison (FFGC). 2017. https://bibiserv2.cebitec.uni-bielefeld.de/ffgc. Accessed 31 Jan 2018.
Ohno S. Sex Chromosomes and Sex-linked Genes. Endocrinology, vol. 1. Berlin, Heidelberg: Springer; 2013.
We would like to thank Pedro Feijão and Daniel Doerr for helping us with hints on how to get simulated and real data for our experiments.
The publication cost of this article was funded by the authors' home institutions.
Source code of the algorithm implementations is available from https://git.facom.ufms.br/diego/ffdcj-sim.
About this supplement
This article has been published as part of BMC Bioinformatics Volume 19 Supplement 6, 2018: Proceedings of the 15th Annual Research in Computational Molecular Biology (RECOMB) Comparative Genomics Satellite Workshop: bioinformatics. The full contents of the supplement are available online at https://bmcbioinformatics.biomedcentral.com/articles/supplements/volume-19-supplement-6.
Faculdade de Computação, Universidade Federal de Mato Grosso do Sul, Campo Grande, MS, Brazil
Diego P. Rubert
, Edna A. Hoshino
& Fábio V. Martinez
Faculty of Technology and Center for Biotechnology (CeBiTec), Bielefeld University, Bielefeld, Germany
Marília D. V. Braga
& Jens Stoye
All authors developed the theoretical results and wrote the manuscript. EAH developed the ILP. DPR implemented the algorithms, devised and performed the experimental evaluation. All authors read and approved the final manuscript.
Correspondence to Fábio V. Martinez.
Additional file 1
APX-hardness proof of the FFDCJ-SIMILARITY problem. (PDF 379 kb)
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
Genome rearrangement
Double-cut-and-join
Family-free genomic similarity | CommonCrawl |
Registration & Dinner
Zaumseil
Coffee & Discussions
Climent
Lunch & Discussions
Groenhof
Ruggenthaler
Dinner, Bar & Discussions
Buhmann
Shegai
Sanz-Vicario
Cocktail Dinner, Bar & Discussions
Yuen-Zhou
Vendrell
Martín Cano
Fernández Domínguez
Lemeshko
Mon 09:00-09:45
Claudiu Genes1 and Johannes Feist2
Max Planck Institute for the Science of Light, Erlangen, Germany
Departamento de Física Teórica de la Materia Condensada and Condensed Matter Physics Center (IFIMAC), Universidad Autónoma de Madrid, Madrid, Spain
General conference information and an introduction / overview / perspective of the field.
Exciton-Polariton Relaxation in Single-Walled Carbon Nanotube Networks
Jana Zaumseil1,2
Institute for Physical Chemistry, Universität Heidelberg, Heidelberg, Germany
Centre for Advanced Materials, Universität Heidelberg, Heidelberg, Germany
Semiconducting single-walled carbon nanotubes (SWCNTs) have extraordinary optical and electronic properties, such as large oscillator strength, narrow absorption and emission in the near-infrared and high charge carrier mobilities, that make them an excellent material for strong light-matter coupling and for optically and electrically created exciton-polaritons at room temperature [1,2]. Even charged polaritons based on trions (charged excitons) in a microcavity filled with doped SWCNTs could be demonstrated [3]. However, the underlying relaxation dynamics of SWCNT polaritons have not yet been investigated and it remains unclear under what conditions the fastest relaxation can be expected. We investigate the impact of various parameters, such as emitter density, detuning and excitation wavelength, on the occupation distribution and relaxation of polaritons in metal-clad microcavities with purified (6,5) SWCNTs by angle-resolved reflectivity and photoluminescence measurements. We find that – similar to molecular emitters – intrinsic phonons of the carbon nanotubes, that is the Raman-active D-mode (165 meV) and/or G-mode (197 meV) seem to play an important role for the relaxation dynamics. Furthermore, a high density of nanotubes in a network and thus their interaction with each other appears to enhance relaxation compared to cavities with equivalent Rabi splitting but with well-separated SWCNTs in a matrix. We compare our findings to recent theoretical predictions.
Figure: Schematic of (6,5) single-walled carbon nanotubes distributed in a microcavity and packed as a dense network.
A. Graf et al. Nature Commun. 7, 13078 (2016).
A. Graf et al. Nature Mater. 16, 911 (2017).
C. Möhl et al. ACS Photonics 5, 2074 (2018).
Vibrational polaritons in the ultrastrong coupling regime: spectroscopy and chemistry
Felipe Herrera1,2
Department of Physics, Universidad de Santiago de Chile, Chile
Millennium Institute for Research in Optics, Concepción, Chile
We develop a fully-quantum approach to describe ultrastrong light-matter coupling between anharmonic molecular vibrations and multimode infrared cavities in the PWZ frame. The method uses grid-based vibrational wavefunctions in nuclear coordinate space to construct a multi-mode and multi-level many-body quantum Rabi model. In this talk, we will describe the method and discuss its predictions on the coordinate representation of vibrational polaritons, their spectroscopic properties and their role in ground and excited state chemical reactivity.
Figure: (a) Level scheme for ultrastrong coupling of a Morse oscillator with dissociation energy De and a single-mode infrared cavity. (b) Light-matter coupling between bare anharmonic vibrational manifolds in the Rabi model.
M. Litinskaya and F. Herrera, Vacuum-enhanced optical nonlinearities with organic molecular photoswitches, Phys. Rev. B 99, 041107(R), (2019).
F. Herrera and F. C. Spano, Theory of nanoscale organic cavities: The essential role of vibration-photon dressed states, ACS Photonics 5, 65, (2018).
Cavity-modified ground-state chemical reactivity
Clàudia Climent, Javier Galego, Francisco J. Garcia-Vidal, Johannes Feist
In this talk I will discuss how ground-state chemical reactivity may be modified in a cavity QED scenario [1]. I will first show, for a simplified model molecule, how energy barriers are modified when a molecular vibration and a cavity mode are coupled. With the aid of the cavity Born–Oppenheimer approximation [2], one can then rely on transition state theory to calculate the corresponding reaction rates. This is a very important point since it allows to study realistic molecular systems with quantum chemistry methods. I will then discuss the absence of resonance effects on the reaction rates and how, within perturbation theory, one can understand the energetic modifications we observe making a connection to Casimir-Polder interactions. In the second part of the talk, I will focus on specific examples of interest for chemists, which we have studied by interfacing the above theory with quantum chemistry calculations [3]. In particular, I will discuss how nucleophilic substitution reactions can be catalyzed with plasmonic nanocavities. I will finally show how our proposal can serve as a novel strategy to modify static molecular properties, focusing on the transition temperature T1/2 of spin-crossover transition metal complexes.
J. Galego, C. Climent, F. J. Garcia-Vidal, J. Feist, Phys. Rev. X 9, 021057 (2019).
J. Flick, H. Appel, M. Ruggenthaler, A. Rubio, J. Chem. Theory Comput. 13, 1616 (2017).
C. Climent, J. Galego, F. J. Garcia-Vidal, J. Feist, Angew. Chem. Int. Ed. 58, 8698 (2019).
Semi-classical Molecular Dynamics Simulations of Polaritonic Chemistry
Gerrit Groenhof1, Johannes Feist2, Jussi Toppari1
University of Jyväskylä, Finland
Universidad Autónoma de Madrid, Spain
When photoactive molecules interact strongly with confined light modes in optical cavities new hybrid light-matter states form, the polaritons. These polaritons are coherent superpositions of excitations of the molecules and of the cavity mode. Because light-matter hybridization can change the potential energy surface with respect to the bare molecules, polaritons are considered a promising paradigm for controlling photochemical reactions. To gain insight into the effects of strong coupling on the reactivity of molecules, we have extended the Tavis-Jaynes-Cummings model to an all-atom hybrid quantum chemistry / molecular mechanics approach, capable of simulating thousands of molecules in optical cavities. After presenting our model, we will discuss recent simulations that illustrate how the dynamics of large ensembles of molecules is affected by their strong interaction with the confined light modes of the cavity.
Modelling organic polaritons from weak to strong coupling
Peter Kirton
Atominstitut, TU Wien, Vienna, Austria
I will present analysis of a microscopic model that allows us to explore the crossover from weak to strong matter-light coupling of organic polaritons [1]. By considering a nonequilibrium Dicke-Holstein model, including both strong coupling to vibrational modes and strong matter-light coupling, I will explain the phase diagram of this model in the thermodynamic limit. I will discuss the mechanism of polariton lasing, uncovering a process of self-tuning, and identify the relation and distinction between regular dye lasers and organic polariton lasers.
If time allows I will also present an overview of our new numerical method for exactly simulating small quantum systems strongly coupled to their environment which could have potential application to simulating molecular devices [2].
Figure: Phase diagram of a model of organic microcavity lasing, showing the evolution from weak to strong light-matter coupling.
A Strashko, P Kirton, and J Keeling, Phys. Rev. Lett. 121, 193601 (2018)
A Strathearn, P Kirton, D Kilda, J Keeling and BW Lovett, Nat. Commun. 9, 3322 (2018)
Ab-initio Quantum Electrodynamics: Beyond the Model Paradigm
Michael Ruggenthaler1, Christian Schäfer1, Rene Jestädt1, Vasil Rokaj1, Davis Welakuh1, Markus Penz1, Johannes Flick2, Michael Sentef1, Heiko Appel1 and Angel Rubio1,3,4
Max-Planck-Institute for the Structure and Dynamics of Matter, Hamburg, Germany
Department of Chemistry, Harvard University Cambridge, USA
Nano-Bio Spectroscopy Group and ETSF, UPV, San Sebastian, Spain
Center for Computational Quantum Physics, Flatiron Institute, New York, USA
In this talk I will give a brief introduction into non-perturbative quantum electrodynamics (QED) for low-energy physics [1], argue why it is important to go beyond model-system approaches to coupled light-matter systems [2,3,4], and show recent advances in solving the resulting quantum-field equations non-perturbatively and accurately [5,6]. More specifically, I will highlight how physical observables change upon treating all major degrees of freedom (electrons, nuclei and photons) in coupled light-matter systems self-consistently, provide some representative examples of changes in physical [7] and chemical [8] properties, and stress how by using first principles long-standing important problems can be solved [4].
If time permits, I will also discuss some further theoretical [9,10] and mathematical [11,12,13] advances in describing coupled fermion-boson systems from first-principles.
Figure: Snapshot of a coupled photon-electron-nucleus simulation of two sodium (297 atoms each) dimers perturbed by a weak external laser pulse (uppermost panel), the induced current (upper left), difference electron-localization function (upper right), difference electric field in z-direction (lower left) and Maxwell-energy difference (lower right). Details are explained in Ref.[5].
M. Ruggenthaler, N. Tancogne-Dejean, J. Flick, H. Appel and A. Rubio, Nature Reviews Chemistry 2, 0118 (2018).
V. Rokaj, D.M. Welakuh, M. Ruggenthaler and A. Rubio, J. Phys. B 51 (3), 034005 (2018).
C. Schäfer, M. Ruggenthaler and A. Rubio, Phys. Rev. A 98 (4), 043801 (2018).
V. Rokaj, M. Penz, M.A. Sentef, M. Ruggenthaler and A. Rubio, arXiv preprint arXiv:1808.02389 (2018).
R. Jestädt, M. Ruggenthaler, M.J.T. Oliveira, A. Rubio and H. Appel, arXiv preprint arXiv:1812.05049 (2018).
J. Flick, D.M. Welakuh, M. Ruggenthaler, H. Appel and A Rubio, arXiv preprint arXiv:1803.02519 (2018).
M.A. Sentef, M. Ruggenthaler and A. Rubio, Science Advances 4 (11), eaau6969 (2018).
C. Schäfer, M. Ruggenthaler, H. Appel and A. Rubio, PNAS 116 (11), 4883 (2019).
S.E.B. Nielsen, C. Schäfer, M. Ruggenthaler and A. Rubio, arXiv preprint arXiv:1812.00388 (2018).
F. Buchholz, I. Theophilou, S.E.B. Nielsen, M. Ruggenthaler and A Rubio, arXiv preprint arXiv:1812.05562 (2018).
A. Laestadius, M. Penz, E.I. Tellgren, M. Ruggenthaler, S. Kvaal and T. Helgaker, J. Chem. Phys. 149 (16), 164103 (2018).
K. Giesebertz and M. Ruggenthaler, Phys. Rep. in press (2019).
M. Penz, A. Laestadius, E.I. Tellgren and M. Ruggenthaler arXiv preprint arXiv:1903.09579 (2019).
Tue 15:30-16:15
Macroscopic quantum electrodynamics: Engineering atom–field interactions
Stefan Yoshi Buhmann1,2
Institute of Physics, University of Freiburg, Freiburg, Germany
Freiburg Institute for Advanced Studies (FRIAS), Freiburg, Germany
Macroscopic quantum electrodynamics describes the interaction of light (virtual or real photons) with microscopic (atoms, molecules, quantum dots …) and macroscopic objects (bodies, media, surfaces …) [1]. Within this framework, the quantum electromagnetic field is generated by the noise polarisation inside the present bodies and media and propagated by means of the classical Green tensor. The fundamental polariton-like field–matter excitations can be described by Bosonic creation and annihilation operators, and the field interacts with microscopic particles via the multipolar coupling scheme.
I will give a brief introduction to this formalism and some of its recent applications, including the enhancement of quantum friction by surface plasmons [2]; collective and non-additive atom–surface and atom–field interactions [3,4,5]; environment-assisted resonance energy transfer [6]; and photonic Bose–Einstein condensation.
Figure: Schematic view of macroscopic QED
S. Scheel and S. Y. Buhmann, Acta Phys. Slovaca 58(5), 675 (2008).
S. Scheel and S. Y. Buhmann, Phys. Rev. A 80(4), 042902 (2009).
S. Fuchs, R. Bennett, R. V. Krems, and S. Y. Buhmann, Phys. Rev. Lett. 121(8), 083603 (2018).
S. Fuchs and S. Y. Buhmann, Europhys. Lett. 124(3), 34003 (2018).
S. Esfandiarpour, H. Safari, R. Bennett, and S. Y. Buhmann, J. Phys. B 51(9), 094004 (2018).
J. L. Hemmerich, R. Bennett, and S. Y. Buhmann, Nature Commun. 9, 2934 (2018).
Suppression of photo-oxidation of organic dyes under strong plasmon-molecule coupling
Timur Shegai
Chalmers University of Technology, Göteborg, Sweden
In this talk, I will focus on our recent experimental observation of significant slow-down of photobleaching of organic dyes under strong coupling conditions [1]. The specific organic chromophores that we use in our experiments are TDBC cyanine dyes in their J-aggregated form. As plasmonic nanostructures, we used single crystalline silver nanoprisms (about 70 nm side length and 10 nm thickness). Upon mixture of J-aggregates with plasmonic nanoparticles, we obtain hybrid systems displaying Rabi splitting of about 200 meV (Fig. 1). We further study the effect of photobleaching as a function of Rabi splitting, plasmon-exciton detuning, and excitation wavelength on the photobleaching rate. All experiments are performed on the individual nanoparticle level. We find that photobleaching is slower for hybrids with higher Rabi splitting, thus supporting the idea of "collective protection" in the strong coupling regime [2]. Furthermore, I will discuss our recent progress on transition metal dichalcogenide materials (TMDCs) and their use for light-matter interactions. I will in particular focus on the self-hybridization in multilayer flakes [3] and nanodisks [4] fabricated out of WS2, where peculiar anapole-exciton strong coupling was observed.
Figure: Figure 1. (a) Graphic sketch of the system under study. J-agg = J-aggregates. (a) Dark-field scattering spectra of the strongly coupled hybrid system (red), scattering spectrum from uncoupled J-aggregates (orange), and uncoupled individual plasmonic nanoprism (blue). The inset shows an SEM image of the corresponding nanoprism. Scale bar, 100 nm. (c) Schematic diagram of a photobleaching reaction in the uncoupled molecular system and its possible modification in the strong coupling regime (light gray lines indicate upper and lower polaritonic states). ROS = reactive oxygen species; CT = charge transfer; ET = energy transfer.
Munkhbat B., et al, Science Advances 4 (7), eaas9552 (2018)
Galego J., et al, Nat. Commun. 7, 13841 (2016)
Munkhbat B., et al, ACS Photonics, 2019, 6 (1), pp 139–147
Verre R., et al, arXiv preprint arXiv:1812.04076
Quantum Control with Quantum Light of Non-Adiabaticity in Molecules
András Csehi1, Gábor J. Halász1, Ágnes Vibók1, and Markus Kowalewski2
Department of Theoretical Physics, University of Debrecen, Hungary
Department of Information Technology, University of Debrecen, Hungary
Department of Physics, Stockholm University, Sweden
Coherent control in molecules is usually done with laser fields. The electric field is described classically and control over the time evolution of the system is achieved by shaping the phase and amplitude of laser pulses in the time or frequency domain. Moving from a classical description to a quantum description of the light field enables us to engineer the quantum state of light and allows us to manipulate the light-matter interaction in phase space instead. In this contribution we will demonstrate the different principles of control with quantum light on the avoided crossing in lithium fluoride. Using a quantum description of light together with the non-adiabatic couplings and vibronic degrees of freedom opens up new ways to look at quantum control. We will show the deviations from control with purely classical light fields and how back-action of the light field becomes important in the few-photon regime.
M. Kowalewski, K. Bennett, S. Mukamel, J. Phys. Chem. Lett., 7, 2050 (2016)
J.F. Triana, D. Pelaez, J.L. Sanz-Vicario, J. Phys. Chem. A, 122, 2266 (2018)
Ab initio polariton dynamics of diatomic polar molecules in quantum cavities
José L. Sanz-Vicario and Johan F. Triana
Instituto de Física, Universidad de Antioquia, Medellín, Colombia
Molecular polaritonics, where the quantum nature of light plays a crucial role, is making a significant breakthrough as a relatively new branch of molecular and optical physics, triggered by new theoretical and experimental advances that involve atoms and molecules in QED cavities, phonon polariton nanoresonators, photosynthetic and light harvesting systems, etc. Many of these systems and their processes have been explored mostly by physicists, extracting the coarse-grained relevant physics from simple Hamiltonian models in quantum optics.
More rigorous ab initio treatments of the molecular photodynamics with quantum light need to bring together well-established methods from quantum chemistry with efficient multimode time propagation methods to solve the complex dynamical equations. Molecular wave packet dynamics using classical fields within the dipole approximation has been the standard tool for decades to understand the inner workings of molecular photoreactivity involving excited states and conical intersections (CoIns) [1]. In this respect, similar to the classical case, quantum light can also induce potential crossings or CoIns among light-dressed states. These light-induced non-adiabatic effects compete with the permanent non-adiabatic couplings at avoided crossings or at CoIns, usually producing much faster photochemical cycles. Moreover, field-induced avoided crossings and CoIns are more than a theoretical tool to interpret results: their physical reality can be established in experiments, as has already been done with classical fields [2].
On this occasion we discuss the ab initio polariton dynamics of the LiF molecule inserted in QED cavities. We conclude from our showcase that, in spite of the remarkably good agreement between the population dynamics obtained with a classical field and with quantum radiation in the form of Fock states (for which a simple scaling rule applies), this agreement no longer holds for other quantum states of radiation represented as superpositions of Fock states (coherent, squeezed states, etc.) [3]. This brings us to the question of the general conditions under which a classical and a quantum field (whatever its form) interacting with an atomic or molecular system provide an equivalent dynamical response. We also propose a three-state pump-probe laser experiment for LiF passing through a cavity to make the formation of light-induced crossings evident through the enhancement of the observable dissociation yields in LiF fragmentation channels [4] (see figure, where several peaks appear at given pump-probe time delays, related to the passage of the polariton wave packet across the light-induced crossing).
Finally, the effect of adding the rotational degree of freedom to the LiF molecule, which amounts to introducing a rotational light-induced conical intersection [5], is analyzed. We find that the rotationally induced CoIns in diatomics do not show the prototypical behavior of standard undressed CoIns in polyatomic molecules.
Figure: Final population of the LiF $^1\Pi$ excited state leading to dissociation as a function of the pump-probe time delay $\tau$ for two cavity mode frequencies $\omega_c$ using different probe laser frequencies. Numerical values of delays for the first three maxima in the dissociation yields are indicated.
W. Domcke, H. Koppel and D. R. Yarkony, Conical Intersections: Electronic Structure, Dynamics and Spectroscopy, World Scientific, Singapore (2004)
A. Natan et al., Phys. Rev. Lett. 116, 143004 (2016)
J. F. Triana, D. Peláez and J. L. Sanz-Vicario, J. Phys. Chem. A 122, 2266 (2018)
J. F. Triana and J. L. Sanz-Vicario, Phys. Rev. Lett. 122, 063603 (2019)
C.-C. Shu et al., J. Phys. Chem. Lett. 8, 1 (2017)
Laser refrigeration using exciplex resonances in gas filled hollow-core fibres
Christian Sommer1, Nicolas Y. Joly1,2, Helmut Ritsch3, Claudiu Genes1
University of Erlangen-Nuremberg, Erlangen, Germany
Universität Innsbruck, Innsbruck, Austria
We theoretically study prospects and limitations of a new route towards macroscopic scale laser refrigeration based on exciplex-mediated frequency up-conversion in gas filled hollow-core fibres. Using proven quantum optical rate equations we model the dynamics of a dopant-buffer gas mixture filling an optically pumped waveguide. In the particular example of alkali-noble gas mixtures, recent high pressure gas cell setup experiments have shown that efficient kinetic energy extraction cycles appear via the creation of transient exciplex excited electronic bound states. The cooling cycle consists of absorption of lower energy laser photons during collisions followed by blue-shifted spontaneous emission on the atomic line of the alkali atoms. For any arbitrary dopant-buffer gas mixture, we derive scaling laws for cooling power, cooling rates and temperature drops with varying input laser power, dopant and buffer gas concentration, fibre geometry and particularities of the exciplex ground and excited state potential landscapes.
Figure: Cooling process. Dynamics of a M-X collision process showing a ground state X atom (modelled as a normalized Gaussian wave packet of group velocity $v_0$ and initial spread $\delta v$) approaching an atom M initially in the ground state. Around the turning point, exciplex ground-excited transitions are induced by a laser at frequency $\omega_{\text{L}}$ in a time window $\tau$. Following absorption at frequency $\omega_{\text{L}} < \omega_0$ an exciplex is formed with an excited state lifetime of $\tau_{\gamma}=\gamma^{-1}$. The outgoing wavepacket is composed of a small component containing excited state contribution owed to successful absorption of a photon (magnified in the illustration by a factor $10^4$) and a large amplitude ground state components. Spontaneous emission at rate $\gamma$ leads to an effective energy loss of $\Omega=\omega_0-\omega_{\text{L}}\leq D_e^{(e)}$.
C. Sommer, N.Y. Joly, H. Ritsch and C. Genes, arXiv:1902.01216 (2019)
Chemistry with vibrational polaritons
Joel Yuen-Zhou
University of California San Diego, CA, USA
When large numbers of molecules strongly couple to optical cavity modes, new excited states with hybrid light and matter character (polaritons) emerge. A promise of the new field of polariton chemistry is thus that one will be able to design optical cavities that, by merely hosting molecules within them, alter their chemical processes without invoking costly synthetic modifications to the molecules, instead inducing chemical modifications via the polariton states. However, a feature that is not emphasized enough is that polariton states appear at the price of macroscopic reservoirs of dark states that are not coupled to light and whose chemical dynamics are essentially identical to those of the bare molecules in the thermodynamic limit. In fact, for infrared microcavity systems, there are about N=10^6-10^10 dark states per polariton state, so the dark states should dominate the thermodynamics of these systems. This observation prompts the question: how can macroscopic changes in chemical kinetics be observed under vibrational strong coupling in thermal equilibrium conditions without external photon pumping [1]? In this talk, I will highlight some possible mechanisms that lend themselves to polaritonic control under thermally-activated conditions [2].
If time permits, I will also highlight some work on nonlinear optical properties of vibrational polaritons [3-5] and remote controlling chemical reactions between different cavities [6].
Figure: Optical pumping of the vibrations of a "remote catalyst" in one cavity activates the cis-trans HONO isomerization in another cavity.
A. Thomas, L. Lethuillier-Karl, K. Nagarajan, R. M. A. Vergauwe, J. George, T. Chervy, A. Shalabney, E. Devaux, C. Genet, J. Moran, T. W. Ebbesen, Science 363, 615−619 (2019)
J. Campos-González-Angulo, R. F. Ribeiro, and J. Yuen-Zhou, arXiV:1902.10264 (2019)
B. Xiang, R. F. Ribeiro, A. D. Dunkelberger, J. Wang, Y. Li, B. S. Simpkins, J. C. Owrutsky, J. Yuen-Zhou, W. Xiong, Proc. Nat. Acad. Sci. 201722063 (2018)
R. F. Ribeiro, A. D. Dunkelberger, B. Xiang, W. Xiong, B. S. Simpkins, J. C. Owrutsky, J. Yuen-Zhou, J. Phys. Chem. Lett. 9, 13, 3766–3771 (2018)
B. Xiang, R. F. Ribeiro, Y. Li, A. D. Dunkelberger, B. B. Simpkins, J. Yuen-Zhou, W. Xiong, arXiV:1901.05526 (2019)
M. Du, R. F. Ribeiro, J. Yuen-Zhou, Chem (2019)
Collective Non-Adiabatic Interactions through Light-Matter Coupling in a Cavity
Oriol Vendrell
To steer and to control the outcome of ultrafast photo-chemical reactions has been for a long time one of the main goals of laser chemistry. In this talk we address the possibility to modify the dynamics of photo-physical and photo-chemical processes in a molecular ensemble strongly coupled to a quantised electromagnetic mode, e.g., to a cavity mode.
The dynamics of such molecular ensembles is approached from a theoretical perspective and on the basis of full-dimensional quantum dynamics simulations involving all nuclei, electronic states and quantised electromagnetic modes of the corresponding system. The emergence of collective effects is illustrated by comparing the non-radiative relaxation dynamics of one and up to a few coupled molecules in a cavity.
In the regime in which a few molecules are strongly coupled, the non-radiative relaxation rate from the upper to the lower polaritonic states is found to increase with the number of molecules. This can be explained from the perspective that the coupling among bright and dark polaritonic states through nuclear motion constitutes a special case of (pseudo-)Jahn-Teller interactions. This effect leads, in the case of a photo-chemical reaction started from the upper polaritonic state of the hybrid system, to a time-delay in the nuclear wave-packet evolution as compared to the same reaction for bare molecules or started from the lower polaritonic state.
Subsequently, the interplay between cavity-induced relaxation channels and naturally occurring conical intersections in polyatomic organic molecules is discussed, comparing the time-evolution of wavepackets transferred to either the upper or lower polaritonic states with the dynamics in absence of cavity coupling.
Figure: Schematic representation of the molecular Tavis-Cummings Hamiltonian highlighting its non-zero entries and emphasizing its arrowhead-matrix form.
O. Vendrell, Chem. Phys., 509, 55-65 (2018)
O. Vendrell, Phys. Rev. Lett., 121, 253001 (2018)
Enhancing the quantum coherence of organic molecules with nanophotonic structures
Diego Martin-Cano
The optical coherence is inherent in the quantum phenomena that arise from the interaction between light and matter. Maintaining and controlling such a property is crucial for the development of quantum optical technologies, since it is intrinsically involved in the generation of nonclassical features that allow surpassing the capabilities of classical systems. An experimental approach to access this quantum coherence is to let light interact with single electronic transitions available in organic molecules at optical frequencies. However, the coherence of these transitions is commonly hindered by poor optical matching with the environment and by vibrational interactions arising from the organic crystals that host the molecular impurities. In this talk I present some of our theoretical attempts to propose nanophotonic structures that counteract such loss of quantum coherence [1,2] and to provide fundamental measures to test it both at the single and multiple emitter level [3,4].
D. Wang, H. Kelkar, D. Martin-Cano, D. Rattenbacher, A. Shkarin, T. Utikal, S. Götzinger, V. Sandoghdar, Nature Phys. 15, 483 (2019).
B. Gurlek, V. Sandoghdar, D. Martín-Cano, ACS Photon., 5, 456 (2018).
D. Martín-Cano, H.R. Haakh, K. Murr, M. Agio, Phys. Rev. Lett., 113, 263605 (2014).
H. Haakh, D. Martín-Cano, ACS Photon., 2, 1686 (2015).
Plasmon-Exciton Polaritons at the Single-Molecule Level
Antonio I. Fernández-Domínguez
Plasmonic nanostructures have become promising candidates for polaritonic cavities, as they enable light-matter coupling strengths well beyond those provided by semiconductor devices. These nanocavities sustain electromagnetic modes with effective volumes much smaller than optical wavelengths. As demonstrated by recent experimental reports [1-2], this allows the formation of polaritons at the single-molecule level and at room temperature. In this context, my talk will be structured around the simple and well-known expression for the Rabi frequency, \begin{equation} \Omega_{\rm R}=\left(\boldsymbol{\mu}\cdot\boldsymbol{E_{\rm SP}}\right)\sqrt{N}, \end{equation} and will present our recent theoretical work on the three terms that comprise it. First, I will introduce our analytical description of the full richness of the electromagnetic spectrum supported by plasmonic cavities, $\boldsymbol{E_{\rm SP}}$, and will reveal the geometric and material conditions most convenient for the onset of strong coupling [3]. Next, I will show the remarkable robustness of far-field photon correlations against the increasing number, $N$, of molecules interacting with the nanocavity, a promising result towards the realization of single-photon sources beyond the single-emitter regime [4]. Finally, I will discuss how the tightly confined nature of surface plasmons leads to light-matter coupling phenomena revealing molecular responses beyond the dipole, $\mu$, approximation. Namely, the formation of polaritons involving light-forbidden exciton transitions [5] or the emergence of quasi-chiral interactions in circularly polarized emitters [6].
Figure: A molecule, modelled as a three-level system supporting a light-allowed and light-forbidden exciton transition, is placed at the gap of a metallic nanoparticle-on-a-mirror cavity.
R. Chikkaraddy et al. Nature 535, 127 (2016).
H. Gross, et al. Sci. Adv. 4, eaar4906 (2018).
R.-Q. Li et al. Phys. Rev. Lett. 117, 107401 (2016).
R. Sáez-Blázquez et al. Optica 4, 1363 (2017).
A. Cuartero-González et al. ACS Photonics 5, 3415 (2018).
C. A. Downing et al. Phys. Rev. Lett. 122, 057401 (2019).
Quasiparticle approach to molecules rotating in quantum solvents
Mikhail Lemeshko
IST Austria, Am Campus 1, Klosterneuburg, Austria
Recently we have predicted a new quasiparticle - the angulon - which is formed when a quantum impurity (e.g. a molecule, atom, or electron) exchanges its angular momentum with a many-particle environment (such as lattice phonons or collective excitations in a liquid) [1,2]. Soon thereafter we obtained strong evidence that angulons are formed in experiments on molecules trapped inside superfluid helium nanodroplets [3].
In my talk, I aim to introduce the concept of angulon quasiparticles and to demonstrate how complex problems of far-from-equilibrium many-body dynamics can be simplified using this concept. In addition, I will describe novel physical phenomena that arise in molecules interacting with superfluid helium [1,5,6], as well as possible connections between matrix isolation spectroscopy and non-equilibrium magnetism.
R. Schmidt, M. Lemeshko, Phys. Rev. Lett. 114, 203001 (2015)
R. Schmidt, M. Lemeshko, Phys. Rev. X 6, 011012 (2016)
M. Lemeshko, Phys. Rev. Lett., 118, 095301 (2017); Viewpoint: Physics 10, 20 (2017)
B. Shepperson, A. A. Sondergaard, L. Christiansen, J. Kaczmarczyk, R. E. Zillich, M. Lemeshko, H. Stapelfeldt, Phys. Rev. Lett. 118, 203203 (2017)
E. Yakaboylu, M. Lemeshko, Phys. Rev. Lett. 118, 085302 (2017)
E. Yakaboylu, A. Deuchert, M. Lemeshko, Phys. Rev. Lett. 119, 235301 (2017)
Numerical simulations of ultra-cold atom experiments: Applications to molecular polaritonics?
Johannes Schachenmayer
CNRS, IPCMS (UMR 7504), ISIS (UMR 7006), Université de Strasbourg, Strasbourg, France
In recent years rapid developments in the community of ultra-cold atom physics have led to experimental platforms for studying strongly correlated many-body quantum physics in clean and controlled environments. Nowadays, experiments allow access to non-equilibrium dynamics on a full many-body Hilbert space, and operate in regimes where exact diagonalization is clearly impossible due to the exponential growth of complexity with system size. Nevertheless, several numerical techniques have been developed that make it possible to still reproduce experimental measurements.
Here, I will review numerical approaches that have been very successful in the past years, in particular concepts relying on matrix product states in one-dimensional systems. For the higher dimensional situations we recently developed a semi-classical technique based on the truncated Wigner approximation (DTWA, [1]). This method has been remarkably successful in re-producing quench-dynamics for many-body quantum spin models. In particular I will present a comparison with an experimental setup of ultra-cold Chromium atoms trapped in an optical lattice [2]. I will discuss possible applications of this semi-classical technique to systems with light-matter coupling in driven-dissipative environments.
Figure: Density matrix elements for 7 Zeeman sub-levels in a single Chromium atom. The atom is one of 40 000 atoms trapped in an optical lattice, all interacting with each other via magnetic dipole-dipole couplings. We developed a new semi-classical method that can simulate the dynamics of this system, and perfectly describes experimental findings. As shown in the figure the method captures the decay of coherences, thus also the build-up of entanglement between the atoms.
J. Schachenmayer, A. Pikovski, and A. M. Rey, Phys. Rev. X 5, 011022 (2015)
S. Lepoutre, J. Schachenmayer, L. Gabardos, B. Zhu, B. Naylor, E. Marechal, O. Gorceix, A. M. Rey, L. Vernac, and B. Laburthe-Tolra, Nat. Comm. 10, 1714 (2019)
Modeling polariton lasing in a multimode cavity
Kristin B. Arnardottir, Jonathan Keeling
University of St Andrews, St Andrews, United Kingdom
Excitons in organic molecules placed in a microcavity can strongly couple to light, leading to the formation of polaritons. Organic polaritons have advantages compared to their inorganic counterparts, particularly for attaining lasing, due to their larger Rabi coupling, but they are more complicated to model. There has been success in modelling lasing and condensation in these systems, focusing on the coupling of a single photon mode with the molecules [1]. However, real cavities do not support only a single photon mode, and to understand questions such as the thermalisation of photon modes, we must consider multimode descriptions.
We investigate a model of polariton lasing, accounting for the effects of photon dispersion, where we have multiple photon modes with varying energies. Realistically, the different modes will all have a non-zero occupation. This means that the molecules in the cavity, which are spread out over space, will be interacting with modes with different wavevectors. Because of this, it is not possible to gauge out the spatial dependence of the phase of the exciton. We examine the implications of this. To fully account for the finite occupation of many modes it is also necessary to go beyond the mean-field theory, so we use an approach based on cumulant expansion. Using these approaches, we can answer questions regarding the thermalisation of polaritons, the occurrence of lasing at finite k, and multimode lasing.
A. Strashko, P. Kirton, and J. Keeling, Phys. Rev. Lett. 121, 193601 (2018).
Long distance heat transfer between molecular systems through a hybrid plasmonic-photonic nano-resonator
S. Mahmoud Ashrafi1, R. Malekfar1, A. R. Bahrampour2 and Johannes Feist3
Department of Physics, Tarbiat Modares University, Tehran, Iran
Department of Physics, Sharif University of Technology, Tehran, Iran
Departamento de Física Teórica de la Materia Condensada and Condensed Matter Physics Center (IFIMAC), Universidad Autónoma de Madrid, Spain
When molecules are placed in the hot spot of a nanoplasmonic cavity, the off-resonant interaction between the molecular vibrations and the localized surface plasmon resonance (LSPR) can be described in the framework of optomechanics [1,2]. We show that for a collection of molecules coupled to the same LSPR, external driving of the plasmon resonance can induce an effective molecule-molecule interaction. This new heat transfer mechanism allows active control of the rate of heat flow between molecules through the intensity and frequency of the driving laser [3]. However, for a single LSPR, the distance between molecules is limited by the spatial extent of the modes, and the plasmon-induced heat transfer is found to be concomitant with significant heating of each molecule separately. We then show that hybrid plasmonic-photonic setups can overcome these limitations. The structure consists of two separated plasmonic nanoantennas located on top of a photonic crystal resonator [4]. The hybrid modes of this resonator can lead to much larger couplings than a photonic crystal mode, but with much higher quality factor than plasmonic modes, and additionally can extend over a large distance (~µm). We show that this leads to effective long-range heat transport that can be actively controlled at low pumping levels, and thus with only little additional heating of the molecules.
Figure: Sketch of a hybrid plasmonic-photonic nano-resonator containing two molecules placed in the hot spot of two bowtie nano-antennas on top of a photonic crystal cavity. The molecules are at different local temperatures, as schematically indicated by the colored circles (red: hot, blue: cold).
P. Roelli, C. Galland, N. Piro, and T. J. Kippenberg, Nature Nanotech. 11, 164 (2016).
M. K. Schmidt, R. Esteban, F. Benz, J. J. Baumberg, and J. Aizpurua, Faraday Discuss. 205, 31 (2017).
S. M. Ashrafi, R. Malekfar, A. R. Bahrampour, and J. Feist, arXiv:1810.09130 (2018)
M. K. Dezfouli, R. Gordon, and S. Hughes, Phys. Rev. A 95, 013846 (2017).
Diagrammatic Monte Carlo approach to molecular impurities in quantum many-body systems
G. Bighin1, T.V. Tscherbul2, and M. Lemeshko1
IST Austria (Institute of Science and Technology Austria), Am Campus 1, 3400 Klosterneuburg, Austria
Department of Physics, University of Nevada, Reno, Nevada 89557, USA
We introduce a Diagrammatic Monte Carlo (DiagMC) approach to molecular impurities possessing rotational degrees of freedom [1]. The technique is based on a diagrammatic expansion [2] that merges the usual Feynman diagrams with the angular momentum diagrams known from atomic and nuclear structure theory, thereby incorporating the non-Abelian algebra inherent to quantum rotations. Due to the peculiar way in which angular momenta couple, the configuration space is larger than in most DiagMC applications, and a new class of updates is needed in order to span it completely.
We exemplify the technique by obtaining an all-coupling solution of the angulon model - essentially a molecular impurity in a quantum many-body environment - showing that our approach correctly recovers the strong-coupling limit. However, the technique is general and can be applied to a broad variety of systems possessing angular momentum degrees of freedom, thereby establishing a far-reaching connection between DiagMC techniques and molecular simulations. Potential applications of the formalism include a precise determination of static and dynamical properties of molecules immersed in superfluid helium or solid parahydrogen, as well as of Rydberg atoms in Bose-Einstein condensates.
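For reference, the angulon Hamiltonian referred to above has the schematic form (conventions and couplings may differ in detail from those of the original works)
\[
\hat{H}=B\hat{\mathbf{J}}^{2}+\sum_{k\lambda\mu}\omega_{k}\,\hat{b}^{\dagger}_{k\lambda\mu}\hat{b}_{k\lambda\mu}+\sum_{k\lambda\mu}U_{\lambda}(k)\left[Y^{*}_{\lambda\mu}(\hat{\theta},\hat{\phi})\,\hat{b}^{\dagger}_{k\lambda\mu}+Y_{\lambda\mu}(\hat{\theta},\hat{\phi})\,\hat{b}_{k\lambda\mu}\right],
\]
where $B$ is the rotational constant of the impurity, $\hat{b}^{\dagger}_{k\lambda\mu}$ creates a bath excitation with linear momentum $k$ and angular momentum $(\lambda,\mu)$, $U_{\lambda}(k)$ is the angular-momentum-resolved coupling, and the sum over $k$ stands for an integral in the continuum limit.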
Figure: Energy of the ground state (main panel) and of the first few excited states (inset) of the angulon Hamiltonian obtained using DiagMC (green circles) as a function of the dimensionless bath density, $\tilde{n}$, in comparison with the weak-coupling theory (blue) and the strong-coupling theory (red).
G. Bighin, T. V. Tscherbul, and M. Lemeshko, Phys. Rev. Lett. 121, 165301 (2018).
G. Bighin and M. Lemeshko, Phys. Rev. B 96, 085410 (2017).
An angulon quasiparticle perspective on impulsive molecular alignment in helium nanodroplets
I. Cherepanov1, G. Bighin1, L. Christiansen2, A. V. Jørgensen2, R. Schmidt3, H. Stapelfeldt2, M. Lemeshko1
IST Austria (Institute of Science and Technology Austria), Am Campus 1, 3400, Klosterneuburg, Austria
Department of Chemistry, Aarhus University, 8000 Aarhus C, Denmark
Max Planck Institute for Quantum Optics, Hans-Kopfermann-Str. 1, 85748 Garching, Germany
Angular momentum redistribution is a key to understanding phenomena taking place in a variety of quantum systems, from collision-induced nuclear reactions to magnetism in condensed matter systems. Here we describe the alignment of a molecule trapped inside a superfluid helium droplet [1, 2] from the point of view of angular momentum transfer between the molecule and the many-body helium bath. A short off-resonant laser pulse induces molecular axis alignment by creating a broad rotational wave packet. The angular momentum gained from the electric field by the molecule can be then transferred to the helium bath via excitation of bosonic collective modes (phonons) with non-zero angular momentum. We developed a dynamical theory of angulons [3] – quasiparticles consisting of a rotating impurity dressed by a field of surrounding bath excitations. We demonstrate that the rotational wave packet dynamics observed in experiment cannot be understood in terms of interference of the rigid rotor states due to the strong interactions with surrounding helium. Our approach might be generalized to the case of linearly chirped laser pulses paving the way for studying an optical centrifuge for molecules in He droplets.
Figure: Alignment of an I2 molecule trapped in a He droplet by a 20 ps laser pulse: experiment (dashed), theory (solid) and free rotor evolution (dotted)
D. Pantlehner et al., Phys. Rev. Lett. 110, 093002 (2013).
B. Shepperson et al., Phys, Rev. Lett. 118, 203203 (2017).
R. Schmidt, M. Lemeshko, Phys. Rev. Lett. 114, 203001 (2015).
Controlling chemistry using strong light matter coupling
Arpan Dutta, Luis Duarte, J. Jussi Toppari, Gerrit Groenhof
Nanoscience Center, Department of Physics and Chemistry, University of Jyvaskyla, Jyvaskyla, Finland
Strong coupling between photoactive molecules and cavity photons results in hybrid light-matter states, 'polaritons', which are optically accessible and have an energy separation defined by the Rabi splitting [1]. Such modification of the molecular excited states due to hybridization with the cavity photon can alter the chemical behavior of the molecule [2]. To gain control over the excited-state molecular dynamics and the outcome of the studied reaction, the amount of Rabi splitting can be tuned via the mode volume of the confined photon, the molecular concentration, or the transition dipole moment of the molecule [2-5]. In this work, we investigated how to control the chemical reactivity via strong coupling by designing microcavities based on parameters obtained from Quantum Mechanics/Molecular Mechanics atomistic simulations, fabricating cavities with different concentrations of photoactive molecules and studying the effects of strong coupling on molecular dynamics via optical/fluorescence spectroscopy.
D. S. Dovzhenko, et al., Nanoscale 10, 3589 (2018)
R. F. Ribeiro, et al., Chem. Sci. 9, 6325 (2018)
J. A. Hutchison et al., Angew. Chem. Int. Ed. 51, 1592 (2012)
T. Schwartz, et al., Phys. Rev. Lett. 106, 196405 (2011)
G. Groenhof and J. J. Toppari, J. Phys. Chem. Lett. 9, 4848 (2018)
Emitter-centered EM modes as a minimal basis for multi-emitter QED in arbitrary dielectric environments
Johannes Feist, Antonio I. Fernández Domínguez, Francisco J. García Vidal
Departamento de Física Teórica de la Materia Condensada and Condensed Matter Physics Center (IFIMAC), Universidad Autónoma de Madrid
The well-established approach of macroscopic QED provides a recipe for quantizing the electromagnetic (EM) field in any geometry, including with lossy materials [1]. However, the price to pay is that formally, the EM field is described by an infinite set of bosonic modes, each one associated to a point in space and a frequency, $\hat{\mathbf{f}}(\mathbf{r},\omega)$, which are coupled through the classical EM Green's function to the emitters. Collecting and extending results from the recent literature [2-4], we show that for a collection of N dipolar emitters, it is possible to explicitly form a minimal set of N "emitter-centered" quantized modes for each frequency, with all other modes uncoupled from the emitters. In particular, we show that it is possible to explicitly construct the electric field profiles associated with these modes, allowing direct calculation of EM field expectation values at arbitrary spatial locations. This approach thus provides a minimal formally exact set of quantized modes in arbitrary geometries, which can either form the starting point for further approximations or be treated with advanced numerical techniques capable of describing many photonic degrees of freedom.
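Schematically, and up to normalization prefactors (the notation here follows macroscopic QED in general rather than necessarily this particular work), the mode seen by emitter $i$ with dipole moment $\mathbf{d}_i$ at position $\mathbf{r}_i$ is the frequency-resolved combination
\[
\hat{a}_{i}(\omega)\;\propto\;\int\!\mathrm{d}^{3}r\;\sqrt{\operatorname{Im}\epsilon(\mathbf{r},\omega)}\;\,\mathbf{d}_{i}\cdot\mathbf{G}(\mathbf{r}_{i},\mathbf{r},\omega)\cdot\hat{\mathbf{f}}(\mathbf{r},\omega),
\]
with the normalization fixed by demanding bosonic commutation relations, which by the Green's-function integral identity makes it proportional to $\sqrt{\mathbf{d}_{i}\cdot\operatorname{Im}\mathbf{G}(\mathbf{r}_{i},\mathbf{r}_{i},\omega)\cdot\mathbf{d}_{i}}$. All combinations of $\hat{\mathbf{f}}(\mathbf{r},\omega)$ orthogonal to these then decouple from the emitters.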
S. Scheel and S. Buhmann, Acta Phys. Slovaca 58, 675 (2008).
T. Hümmer, F. J. García-Vidal, L. Martín-Moreno, and D. Zueco, Phys. Rev. B 87, 115419 (2013).
B. Rousseaux, D. Dzsotjan, G. Colas des Francs, H. R. Jauslin, C. Couteau, and S. Guérin, Phys. Rev. B 93, 045422 (2016).
A. Castellini, H. R. Jauslin, B. Rousseaux, D. Dzsotjan, G. C. des Francs, A. Messina, and S. Guérin, Eur. Phys. J. D 72, 223 (2018).
Ultrastrong nonlinear light-matter interaction
S. Felicetti
Nonlinear light-matter interactions have so far been considered only as arising from second- or higher-order effects in driven systems, and so limited to extremely small coupling strengths. However, a variety of novel physical phenomena emerges in the strong or ultrastrong coupling regime, where such coupling values become comparable to dissipation rates or to the system bare frequencies, respectively. In the ultrastrong coupling regime of nonlinear interactions a spectral collapse [1] can take place, i.e. the system's discrete spectrum can collapse into a continuous band. Furthermore, in the many-body limit, nonlinear interactions are characterized by a rich interplay between the spectral collapse and the superradiant phase transition [2]. The physics of nonlinear interaction models can be effectively reproduced using atomic systems [3,4]; however, the implementation of two-photon couplings in an undriven system requires an interaction more complex than dipolar. In this contribution, we will present a solid-state device able to implement a nonlinear ultrastrong interaction between an artificial atom and a microwave resonator [5]. An open quantum system analysis of a nonlinear light-matter interaction model shows that fundamental quantum optical phenomena are qualitatively modified with respect to standard dipolar coupling [6]. We find that realistic parameters allow one to reach the spectral collapse, where extreme nonlinearities are expected to emerge at the few-photon level.
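As a minimal concrete example of such a nonlinear coupling (a generic two-photon quantum Rabi model, not necessarily the exact circuit Hamiltonian of Ref. [5]), consider
\[
H=\omega\,a^{\dagger}a+\frac{\omega_{q}}{2}\,\sigma_{z}+g\,\sigma_{x}\left(a^{2}+a^{\dagger 2}\right),
\]
whose discrete spectrum collapses into a continuum when the coupling reaches $g=\omega/2$, illustrating the spectral collapse mentioned above.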
I. Travěnec, Phys. Rev. A 85, 043805 (2012)
L. Garbe, I. L. Egusquiza, E. Solano, C. Ciuti, T. Coudreau, P. Milman, S. Felicetti, Phys. Rev. A 95, 053854 (2017)
S. Felicetti, J. S. Pedernales, I. L. Egusquiza, G. Romero, L. Lamata, D. Braak, and E. Solano, Phys. Rev. A 92, 033817 (2015)
L. Puebla, M.-J. Hwang, J. Casanova, M. Plenio, Phys. Rev. A 95, 063844 (2017)
S. Felicetti, D. Z. Rossatto, E. Rico, E. Solano, and P. Forn-Díaz, Phys. Rev. A 97, 013851 (2018)
S. Felicetti, M.-J. Hwang, and A. Le Boité, Phys. Rev. A 98, 053859 (2018)
Study of non-Markovian dynamics in organic polaritons induced by a multimode coherent field
R. E. F. Silva1, Javier del Pino1, Florian A. Y. N. Schröder2, Alex W. Chin2,3, Francisco J. Garcia-Vidal1,4, Johannes Feist1
Departamento de Física Teórica de la Materia Condensada and Condensed Matter Physics Center (IFIMAC), Universidad Autónoma de Madrid, E-28049 Madrid, Spain
Cavendish Laboratory, University of Cambridge, J. J. Thomson Avenue, Cambridge, CB3 0HE, UK
Institut des NanoSciences de Paris, Sorbonne Université, 4 place Jussieu, boîte courrier 840, 75252, PARIS Cedex 05, France
Donostia International Physics Center (DIPC), E-20018 Donostia/San Sebastián, Spain
Organic polaritons are hybrid light-matter states that present several advantages with respect to semiconductor polaritons. Among their advantages are larger binding energies and larger Rabi splittings. For a deeper understanding, the study of the dynamics of these systems is desirable, and little is known about their non-Markovian dynamics. In particular, the excitation from the ground state of the system to the polaritonic states is the first step in any experiment, and its modelling is the target of our work. To that purpose, tensor network states are a fascinating tool that allows exact propagation of the system, treating thousands of degrees of freedom (plasmonic, phononic and radiative) on an equal footing [1]. In the current work, we show the results of the simulation of a prototype organic molecule that is excited with a coherent multimode laser field. To see how the absorption spectrum changes, we scan the central frequency of the pulse and analyse the influence of the phononic degrees of freedom.
J. del Pino, F. A. Y. N. Schröder, A. W. Chin, J. Feist, and F. J. Garcia-Vidal, Phys. Rev. Lett. 121, 227401 (2018)
Resonance Energy Transfer, Interatomic Coulombic and Auger Decay: A Macroscopic-QED Approach
Janine Franz1, Stefan Y. Buhmann1,2, Akbar Salam1,2,3
Physikalisches Institut, Albert-Ludwigs-Universität Freiburg, Freiburg, Germany
Freiburg Institute for Advanced Studies, Albert-Ludwigs-Universität Freiburg, Freiburg, Germany
Department of Chemistry, Wake Forest University, Winston-Salem, NC, USA
Efficient processes for migration of excitation include resonance energy transfer (RET), in which a single virtual photon is exchanged between an excited donor and an unexcited acceptor, and interatomic Coulombic decay (ICD), in which an initially excited or inner-shell ionised donor relaxes, and an acceptor subsequently absorbs the energy resulting in the emission of a slow electron into the continuum. This slow electron can be damaging to biological tissue. The related, and in some cases competing, Auger decay describes a similar process but within a single atom or molecule which simultaneously acts as donor and acceptor. In this case the exchanged photon energy has to be in the X-ray region.
All of these processes may be influenced in an environment by surface or medium polaritons. We study them in the framework of macroscopic QED, where the polaritonic field-matter excitations are encoded in the Green's tensor for the electric and magnetic fields. We present an analytical expression for the Auger decay rate in vacuum as well as its enhancement and suppression by a nearby dielectric surface. We compare ICD rates to Auger rates in different environments. We also study discriminatory RET between chiral molecules and the modification of the free space rate by a solvent medium.
Phononic Modes of Molecular Organic Crystals
Burak Gurlek1,2, Vahid Sandoghdar1,2, Diego Martín-Cano1
Max Planck Institute for the Science of Light, Staudtstraße 2, D-91058 Erlangen, Germany
Friedrich Alexander University Erlangen—Nuremberg, D-91058 Erlangen, Germany
The interaction of light and matter at the nanometer scale can give rise to correlations and large nonlinearities that lie at the heart of quantum optics. At cryogenic temperatures, single dye molecules embedded in molecular organic crystals are one of the promising candidates for achieving such quantum effects due to their photostability, brightness and lifetime limited linewidths [1, 2]. However, the presence of vibrational energy levels of a single molecule and lattice vibrations of the host material limit the coherence for performing quantum processes. In this work, we will study numerically the phononic modes of molecular organic crystals to understand their characteristics in such quantum systems. We will also discuss new structures towards manipulating their fundamental phononic properties and creating schemes that improve the coherence of molecular platforms.
Figure: A high frequency phononic mode of a molecular organic crystal
D. Wang, H. Kelkar, D. Martin-Cano, D. Rattenbacher, A. Shkarin, T. Utikal, S. Götzinger and V. Sandoghdar, Nature Phys. 15, 483 (2019).
P. Türschmann, N. Rotenberg, J. Renger, I. Harder, O. Lohse, T, Utikal, S. Götzinger, and V. Sandoghdar, Nano Lett. 17 (8), 4941-4945 (2017)
Controlling the dynamics in the vicinity of quantum light-induced conical intersections
Gábor J. Halász1, András Csehi2,3, Markus Kowalewski4, Ágnes Vibók2,3
Department of Information Technology, University of Debrecen, PO Box 400, H-4002 Debrecen, Hungary
Department of Theoretical Physics, University of Debrecen, PO Box 400, H-4002 Debrecen
Hungary and ELI-ALPS, ELI-HU Non-Profit Ltd., Dugonics tér 13, H-6720 Szeged, Hungary
Department of Physics, Stockholm University, AlbaNova University Centre 106 91 Stockholm, Sweden
Nonadiabatic effects appear due to avoided crossings or conical intersections that are either intrinsic properties in field-free space or induced by a classical laser field in a molecule. It was demonstrated that avoided crossings in diatomics can also be created in an optical cavity. Here, the quantized radiation field mixes the nuclear and electronic degrees of freedom creating hybrid field-matter states called polaritons. We go further and create conical intersections in diatomics by means of a radiation field in the framework of cavity quantum electrodynamics (QED). By treating all degrees of freedom, that is, the rotational, vibrational, electronic and photonic degrees of freedom, on an equal footing, we can control the nonadiabatic quantum light-induced dynamics by means of conical intersections. First, the pronounced difference between the quantum light-induced avoided crossing and the conical intersection with respect to the nonadiabatic dynamics of the molecule is demonstrated. Second, we discuss the similarity between the classical and the quantum field description of the light for the studied scenario.
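In the simplest two-state picture (a schematic summary, not the full rovibronic treatment used in this work), the cavity-dressed surfaces $V_{1}(R)+\hbar\omega_{c}$ and $V_{2}(R)$ of a diatomic are coupled by a term proportional to $g\,d(R)\cos\theta$, with $\theta$ the angle between the molecular axis and the cavity polarization, so the light-induced conical intersection sits at the points where
\[
\cos\theta=0
\qquad\text{and}\qquad
V_{2}(R)-V_{1}(R)=\hbar\omega_{c}.
\]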
Kowalewski, M.; Bennett, K.; Mukamel, S. Cavity Femtochemistry: Manipulating Nonadiabatic Dynamics at Avoided Crossings. J. Phys. Chem. Lett. 2016, 7, 2050-2054.
Csehi, A.; Halász, G. J.; Cederbaum, L. S.; Vibók, Á. Competition between Light-Induced and Intrinsic Nonadiabatic Phenomena in Diatomics. J. Phys. Chem. Lett. 2017, 8, 1624.
András Csehi; Markus Kowalewski; Gábor J. Halász; Ágnes Vibók; Ultrafast dynamics in the vicinity of quantum light-induced conical intersections; submitted to New J. of Phys.
Multi-level quantum Rabi model for anharmonic vibrational polaritons
Federico Hernández1 and Felipe Herrera1,2
Department of Physics, Universidad de Santiago de Chile, Av. Ecuador 3493, Santiago, Chile.
Millennium Institute for Research in Optics MIRO, Chile.
Experiments have demonstrated that a confined electromagnetic vacuum is able to alter chemical reactions in the ground and excited electronic states, under conditions of strong light-matter coupling [1–8]. However, the underlying microscopic mechanisms of cavity-modified chemistry have yet to be fully understood. To address this, we propose a cavity QED approach to describe the light-matter interaction between an individual anharmonic, non-polar but IR-active vibration, such as the C=O stretching in the Fe(CO)$_5$ complex [9], and an infrared cavity field. Starting from a generic Morse oscillator with quantized nuclear motion, we derive a multi-level quantum Rabi model to study vibrational polaritons beyond the rotating-wave approximation. We analyze the spectrum of vibrational polaritons in detail. We also analyze polariton eigenstates in nuclear coordinate space and Fock space. We show that for bonds which do not experience a significant permanent dipole moment change while stretching, the bond length of a vibrational polariton at a given energy is never greater than the bond length of a Morse oscillator with the same energy. This type of polariton bond hardening occurs at the expense of the creation of virtual infrared cavity photons and may have implications for the chemical reactivity of polariton states.
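To fix ideas, a Morse oscillator has the potential $V(r)=D_{e}\left(1-e^{-a(r-r_{e})}\right)^{2}$, and projecting the dipole operator onto its bound eigenstates $|v\rangle$ leads to a multi-level quantum Rabi model of the schematic form (our generic notation; the couplings $g_{vv'}$ are matrix elements of the dipole function)
\[
H=\hbar\omega_{c}\,a^{\dagger}a+\sum_{v}E_{v}\,|v\rangle\langle v|+\hbar\,(a+a^{\dagger})\sum_{v,v'}g_{vv'}\,|v\rangle\langle v'|,
\]
in which the counter-rotating terms are kept, as required beyond the rotating-wave approximation.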
A. Canaguier-Durand, E. Devaux, J. George, Y. Pang, J. A. Hutchison, T. Schwartz, C. Genet, N. Wilhelms, J. M. Lehn, and T. W. Ebbesen. Angew. Chem.-Int. Ed., 52(40):10533–10536 (2013)
A. D. Dunkelberger, B. T. Spann, K. P. Fears, B. S. Simpkins, and J. C. Owrutsky. Nat. Comm., 7:1–10 (2016)
H. Hiura, A. Shalabney, and J. George. Chemrxiv.Org, (2018)
J. Feist, J. Galego, and F. J. Garcia-Vidal. ACS Photonics, 5(1):205–216 (2018)
R. F. R., L. A. Martı́nez-Martı́nez, M. Du, J. Campos-Gonzalez-Angulo, and J. Yuen-Zhou. Chem. Sci., 9(30):6325–6339 (2018)
M. Du, Raphael F. Ribeiro, and J. Yuen-Zhou. Chem, 5(5):1167–1181 (2019)
A. Thomas, L. Lethuillier-Karl, K. Nagarajan, R. M. A. Vergauwe, J. George, T. Chervy, A. Shalabney, E. Devaux, C. Genet, J. Moran, and T. W. Ebbesen. Science, 363(6427):615–619 (2019)
M. Hertzog, M. Wang, J. Mony, and K. Börjesson. Chem. Soc. Rev., 48(3):937–961 (2019)
J. George, T. Chervy, A. Shalabney, E. Devaux, H. Hiura, C. Genet, and Thomas W. Ebbesen. Phys. Rev. Lett., 117(15):153601 (2016)
Vacuum-enhanced optical nonlinearities with disordered molecular photoswitches
Litinskaya, Marina1, Herrera, Felipe2,3
Department of Physics & Astronomy and Department of Chemistry, University of British Columbia, Vancouver, Canada V6T 1Z1
Department of Physics, Universidad de Santiago de Chile, Avenida Ecuador 3493, Santiago, Chile
Millennium Institute for Research in Optics (MIRO), Concepción, Chile
It is well known that nonlinear optical signals such as cross-phase modulation can be coherently enhanced in multilevel atomic gases under conditions of electromagnetically induced transparency, but analogous results in solids are challenging to obtain due to natural energetic disorder. We propose a solid-state cavity QED scheme to enable cross-phase modulation between two arbitrarily weak classical fields in the optical domain, using a highly disordered intracavity medium composed of organic molecular photoswitches. Even in the presence of strong energetic and orientational disorder, the unique spectral properties of organic photoswitches can be used to enhance the desired nonlinearity under conditions of vacuum-induced transparency, enabling cross-phase modulation signals that surpass the detection limit imposed by absorption losses. Possible applications of the scheme include integrated all-optical switching with low photon numbers.
M. Litinskaya, F. Herrera, Phys. Rev. B 99, 041107(R), (2019)
Real-time solutions of coupled Ehrenfest-Maxwell-Pauli-Kohn-Sham equations: fundamentals, implementation, and nano-optical applications
René Jestädt1, Michael Ruggenthaler1, Micael J. T. Oliveira1, Angel Rubio1,2, and Heiko Appel1
Max Planck Institute for the Structure and Dynamics of Matter, Center for Free Electron Laser Science, 22761 Hamburg, Germany
Center for Computational Quantum Physics (CCQ), Flatiron Institute, 162 Fifth Avenue, New York NY 10010, USA
We introduce a Kohn-Sham construction which provides a computationally feasible approach for ab-initio light-matter interactions. In the mean-field limit for the effective nuclei the formalism reduces to coupled Ehrenfest-Maxwell-Pauli-Kohn-Sham equations.
We present an implementation of the approach in the real-space real-time code Octopus. For the implementation we use the Riemann-Silberstein formulation of classical electrodynamics and rewrite Maxwell's equations in Schrödinger form. This allows us to use existing time-evolution algorithms developed for quantum-mechanical systems also for Maxwell's equations. We introduce a predictor-corrector scheme and show how to couple the Riemann-Silberstein time-evolution of the electromagnetic fields self-consistently to the time-evolution of the electrons and nuclei. We introduce the concept of electromagnetic detectors, which allow to measure outgoing radiation in the far field and provide a direct way to record various spectroscopies.
We apply the approach to laser-induced plasmon excitation in a nanoplasmonic dimer system. We find that the self-consistent coupling of light and matter leads to significant local field effects which cannot be captured with the conventional light-matter forward coupling. For our nanoplasmonic example we show that the self-consistent forward-backward coupling leads to changes in observables which are larger than the difference between local-density and gradient-corrected approximations for the exchange correlation functional. In addition, in our example we observe harmonic generation which appears only beyond the dipole approximation and can be directly observed in the outgoing electromagnetic waves on the simulation grid.
Overall, our approach is ideally suited for applications in nano-optics, nano-plasmonics, (photo) electrocatalysis, light-matter coupling in 2D materials, cases where laser pulses carry orbital angular momentum, or light-tailored chemical reactions in optical cavities to name but a few.
Figure: Plots of matter variables, the absolute value of the current density and the electron localization function (ELF). The most relevant Maxwell field variables, the electric field along the laser polarization direction z and the total Maxwell energy are presented in the lower panels. In the top of the figure, we show the incident laser pulse and at the center the geometry of the nanoplasmonic dimer.
René Jestädt, Michael Ruggenthaler, Micael J. T. Oliveira, Angel Rubio and Heiko Appel, arXiv:1812.05049 (2018)
Theory for the response of hybrid light-matter modes in the presence of vibrations
Kalle Kansanen, Aili Asikainen, Jussi Toppari, Gerrit Groenhof, Tero Heikkilä
We construct a model describing the response of a hybrid system where the electromagnetic field — in particular, surface plasmon polaritons — couples strongly with electronic excitations of atoms or molecules. Our approach is based on the input-output theory of quantum optics, and in particular it takes into account the thermal and quantum vibrations of the molecules. The latter is taken into account by the $P(E)$ theory analogous to that used for example in the theory of dynamical Coulomb blockade. As a result, we are able to include the effect of the molecular Stokes shift on the strongly coupled response of the system. Our model then accounts for the asymmetric emission from upper and lower polariton modes. It also allows for an accurate description of the partial decoherence of the light emission from the strongly coupled system. Our results can be readily used to connect the response of the hybrid modes to the emission and fluorescence properties of the individual molecules, and thus are relevant in understanding any utilization of such systems, like coherent light harvesting.
Figure: a) The measurement setup in which a surface plasmon polariton is excited on an interface where it can strongly couple to molecules. b) The plasmon response function without (left) and with vibrations (right) as a function of driving frequency $\omega_d$ (y-axis) and the detuning between plasmon frequency $\omega_c$ and molecule's transition frequency $\omega_m$. On left the Huang–Rhys parameter $S = 0$ and on right $S = 1$.
Polariton assisted down-conversion of photons
Juan B. Perez, Joel Yuen-Zhou
Department of Chemistry and Biochemistry, University of California San Diego, La Jolla, California 92093.
The quantum dynamics of the photoisomerization of 3,3'-diethyl-2,2'-thiacyanine iodide embedded in an optical microcavity was studied theoretically. The molecule was coupled to a single cavity mode using the quantum Rabi Hamiltonian in the strong and ultrastrong coupling regimes, and the Schrödinger equation was solved using the Multiconfigurational Time Dependent Hartree Method (MCTDH). We show that, for specific values of the coupling strength and cavity frequency, the cooperative effect of light-matter coupling and non-adiabatic coupling produces a mixing of polariton manifolds with different numbers of excitations. In particular, an initial electronic excitation in the "cis" configuration leads to the generation of two cavity photons in the "trans" configuration upon isomerization. Our finding suggests a new mechanism to achieve down-conversion using molecules in optical microcavities.
keywords: Polariton Chemistry, Non-adiabatic dynamics, Photon down-conversion
Figure: Scheme for two-photon generation upon photoisomerization of 3,3'-diethyl-2,2'-thiacyanine iodide in an optical microcavity.
M. Kowalewski, K. Bennett, and S. Mukamel, J. Chem. Phys. 144, 054309 (2016)
R. Stassi, A. Ridolfo, O. Di Stefano, M. J. Hartmann, and S. Savasta, Phys. Rev. Lett. 110, 243601 (2013).
J. Feist, J. Galego, and F. Garcia-Vidal, ACS Photonics. 5, 205 (2018).
O. Vendrell, Chem. Phys. 509, 55 (2018).
Quantum Langevin approach to cavity QED with molecules
Michael Reitz, Christian Sommer, Claudiu Genes
We develop a quantum Langevin equations approach to describe the interaction between light and molecular systems modelled as quantum emitters coupled to a multitude of vibrational modes via a Holstein-type interaction. The formalism allows for analytical derivations of absorption and fluorescence profiles both in the transient and steady state regimes of molecules outside and inside optical cavities. We also derive expressions for the cavity-modified radiative emission branching ratio of a single molecule, cavity transmission in the strong coupling regime and Förster resonance energy transfer between donor-acceptor molecules.
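For concreteness, the Holstein-type molecular Hamiltonian underlying this kind of treatment can be written schematically (single electronic transition $\sigma$, vibrational modes $b_k$ with Huang-Rhys factors $S_k$; the precise conventions of the paper may differ) as
\[
H_{\mathrm{mol}}=\hbar\omega_{e}\,\sigma^{\dagger}\sigma+\sum_{k}\hbar\nu_{k}\,b_{k}^{\dagger}b_{k}+\hbar\,\sigma^{\dagger}\sigma\sum_{k}\nu_{k}\sqrt{S_{k}}\,\left(b_{k}+b_{k}^{\dagger}\right),
\]
which is the type of starting point used for quantum Langevin treatments of absorption and fluorescence.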
Figure: Schematic representation of an optical cavity with molecules which is fed by input noises due to the interaction with the environment.
M. Reitz, C. Sommer and C. Genes, Phys. Rev. Lett. 122, 203602 (2019).
Chemical Reactivity of Molecules Strongly Coupled to an Electromagnetic Cavity Mode
K. Caicedo and R. Avriller
CNRS and University of Bordeaux, Bordeaux, France
Recent progress in nanotechnology has led to a new generation of nano-structures playing the role of electromagnetic cavities, like plasmonic structures [1], organic micro-cavities [2] and nano-fluidic Fabry-Pérot cavities [3]. The experimental proof of reaching the electronic strong-coupling regime for cavity-confined molecular populations was reported recently [2,3], with remarkable effects on the kinetics of chemical reactions [2]. In this work, we propose and investigate a relevant model of a photochemical reaction for molecules confined inside a nano-fluidic Fabry-Pérot cavity. Using a recently developed form of Marcus theory [4,5,6] of charge transfer chemical reactions, we investigate the time dependence of the reactant concentration, as well as its alteration due to the formation of a hybrid polariton mode in the cavity. Depending on the concentration of reactants and the detuning of the cavity with respect to the molecular transition, we show that it is possible to control the rate of the chemical reaction [6].
Figure: Scheme of a charge-transfer chemical reaction for molecules strongly coupled to the electromagnetic mode of a nanofluidic Fabry-Perot cavity.
R. Chikkaraddy et al., Nature 535, 127 (2016).
H. Bahsoun, et al., ACS Photonics 5, 225 (2018).
R.A. Marcus, Rev. Mod. Phys., 65(3), 599 (1993).
F. Herrera et al., Phys. Rev. Lett. 116, 238301 (2016).
R. Avriller, in preparation (2019).
Modification of excitation and charge transfer in cavity quantum-electrodynamical chemistry
Christian Schäfer, Michael Ruggenthaler, Heiko Appel, Angel Rubio
Max-Planck Institute for the Structure and Dynamics of Matter
The transfer of energy, in the form of either excitation or charge, is one of the most basic processes in nature, and understanding and controlling it is one of the major challenges of modern quantum chemistry. In this work, we highlight [1] that these processes as well as other chemical properties can be drastically altered by modifying the vacuum fluctuations of the electromagnetic field in a cavity. By using a real-space formulation from first principles that keeps all the electronic degrees of freedom in the model explicit and simulates changes in the environment by an effective photon mode, we can easily connect to well-known quantum-chemical results such as Dexter charge- and Förster excitation-transfer reactions, taking into account the often disregarded Coulomb and self-polarization interactions. We find that the photonic degrees of freedom introduce extra electron-electron correlations over large distances, and that the coupling to the cavity can drastically alter the characteristic charge-transfer as well as the excitation energy transfer behavior. Our results highlight that changing the photonic environment can redefine chemical processes, rendering polaritonic chemistry a promising approach towards the control of chemical reactions.
Figure: Modification of charge transfer under strong photonic coupling
C. Schäfer, M. Ruggenthaler, H. Appel and A. Rubio, PNAS 116, 4883 (2019).
A theoretical approach to strong coupling of organic molecules with arbitrary photonic structures
Mónica Sánchez-Barquilla, Rui Silva, Johannes Feist
Strong coupling of a dense collection of organic molecules with the electromagnetic modes of nanophotonic and/or plasmonic devices holds great interest for a wide variety of applications. Our purpose is to theoretically study collective strong coupling of many organic molecules to modes in arbitrary cavity geometries, using a realistic description of the molecules and the modal structure of the photonic modes. Our first approach is the Maxwell-Bloch approximation, in which the EM field is classical while the molecule is treated as a two-level system.
One of the weaknesses of the Maxwell-Bloch description is that some quantum effects, such as spontaneous emission, cannot be described, so it is necessary to go beyond the Maxwell-Bloch and take into account higher-order correlations of the atom and EM field. To do so, we apply the cluster-expansion method [1], in which correlations up to the second order are taken into account. We compare this approach to exact quantum models and discuss its validity for different situations.
M. Kira and S. W. Koch, Semiconductor Quantum Optics (2012).
Light-induced non-adiabatic effects in molecular polariton systems
Johan F. Triana1,2, José Luis San Vicario3
Grupo de Física Atómica y Molecular, Instituto de Física, Universidad de Antioquia, Calle 62 Nº 52 – 59, Medellín, Colombia
Non-adiabatic effects and changes in the molecular structure induced via light-matter interaction with quantized electromagnetic fields are a growing research direction. In particular, light-induced crossings or light-induced conical intersections generated by the radiation in molecular systems represent a tunable and effective non-adiabatic tool to manipulate the photodynamics. At present, it is not fully clear what role the quantumness of the radiation-matter interaction (classical fields versus quantum fields) plays in the molecular photodynamics, how the transition from the strong to the ultrastrong coupling regime proceeds when non-adiabatic effects are dominant, or where the limits lie for the application of different minimal-coupling Hamiltonians in ab initio molecular polaritonics. A better understanding of these issues becomes crucial before attempting the control of chemical reactions by classical or quantum light; the non-adiabaticities due to light-induced conical intersections (light-induced avoided crossings) now add to those coming from permanent conical intersections (avoided crossings), providing a new, richer scenario for the manipulation of chemical reactions via excited states and photon interactions.
In this work we present theoretical numerical calculations of a simple but judiciously chosen two-state molecular vibrating model to understand the differences in the photodynamics caused by light-induced non-adiabatic effects generated by both classical and quantum radiation. Also, the influence of the self-polarization or dipole self-energy term that appears in the Power-Zienau-Woolley transformation of the full minimal coupling [1] is analyzed; hence the range of applicability of the familiar Jaynes-Cummings Hamiltonian is discussed. Finally, we investigate the role of adding the rotational motion to the vibrating system to elucidate the feasibility of enhanced transitions when increasing the dimensionality of the molecular degrees of freedom: from light-induced avoided crossings to the newly generated light-induced conical intersections. This work represents a benchmark study for future developments using realistic molecular systems in polariton chemistry.
C. Schäfer, M. Ruggenthaler, and A. Rubio, Physical Review A 98, 043801 (2018).
Conical Intersections Induced by Quantum Light: Field-Dressed Spectra from the Weak to the Ultrastrong Coupling Regimes
Tamas Szidarovszky1, Gábor J. Halász2, Attila G. Császár1, Lorenz S. Cederbaum3, Ágnes Vibók4,5
Laboratory of Molecular Structure and Dynamics, Institute of Chemistry, ELTE Eötvös Loránd University and MTA-ELTEComplex Chemical System Research Group, Budapest, Hungary
Department of Information Technology, University of Debrecen, Debrecen, Hungary
Theoretische Chemie, Physikalisch-Chemisches Institut, Universität Heidelberg, Heidelberg, Germany
Department of Theoretical Physics, University of Debrecen, Debrecen, Hungary
ELI-ALPS, Szeged, Hungary
In classical laser fields with frequencies resonant with the electronic excitation in molecules, it is by now known that conical intersections are induced by the field and are called light-induced conical intersections (LICIs) [1]. As optical cavities have become accessible, the question arises whether their quantized modes could also lead to the appearance of LICIs. A theoretical framework is formulated for the investigation of LICIs of diatomics in such quantum light. The eigenvalue spectrum of the dressed states in the cavity is studied, putting particular emphasis on the investigation of absorption spectra of the Na2 molecule, that is, on the transitions between dressed states, measured by employing a weak probe pulse. The dependence of the spectra on the light−matter coupling strength in the cavity and on the frequency of the cavity mode is studied in detail. The computations demonstrate strong nonadiabatic effects caused by the appearing LICI [2].
Figure: The three regimes of coupling strength and the related field-dressed PECs of a molecule interacting with a resonant cavity mode. The diabatic surfaces $V_1$ and $V_2$ are indicated with continuous lines, whereas the polariton surfaces $W_0$, $W_1$, and $W_2$ are indicated with dashed lines. $\hbar\omega_c$ is the energy of the cavity photon. The double headed arrows and the dashed red arrow represent resonant and nonresonant couplings in the cavity, respectively.
G. J. Halász, Á. Vibók and L. S. Cederbaum, J. Phys. Chem. Lett. 6, 348 (2015).
T. Szidarovszky, G. J. Halász, A. G. Császár, L. S. Cederbaum and Á. Vibók, J. Phys. Chem. Lett. 9, 6215 (2018).
Superradiance in Ultracold Photo-Chemistry
David Wellnitz, Stefan Schütz, Johannes Schachenmayer, Guido Pupillo
ISIS and IPCMS, University of Strasbourg and CNRS, Strasbourg, France
In quantum optics, atoms that are coupled to a cavity can exhibit a collective enhancement of decay rates called superradiance [1]. In the single molecule case, it has been shown that a cavity can be used to steer chemical reactions [2]. However, the precise role of collective effects remains open [2]. Here, we investigate the collective dynamics induced by the cavity. In order to analyze the dynamics, we consider toy models such as $\Lambda$-systems coupled to a cavity. By adiabatically eliminating the cavity and the excited states, we can derive an effective equation of motion for the ground states which depends on the number of $\Lambda$-systems.
N. E. Rehler and J. H. Eberly, Phys. Rev. A 3, 1735 (1971)
T. Kampschulte and J. H. Denschlag, New Journ. of Phys. 20, 123015 (2018) | CommonCrawl |
October 2014, 7(5): 1045-1063. doi: 10.3934/dcdss.2014.7.1045
Stokes and Navier-Stokes equations with perfect slip on wedge type domains
Siegfried Maier and Jürgen Saal
Heinrich-Heine-Universität Düsseldorf, Mathematisches Institut, 40204 Düsseldorf, Germany
Received: March 2013. Revised: June 2013. Published: May 2014.
Well-posedness of the Stokes and Navier-Stokes equations subject to perfect slip boundary conditions on wedge type domains is studied. Applying the operator sum method, we derive an $\mathcal{H}^\infty$-calculus for the Stokes operator in weighted $L^p_\gamma$ spaces (Kondrat'ev spaces), which yields maximal regularity for the linear Stokes system. This in turn implies mild well-posedness for the Navier-Stokes equations, locally in time for arbitrary data and globally in time for small data in $L^p$.
Keywords: Kondrat'ev spaces, perfect slip, $\mathcal{H}^\infty$-calculus, Stokes equations, wedge domains.
Mathematics Subject Classification: Primary: 76D035, 35K65; Secondary: 76D0.
Citation: Siegfried Maier, Jürgen Saal. Stokes and Navier-Stokes equations with perfect slip on wedge type domains. Discrete & Continuous Dynamical Systems - S, 2014, 7 (5) : 1045-1063. doi: 10.3934/dcdss.2014.7.1045
\begin{document}
\title{Low dilatation pseudo-Anosovs on punctured surfaces and volume.}
\maketitle
\begin{abstract}
For a pseudo-Anosov homeomorphism $f$ on a closed surface of genus $g\geq 2$, for which the entropy is on the order $\frac{1}{g}$ (the lowest possible order), Farb-Leininger-Margalit showed that the volume of the mapping torus is bounded, independent of $g$. We show that the analogous result fails for a surface of fixed genus $g$ with $n$ punctures, by constructing pseudo-Anosov homeomorphisms with entropy of the minimal order $\frac{\log n}{n}$ and volume tending to infinity.
\end{abstract}
\section{Introduction}
Let $l_{g,n} = \min\{\log(\lambda(f)) \mid f : S_{g,n} \to S_{g,n} \text{ pseudo-Anosov}\}$ denote the logarithm of the minimal dilatation of a pseudo-Anosov homeomorphism $f$ on an orientable surface $S_{g,n}$ of genus $g$ with $n$ punctures, that is, the minimal topological entropy. When $n=0$, Penner showed that \[\frac{\log2}{12g-12}<l_{g,0}<\frac{\log 11}{g}.\] See \cite{penner}. These bounds have been improved since Penner's original work \cite{bound1,bound2,bound3,bound4,bound5,bound6}.
To better understand where minimal dilatation pseudo-Anosov homeomorphisms come from, in \cite{flm}, the authors consider the set \[
\Psi_L=\{f:S_{g,0}\to S_{g,0} \mid f \text{ is pseudo-Anosov, } \log(\lambda(f)) \leq \frac{L}{g}\}. \] They show that for any $L>0$ there exist finitely many hyperbolic 3-manifolds $M_1, \dots, M_n$, such that for each $f\in \Psi_L$, the mapping torus $M_f$ of $f$ is obtained by Dehn fillings on some $M_i$. See \cite[Corollary 1.4]{flm}. As a consequence, the volume of $M_f$ is bounded by a constant depending only on $L$; see \cite[Corollary 1.5]{flm}. See also \cite{agol2,kojima,brock}.
For punctured surfaces of a fixed genus, Tsai \cite{tsai} proved that $l_{g,n}$ has a different asymptotic behavior. \begin{thm}[Tsai] For any fixed $g\geq2$, for all $n\geq3$, there is a constant $c_g\geq1$ depending on $g$ such that \[ \frac{\log n}{c_gn}< l_{g,n} < \frac{c_g\log{n}}{n}.\] \end{thm} See also \cite{yazdi,yazdi2,valdivia,bound5}. For fixed $g\geq 2, n\geq 0$, let \[
\Psi_{g,L}=\{f:S_{g,n}\to S_{g,n} \mid f \text{ is pseudo-Anosov, } \log(\lambda(f)) \leq L\frac{\log{n}}{n}\}. \] We show that the analogue of the results of \cite{flm} fails for $\Psi_{g,L}$. Specifically, we prove the following. \begin{mthm*} For any fixed $g\geq 2$, and $L\geq 162g$, there exists a sequence $\{M_{f_i}\}_{i=1}^{\infty}$, with $f_i\in \Psi_{g,L}$, so that $\displaystyle {\lim_{i \to \infty} \mathrm{Vol}(M_{f_i})= \infty}$. \end{mthm*} As a consequence, we have the following. \begin{col} For any $g\geq 2$, there exists $L$ such that there is no finite set $\Omega$ of 3-manifolds so that every $M_f$ with $f\in \Psi_{g,L}$ is obtained by Dehn filling on some $M \in \Omega$. \end{col} The construction in the proof of the Main Theorem is based on the example in \cite{tsai} of $f_{g,n}:S_{g,n}\to S_{g,n}$ with \[ \log(\lambda(f_{g,n})) < \frac{c_g\log{n}}{n}. \] But for each $g$, one can show that $\{M_{f_{g,n}}\}_{n=1}^\infty$ are all obtained by Dehn fillings on a finite number of 3-manifolds, so we have to modify this construction. See also examples constructed by Kin-Takasawa \cite{bound5}. The idea is to compose $f_{g,n}$ with homeomorphisms supported in a subsurface of $S_{g,n}$ that become more and more complicated as $n$ gets larger. This has to be balanced with keeping the stretch factor bounded by a fixed multiple of $\frac{\log n }{n}$.
In Section 2 we recall some of the background we will need on fibered 3-manifolds, hyperbolic geometry and Dehn surgery. In Section 3 we state Theorem \ref{main}, which is a version of the Main Theorem for punctured spheres based on a construction of \cite{hironaka}, and then prove the Main Theorem from it. In Section 4 we give the complete proof of Theorem \ref{main} by constructing the sequence $\{M_{f_i}\}_{i=1}^{\infty}$, whose terms are obtained by cutting open and gluing in an increasing number of copies of a certain manifold with totally geodesic boundary, and then applying Dehn fillings.
Based on the Main Theorem, we have the following question. If we only consider the minimizers of the entropy, can we still find a sequence with unbounded volume?
\section{Background} \subsection{Fibered 3-manifolds} Let $S$ be a closed surface minus a finite number of points. We sometimes consider $S$ as a compact surface with boundary components, and will confuse punctures with boundary components when convenient (the former obtained from the latter by removing the boundary). The following theorem is from \cite{thurston}.
\begin{thm}[Thurston] Any diffeomorphism $f$ on $S$ is isotopic to a map $f\textprime$ satisfying one of the following conditions: \begin{enumerate} \item[(i)]$f\textprime$ has finite order. \item[(ii)]$f\textprime$ preserves a disjoint union of essential simple closed curves. \item[(iii)]There exist $\lambda>1$ and two transverse measured foliations $\mathcal{F}^s$ and $\mathcal{F}^u$, called the stable and unstable foliations, respectively, such that \[f\textprime(\mathcal{F}^s)=(1/\lambda)\mathcal{F}^s, \quad f\textprime(\mathcal{F}^u)=\lambda\mathcal{F}^u.\] \end{enumerate} \end{thm} The three cases are called {\em periodic}, {\em reducible} and {\em pseudo-Anosov}, respectively. The number $\lambda=\lambda(f)$ in case (iii) is called the {\em stretch factor} of $f$. The topological entropy of a pseudo-Anosov homeomorphism $f:S\to S$ is $\log (\lambda(f))$.
Let $M$ be the interior of a compact, connected, orientable, irreducible, atoroidal 3-manifold that fibers over $S^1$ with fiber $S\subset M$ and monodromy $f$. That is, $M$ is the mapping torus of $f$: \[M=M_f=S\times[0,1] / (x,1)\sim(f(x),0).\] Then $S$ is a closed orientable surface with a finite number of punctures and negative Euler characteristic, and $f$ is pseudo-Anosov with a unique expanding invariant foliation up to isotopy. Associated to $(M,S)$ we also have \begin{enumerate} \item[(i)]$F\subset H^1(M,\mathbb{R})$, the open face of the unit ball in the Thurston norm with $[S]\in (F\cdot \mathbb{R}^+)$. See \cite{thurstonnorm}. \item[(ii)]A suspension flow $\psi$ on $M$, and a 2-dimensional foliation obtained by suspending the stable and unstable foliations of $f$. See \cite{fried1}. \end{enumerate} $F$ is called a {\em fibred face} of the Thurston norm ball. The segments \[\{x\}\times [0,1] \subset S \times [0,1]\] glued together in $M_f$ are leaves of the 1-dimensional foliation $\Psi$ of $M$, the flow lines of $\psi$.
The following theorem is from \cite{fried1} and \cite{fried2}. \begin{thm}[Fried]\label{fried} Let $(M,S)$, $F$ and $\Psi$ be as above. Then any integral class in $F\cdot \mathbb{R}^+$ is represented by a fiber $S\textprime$ of a fibration of $M$ over the circle which can be isotoped to be transverse to $\Psi$, and the first return map of $\psi$ coincides with the pseudo-Anosov monodromy $f\textprime$, up to isotopy. Moreover, if $S\textprime \subset M$ is any orientable surface with $S\textprime \pitchfork \Psi$, then $[S\textprime]\in \overline{F\cdot \mathbb{R}^+}$. \end{thm}
If $f: S\to S$ is pseudo-Anosov on a surface with punctures, and $G\subset S$ is a spine, then we can homotope $f$ to a map $g: S\to G$ so that $g|_G:G\to G$ is a graph map; that is, $g$ sends vertices to vertices and edges to edge paths. The growth rate of $g|_G$ is the largest absolute value of any eigenvalue of the Perron-Frobenius block of the transition matrix $T$ induced by $g$, and is an upper bound for $\lambda(f)$; see \cite{bestvina}.
The Perron-Frobenius Theorem tells us that the largest eigenvalue of a Perron-Frobenius matrix is bounded above by the largest row sum of the matrix. Recall that associated to a non-negative integral matrix $T=\{e_{ij}\}, 1\leq i,j \leq n$ is a directed graph $\Gamma$, where $\{V_1, V_2, \dots, V_n\}$ is the vertex set of $\Gamma$ corresponding to the columns/rows of $T$, and $e_{ij}$ represents the number of edges pointing from $V_i$ to $V_j$. We have the following proposition. See \cite{gantmacher}.
\begin{prop}\label{pf} Let $\Gamma$ be the directed graph of an integral Perron-Frobenius matrix $T$ with largest eigenvalue $\lambda$. Let $N(V_i,l)$ be the number of length-$l$ paths emanating from vertex $V_i$ in $\Gamma$. Then $\lambda^l \leq \max_i{{N(V_i,l)}}$. \end{prop}
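For instance (an illustrative example of ours, not taken from \cite{gantmacher}), let \[ T=\begin{pmatrix}1 & 1\\ 1 & 0\end{pmatrix}. \] Its directed graph $\Gamma$ has a loop at $V_1$ together with edges $V_1\to V_2$ and $V_2\to V_1$, and $\lambda=\frac{1+\sqrt{5}}{2}\approx 1.618$. The length-$2$ paths from $V_1$ are $V_1V_1V_1$, $V_1V_1V_2$ and $V_1V_2V_1$, so $N(V_1,2)=3$, while $N(V_2,2)=2$; hence $\lambda^2\approx 2.618\leq \max_i N(V_i,2)=3$, as the proposition asserts.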
\subsection{Hyperbolic geometry}
\begin{figure}
\caption{Left: $\Sigma_4$. Middle: $A_0$. Right: $A$.}
\label{AA}
\end{figure}
The following construction is given by Agol in \cite{agol}. Let $\Sigma_4$ denote the 4-punctured sphere, and let $\delta_0, \delta_1 \subset \Sigma_4$ be the two circles on $\Sigma_4$ shown in Figure 1. Let $A_0$ be $\Sigma_4\times [0,1]\backslash (\delta_0 \times \{0\} \cup \delta_1 \times \{1\})$. Let $V_8$ denote the volume of a regular, ideal, hyperbolic octahedron.
\begin{prop}[Agol]\label{agol} $A_0$ has a complete hyperbolic metric with totally geodesic boundary, with $\mathrm{Vol}(A_0)=2V_8$. \end{prop} For our purposes, it is more useful to draw the 4-punctured sphere as a 3-punctured disk; $A$ and $A_0$ are then the manifolds shown in Figure 2. Let $A$ denote the manifold obtained by isometrically gluing two copies of $A_0$ along $\Sigma_4\times \{0\}\backslash (\delta_0 \times \{0\})$; then we have \[A\cong\Sigma_4\times [0,1]\backslash (\delta_1 \times \{0,1\} \cup \delta_0 \times \{1/2\})\] and $A$ is a hyperbolic 3-manifold with totally geodesic boundary and \[\mathrm{Vol}(A)=4V_8.\]
We will also need the following theorem, due to Adams \cite{adams}. \begin{thm}[Adams]\label{adams} Any properly embedded incompressible thrice-punctured sphere in a hyperbolic 3-manifold $M$ is isotopic to a totally geodesic properly embedded thrice-punctured sphere in $M$. \end{thm} From this theorem one easily obtains the following. \begin{col} A disjoint union of pairwise non-isotopic properly embedded thrice-punctured spheres in a hyperbolic 3-manifold $M$ can be simultaneously isotoped to pairwise disjoint totally geodesic thrice-punctured spheres in $M$. \end{col}
\subsection{Dehn surgery}
Let $M$ be a compact 3-manifold with boundary $\partial M=\partial_1M\sqcup \dots \sqcup \partial_kM$ so that the interior of $M$ is a complete hyperbolic manifold, where $\partial_iM$ is a torus for any $1\leq i\leq k$. Choose a basis $\mu_i,\nu_i$ for $H_1(\partial_i M)=\pi_1(\partial_i M)$. Then the isotopy class of any essential simple closed curve $\beta_i$ on $\partial_iM$, called a {\em slope}, is represented by $p_i\mu_i+q_i\nu_i$ in $H_1(\partial_iM)$ for coprime integers $p_i,q_i$. Since we do not care about the orientation of $\beta_i$, we use the notation $\beta_i=\frac{p_i}{q_i}\in\mathbb{Q}\cup\{\infty\}$. Given $\beta=(\beta_1,\dots, \beta_k)$, where each $\beta_i$ is a slope, let $M_\beta$ denote the manifold obtained by gluing a solid torus to each $\partial_iM$, where the slope $\beta_i$ on $\partial_iM$ is identified with the meridian of the corresponding solid torus. We call $\beta=\{\beta_1,\dots,\beta_k\}$ the Dehn surgery coefficients.
The following is from \cite{thurstonnote,dehnsurgery}. \begin{thm}[Thurston]\label{thurston} If the interior of $M$ is a complete hyperbolic 3-manifold of finite volume and $\beta=\{\beta_1,\dots,\beta_k\}$ are the Dehn surgery coefficients, then for all but finitely many slopes $\beta_i$, for each $i$, $M_\beta$ is hyperbolic and $\mathrm{Vol}(M_\beta)<\mathrm{Vol}(M)$. If $\beta^n=(\beta^n_1,\dots, \beta^n_k)$, with $\{\beta_i^n\}_{n=1}^{\infty}$ an infinite sequence of distinct slopes on $\partial_iM$ for each $1\leq i\leq k$, then $\displaystyle{\lim_{n \to \infty} \mathrm{Vol}(M_{\beta^n})= \mathrm{Vol}(M)}$. \end{thm}
Let $||[M]||$ denote the {\em Gromov norm} of the fundamental class $[M]\in H_3(M;\partial M)$. Then we have the following two theorems. See \cite[Theorem 6.2, Proposition 6.5.2, Lemma 6.5.4]{thurstonnote} \begin{thm}[Gromov]\label{gromov} If the interior of $M$ admits a complete hyperbolic metric of finite volume, then \[
||[M]||=\frac{\mathrm{Vol}(M)}{v_3}. \] \end{thm} \begin{thm}[Thurston]\label{thurston1} For any Dehn fillings with Dehn surgery coefficients $\beta=\{\beta_1,\dots,\beta_k\}$, \[
||[M_\beta]||\leq||[M]||. \] \end{thm} We will be interested in a special case of Dehn surgery in which $M$ is obtained from a mapping torus \[M_f=S\times [0,1]/(x,1)\sim(f(x),0)\] by removing neighborhoods of disjoint curves $\alpha_1, \alpha_2, \dots, \alpha_k$, $\alpha_i \subset S\times \{t_i\}$, for some \[0<t_1<t_2<\dots<t_k<1.\] Then we can choose basis $\mu_i,\nu_i$ of $H_1(\partial_i M)$, so that if $\beta_i=\frac{1}{r_i}$, then \[M_{\beta}=M_{T_{\alpha_k}^{r_k}T_{\alpha_{k-1}}^{r_{k-1}} \dots T_{\alpha_1}^{r_1}f}.\] See, for example, \cite{stallings}.
\section{Reduction} Consider the sphere with $n+m+2$ punctures, $S_{0,n+m+2}$. We can distribute the punctures as shown in Figure 3. Let $x$, $y$ and $z$ be three of the punctures as shown. Let $X,Y\subset S_{0,n+m+2}$ be two embedded punctured disks centered at $x$ and $y$ as shown in Figure 3. There are $n$ punctures in $X$ arranged around $x$, $m$ punctures in $Y$ arranged around $y$, with one puncture shared by $X$ and $Y$. Let $p_n$ denote the homeomorphism which is supported inside $X$, fixes $x$ and rotates the punctures around $x$ by one counterclockwise. Let $q_m$ denote the homeomorphism which is supported inside $Y$, fixes $y$ and rotates the punctures around $y$ by one clockwise. For any $n,m>6$, let $f_{n,m}:S_{0,n+m+2} \rightarrow S_{0,n+m+2}$ be $f_{n,m}=q_mp_n$. These homeomorphisms $f_{n,m}$ were constructed by Hironaka and Kin in \cite{hironaka} and were shown to be pseudo-Anosov.
Let $V_1, V_2, \dots, V_n$ be the punctures in $X$, starting with $V_1$ in $X\cap Y$, ordered counter-clockwise, as shown in Figure 3. Let $\Sigma_0 \subset S_{0,n+m+2}$ be the subsurface containing 3 consecutive punctures $\{V_i, V_{i+1}, V_{i+2}\}$, with $\partial\Sigma_0=\beta$ as shown in Figure 3. Let $\alpha,\gamma\subset\Sigma_0$ be the two essential closed curves shown.
We will consider the composition $hf^3_{n,m}$, where $h: S_{0,n+m+2}\to S_{0,n+m+2}$ is a homeomorphism supported in $\Sigma_0$. Note that if we replace $h$ by $p_n^khp^{-k}_n$ for $1\leq k \leq n-(i+3)$, which is supported on $p^k_n(\Sigma_0)$, then $q_m$ commutes with $p_n^jhp_n^{-j}$ for $1\leq j \leq k$. So we have
\begin{equation}\notag \begin{split} f^k_{n,m}hf^3_{n,m}f^{-k}_{n,m}
& = f^{k-1}_{n,m}q_m(p_nhp_n^{-1})p_nf^{-k+3}_{n,m} \\
& = f^{k-1}_{n,m}(p_nhp_n^{-1})q_mp_nf^{-k+3}_{n,m} \\
& = f^{k-1}_{n,m}(p_nhp_n^{-1})f^{-k+4}_{n,m} \\
& = f^{k-2}_{n,m}q_m(p^2_nhp_n^{-2})p_nf^{-k+4}_{n,m} \\
& = \dots \\
& = q_m(p_n^{k}hp_n^{-k})p_nf^{2}_{n,m} \\
& =(p_n^{k}hp_n^{-k})f^{3}_{n,m} \\ \end{split} \end{equation}
That is, $hf^3_{n,m}$ is conjugate to $p_n^khp^{-k}_nf^3_{n,m}$. In particular, we can assume $\Sigma_0$ surrounds $V_i,V_{i+1},V_{i+2}$ for any $2\leq i\leq n-5$ at the expense of conjugation which does not affect stretch factor or the homeomorphism type of mapping torus. For this reason, in the following statements, $\Sigma_0$ is allowed to surround the punctures $V_i,V_{i+1},V_{i+2}$ for any $2\leq i\leq n-5$.
\begin{figure}
\caption{$S_{0,n+m+2}$ for $n=m=12$}
\end{figure}
\begin{thm}\label{main} For any $k=1,2,3,\dots$, there exists $B_k$ such that if \[h_k=T_{\alpha}^{u_1}T_{\gamma}^{v_1}\dots T_{\alpha}^{u_{k-1}}T_{\gamma}^{v_{k-1}}T_{\alpha}^{u_k}T_{\beta}^{v_k}\] where $u_i,v_i\geq B_k$ for all $i$, then for $h_kf^3_{n,m}: S_{0,n+m+2}\to S_{0,n+m+2}$, we have \begin{enumerate} \item[(1)] $h_kf^3_{n,m}$ is pseudo-Anosov. \item[(2)] $\mathrm{Vol}(M_{h_kf^3_{n,m}})\geq3kV_8$. \item[(3)] there exists $N=N_k$, such that if $n=m>N$, then \[ \log\lambda (h_kf^3_{n,n})\leq 54\frac{\log(2n+2)}{2n+2}. \] \end{enumerate}
\end{thm} Assuming this theorem, we prove the Main Theorem from the introduction.
\begin{mthm*} For any fixed $g\geq 2$ and $L\geq 162g$, there exists a sequence $\{M_{f_i}\}_{i=1}^{\infty}$, with $f_i\in \Psi_{g,L}$, so that $\displaystyle {\lim_{i \to \infty} \mathrm{Vol}(M_{f_i})= \infty}$. \end{mthm*} \begin{proof} For any $g\geq 2$, \cite{tsai} gives a construction of an appropriate cover $\pi: S_{g,s}\rightarrow S_{0,n+m+2}$ such that $s=(2g+1)(n+m+1)+1$ and \[f_{n,m}: S_{0,n+m+2}\rightarrow S_{0,n+m+2}\] lifts to $S_{g,s}$. Moreover, it is clear from her construction that each of $\alpha, \beta, \gamma$ lifts, so $h_k$ lifts.
Let $\widetilde{f_k}: S_{g,s}\rightarrow S_{g,s}$ be the lift of $h_k\circ f^3_{n,m}$. Then $\log(\lambda(\widetilde{f_k}))=\log(\lambda(h_kf^3_{n,m}))$. Also by Theorem \ref{main}, for $n=m>N_k$ and $n=m$ large enough, \[
\log(\lambda(\widetilde{f_k})) \leq 54\frac{\log(n+m+2)}{n+m+2}<54\frac{\log(s)}{\frac{s-1}{2g+1}+1}<162g\frac{\log s}{s}. \] Furthermore, $\mathrm{Vol}(M_{\widetilde{f_k}})=\deg(\pi)\,\mathrm{Vol}(M_{h_kf^3_{n,n}}) \geq 3kV_8\deg(\pi)$. Therefore, $\{M_{\widetilde{f_k}}\}^{\infty}_{k=1}$ is contained in the family of mapping tori considered in the theorem, and $\mathrm{Vol}(M_{\widetilde{f_k}})\rightarrow \infty$.
\end{proof} \begin{col*} For any $g\geq 2$, there exists $L$ such that there is no finite set $\Omega$ of 3-manifolds so that all $M_f$, $f\in \Psi_{g,L}$, are obtained by Dehn filling on some $M \in \Omega$. \end{col*} \begin{proof} Let $L\geq 162g$. If such a finite set $\Omega$ existed, then by Theorems \ref{gromov} and \ref{thurston1}, \[
\displaystyle{\mathrm{Vol}(M_f)\leq v_3\max_{M\in \Omega}\{||[M]||\}}<\infty, \] which contradicts the Main Theorem. \end{proof}
\section{Proof of Theorem \ref{main}} Now fix some $n,m>6$, let $f=f^3_{n,m}$. Let $M_f$ be the mapping torus.
The proof of the following lemma is almost identical to the proof of \cite[Theorem B]{long}. \begin{lem} $M_f\backslash ((\alpha \cup \beta) \times \{1/2\})$ is hyperbolic. \end{lem}
\begin{proof} Let \[ \Sigma=S_{0,n+m+2}\times \{1/2\}, \Sigma\textprime=\Sigma\backslash ((\alpha \cup \beta) \times \{1/2\})\subset M_f. \] Let $T_0\subset M_f$ be an embedded incompressible torus. By applying some isotopy, we can make every component of $T_0\backslash \Sigma\textprime$ be an annulus. Any annulus component should either miss no fiber or have boundary components parallel to $\alpha$ or $\beta$, and on opposite sides of some small neighborhood of $\alpha$ or $\beta$. Since $\alpha$ and $\beta$ bound different number of punctures, a component parallel to $\alpha$ can never connect to a component parallel to $\beta$. Also, $f^{k_1}(\alpha)$ will never close up with $f^{k_2}(\alpha)$ if $k_1\neq k_2$ since $f$ is pseudo-Anosov. By Thurston's hyperbolization theorem (see \cite{thurston1,morgan,otal}), $M_f\backslash ((\alpha \cup \beta) \times \{1/2\})$ is hyperbolic. \end{proof}
For any $k$, let $L_k \subset M_f $ be \[ L_k=\alpha \times \left\{\frac{2}{4k}, \frac{4}{4k}, \dots, \frac{2k+2}{4k}\right\} \cup \gamma \times \left\{\frac{3}{4k}, \frac{5}{4k}, \dots, \frac{2k+1}{4k}\right\} \cup \beta \times \left\{\frac{1}{4k}\right\}. \] Let $N(L_k)$ denote a tubular neighborhood of $L_k$ and $M_k=M_f\backslash N(L_k)$. We can order the boundary components of $M_k$ as \[\partial M_k=\partial_1M_k\sqcup \dots \sqcup \partial_{2k+2}M_k ,\] where \[ \begin{cases}
\partial_{2i}M_k=\alpha \times \{ \frac{2i}{4k}\} & \text{for any } 1\leq i \leq k+1 \\
\partial_{2i+1}M_k=\gamma \times \{ \frac{2i+1}{4k}\} & \text{for any } 1\leq i \leq k-1\\
\partial_1M_k=\beta \times \{ \frac{1}{4k}\}. &
\end{cases} \]
\begin{lem} The interior of $M_f\backslash N(L_k)$ is hyperbolic and \[\mathrm{Vol}(int (M_f\backslash N(L_k)))\geq 4kV_8.\] \end{lem} \begin{proof} Glue $k$ copies of $A$, top to bottom, to get
\[A_k\cong (S_{0,4}\times [0,1])\backslash \left(\alpha \times \left\{\frac{0}{2k}, \frac{2}{2k}, \dots, \frac{2k}{2k}\right\} \cup \gamma \times \left\{\frac{1}{2k}, \frac{3}{2k}, \dots, \frac{2k-1}{2k}\right\}\right),\] with the $i$-th copy identifying with
\[ \left(S_{0,4}\times \left[\frac{2i-2}{2k},\frac{2i}{2k}\right]\right)\backslash\left(\alpha \times \left\{\frac{2i-2}{2k}, \frac{2i}{2k}\right\}\cup \gamma \times\left\{\frac{2i-1}{2k}\right\}\right). \] By Theorem \ref{adams}, $A_k$ has four totally geodesic thrice-punctured sphere boundary components, and $\mathrm{Vol}(A_k)=4kV_8$.
Cut $M_f\backslash ((\alpha \cup \beta) \times \{1/2\})$ along the two thrice-punctured spheres, i.e. the two regions shown in Figure 4. The two thrice-punctured spheres can be assumed to be totally geodesic by Corollary 2. So the cut-open manifold has four totally geodesic thrice-punctured sphere boundary components. Now glue the top boundary of $A_k$ to the top of the cut by an isometry, with the marked curves and colored faces glued correspondingly. Then apply the same to the bottom boundary. After applying an isotopy to adjust the height, we see that the result is homeomorphic to $M_f\backslash N(L_k)$. Moreover, $A_k$ is isometrically embedded in $M_f\backslash N(L_k)$. Since $\mathrm{Vol}(A_k)\geq 4kV_8$, we have $\mathrm{Vol}(M_f\backslash N(L_k))\geq 4kV_8$.
\begin{figure}
\caption{Cut and glue $A_k$ to $M_f\backslash ((\alpha \cup \beta) \times \{1/2\})$ when $k=3$}
\end{figure}
\end{proof}
\begin{prop} Given $k$, there exists $B_k$, such that if $u_i,v_i>B_k$, then $h_kf$ is pseudo-Anosov and $\mathrm{Vol}(M_{h_kf})\geq 3kV_8$. \end{prop} \begin{proof} Let $M=M_f\backslash N(L_k)$. Let $\beta=\{\frac{1}{v_k}, \frac{1}{u_k},\dots, \frac{1}{v_1},\frac{1}{u_1} \}$; then $M_{h_kf}=M_\beta$, and by Theorem \ref{thurston}, when $u_i,v_i$ are big enough, the volume is approximately equal to $\mathrm{Vol}(M_f\backslash N(L_k))$. In particular, if $u_i,v_i$ are large enough, \[\mathrm{Vol}(int(M_{h_kf}))\geq \mathrm{Vol}(int(M_f\backslash N(L_k)))-kV_8 \geq 3kV_8\] by Lemma 2. \end{proof}
\begin{lem} For $n,m>3$, $M_{h_kf^3_{n,m}}\cong M_{h_kf^3_{n+3,m}}\cong M_{h_kf^3_{n,m+3}}$. \end{lem} \begin{proof} By Proposition 1, $\interior{M}=M_{h_kf}=M_{h_kf^3_{n,m}}$ is hyperbolic. Let $\Sigma_1$ be the subsurface in $S_{0,n+m+2}$ shown in Figure 3 containing 3 punctures, and let $\tau_1$ and $\tau_2$ denote the two components of $\partial\Sigma_1$, where $\tau_1$ and $\tau_2$ are two arcs connecting $x$ and $z$, with $\tau_2=f^3_{n,m}(\tau_1)$.
Construct a surface $\Sigma_2\subset M$ as follows. First, define a map \[\eta=(\eta_1,\eta_2):\Sigma_1\rightarrow S\times[0,1]\] so that $\eta(\Sigma_1)\cap S\times\{0\}=\tau_2\times\{0\}$, $\eta(\Sigma_1)\cap S\times\{1\}=\tau_1\times\{1\}$ and $\eta_1$ is the inclusion of $\Sigma_1$ into $S$. Since $f(\tau_1)=\tau_2$, if we project $p:S\times[0,1] \rightarrow M_f$, $\eta$ defines an embedding of $\Sigma_1/(\tau_1\isEquivTo{f} \tau_2)$, that is, $\Sigma_1$ with $\tau_1$ glued to $\tau_2$ by $f$. Since $\eta_1$ is the inclusion, $\Sigma_2=p\circ \eta(\Sigma_1/\tau_1\isEquivTo{f} \tau_2)$ is transverse to the suspension flow. By Theorem \ref{fried}, $[\Sigma_2] \in \overline{F\cdot \mathbb{R}^+}$.
We will define a surface $S\textprime$ such that $[S\textprime]=[S]+[\Sigma_2]$ in $H^1(M_f)$ as follows. Let $S_{\tau_2}$ denote the surface obtained by cutting $S$ along $\tau_2$. Then $S_{\tau_2}$ has two boundary components, denote $\tau^+_2, \tau^-_2$. Since $\tau_2=p\circ\eta(\Sigma_1)$, and $p\circ\eta(\tau_1)=p\circ\eta(\tau_2)=\tau_2 \subset S \subset M_f$, we can construct $S\textprime$ in $M_f$ by gluing $\tau^+_2$ to $\eta(\tau_2)$ and $\tau^-_2$ to $\eta(\tau_1)$, perturbed slightly to be embedded. Then $[S\textprime]=[S]+[\Sigma_2]$ and $S\textprime \pitchfork \Psi$. So $S\textprime$ is a fiber representing a class in $F\cdot \mathbb{R}^+ \subset H^1(M)$. By Theorem \ref{fried}, the first return map of $\psi$ is the monodromy $f\textprime:S\textprime \rightarrow S\textprime$. This is given by \[ f\textprime(x)=
\begin{cases}
\eta(x) & \text{if } x \in \Sigma_1 \\
f\circ\eta^{-1}(x) & \text{if } x \in \eta(\Sigma_1)\\
f(x) & \text{otherwise}
\end{cases} \] See Figure 5. As indicated by Figure 6, $S\textprime \cong S_{0,n+m+5}$, and up to conjugation, $f\textprime=f^3_{n+3,m}$. Therefore, $M_{h_kf^3_{n,m}}\cong M_{h_kf^3_{n+3,m}}$. Similarly, if we pick another subsurface in $Y$ homeomorphic to $\Sigma_0$, one can show $M_{h_kf^3_{n,m}}\cong M_{h_kf^3_{n,m+3}}$.
\begin{figure}
\caption{Obtain $\Sigma_2$ from $\eta:\Sigma_1\rightarrow S\times[0,1]$ and $S\textprime$ from $S$ and $\Sigma_2$ as shown.}
\end{figure}
\begin{figure}
\caption{Left: $S$. Right: $S\textprime$}
\end{figure}
\end{proof}
\begin{lem} For fixed $k$, and fixed $u_i,v_i\geq B_k$ (the constant from Proposition 3), there exists $R>0$ so that if $n=m\geq R$, then $h_kf^3_{n,n}: S_{0,2n+2}\to S_{0,2n+2}$ has $\log\lambda(h_kf^3_{n,n})\leq 54\frac{\log (2n+2)}{2n+2}$. \end{lem} \begin{proof} We take the spine $G$ of $S_{0,n+m+2}$ shown in Figure 7. This is in fact a train track for $f_{n,m}$, as described in \cite{hironaka}, and hence also for $f$. Then $f$ induces a map $g:G\to G$.
\begin{figure}
\caption{Spine of $S_{0,n+m+2}$ when $n=m=8$}
\end{figure}
The graph $G$ contains the loop edges $a_1, a_2, \dots, a_n$, and $a\textprime_2, a\textprime_3, \dots, a\textprime_m$, on which $g$ acts as a permutation, and ``peripheral'' edges $b_1, b_2, \dots, b_n$, and $b\textprime_1, b\textprime_2, \dots, b\textprime_m$, on which $g$ also acts as a permutation. The transition matrix has the following form: \[ T= \left[
\begin{array}{c|c} A & *\\ \hline 0 & P\\ \end{array} \right] \] where $P$ corresponds to $e_1, e_2, \dots, e_n$, $e\textprime_1, e\textprime_2, \dots, e\textprime_m$. The matrix $A$ is a permutation matrix corresponds to $a_1, a_2, \dots, a_n$, $a\textprime_1, a\textprime_2, \dots, a\textprime_m$, $b_1, b_2, \dots, b_n$, $b\textprime_1, b\textprime_2, \dots, b\textprime_m$. So the largest eigenvalue of $T$ (in absolute value) will be the largest eigenvalue of $P$. If we remove all the non-contributing edges, we have \[ \begin{array}{rcl} e_i & \to & e_{i+3} \mbox{ } \mbox{ for } 1\leq i\leq n-3 \\ e\textprime_i & \to & e\textprime_{i+3} \mbox{ } \mbox{ for } 1< i\leq m-2 \\ e\textprime_1 & \to & e\textprime_4e\textprime_4e\textprime_3e\textprime_3e\textprime_2e\textprime_2e\textprime_1e_1e_2e_2e_3e_3e_4 \\ e_n & \to & e_3e_3e_2e_2e_1e\textprime_1e\textprime_2e\textprime_2e\textprime_3e\textprime_3e\textprime_4 \\ e\textprime_m & \to & e\textprime_3e\textprime_3e\textprime_2e\textprime_2e\textprime_1e_1e_2e_2e_3 \\ e_{n-1} & \to & e_2e_2e_1e\textprime_1e\textprime_2e\textprime_2e\textprime_3 \\ e\textprime_{m-1} & \to & e\textprime_2e\textprime_2e\textprime_1e_1e_2 \\ e_{n-2} & \to & e_1e\textprime_1e\textprime_2 \end{array} \]
\begin{figure}
\caption{The directed graph $\Gamma$ associated to $f$.}
\end{figure}
\begin{figure}
\caption{$D$: edges marked thick denote two directed edges between corresponding vertices}
\end{figure}
Assuming $n=m$, we get the directed graph $\Gamma$ associated to $f$ (or $g$) and $T$ (with only the contributing edges) as shown in Figure 8. The graph is made of 6 big ``loops'' going clockwise, together with a subgraph $D$. The subgraph $D$ is given by the relations determined by $g$ above, as shown in Figure 9, and contains one loop at $e\textprime_1$. For simplicity, the graph of $D$ in Figure 9 omits the arrows in between. All edges with omitted arrows implicitly point from left to right. The edges marked thick indicate that there are two edges connecting those vertices. Thus, a path of a given length passing through $D$ once will either \begin{itemize} \item directly go from left to right with length 1. \item go from left to $e\textprime_1$, then wrap around the loop at $e\textprime_1$ some number of times, then go to the right. \item pass $e_1$ and go to $e_4$. \end{itemize} Given two vertices, the number of paths of length $\frac{n}{13}$ between them which pass through $D$ is therefore at most 2.
Now we let $\Sigma_0$ surround $V_{\lfloor \frac{n}{2} \rfloor-1}, V_{\lfloor \frac{n}{2} \rfloor},V_{\lfloor \frac{n}{2} \rfloor+1}$, fix $h_k$ and consider a graph map $g_k\simeq h_kf$ and its matrix $T_k$. Note that $h_k$ is supported in a neighborhood of $\Sigma_0$. Let $a_j, a_{j+1}, a_{j+2}$ denote the three loops wrapping around the three punctures in $\Sigma_0$. If we remove all the non-contributing edges, after homotopy, $h_k$ sends $e_j,e_{j+1}, e_{j+2}$ to a combination of $e_j,e_{j+1}, e_{j+2}$ without acting on other edges. Thus $g_k\simeq h_kf$ sends $e_{j-3},e_{j-2}, e_{j-1}$ to a combination of $e_j,e_{j+1}, e_{j+2}$ and acts on the rest of the edges as $g\simeq f$ does.
Then we get the directed graph $\Gamma_k$ associated to $T_k$ and $g_k$ as shown in Figure 10. The graph $\Gamma_k$ is the same as $\Gamma$ away from $e_{j-3},e_{j-2}, e_{j-1},e_j,e_{j+1}, e_{j+2}$, and has a subgraph $D_k$ given by $h_k$. The subgraph $D_k$ is a bipartite graph with 3 vertices in each set, $\{e_j,e_{j+1}, e_{j+2}\}$ and $\{e_{j-3},e_{j-2}, e_{j-1}\}$. All edges of $D_k$ point from right to left, from $\{e_{j-3},e_{j-2}, e_{j-1}\}$ to $\{e_j,e_{j+1}, e_{j+2}\}$. The number of edges between any two vertices in different sets is bounded above by some $E_k>0$ depending on $h_k$. See Figure 11.
When $n=m$ is big enough, any path of length $\frac{n}{13}$ can't intersect $D$ and $D_k$ simultaneously. Thus given any two vertices, the number of paths of length $\frac{n}{13}$ between those vertices is bounded above by $N_k=\max\{2, E_k\}$. The number of paths of length $\frac{n}{13}$ emanating from a given vertex is thus at most $2nN_k$. Then for $\lambda_0$, the leading eigenvalue of $T_k$, by Proposition \ref{pf}, we have \[\log \lambda_0 \leq \frac{\log 2nN_k}{\frac{n}{13}}.\]
When $n>N_k$ is large enough, we have \[ \log \lambda_0 \leq \frac{\log 2nN_k}{\frac{n}{13}} < \frac{2\log (2n+2)}{\frac{2n}{26}} < \frac{2\log (2n+2)}{\frac{2n+2}{27}}=54\frac{\log (2n+2)}{2n+2}. \] The result follows since $\lambda(h_kf)\leq \lambda_0$. \end{proof} \begin{figure}
\caption{The directed graph $\Gamma_k$ associated to $h_kf$.}
\end{figure} \begin{figure}
\caption{$D_k$: each directed edge shown represents $\leq E_k$ directed edges.}
\end{figure}
Now we finish the proof of Theorem \ref{main}. Part (1) is given by Lemma 2. Part (2) is given by Proposition 3. Part (3) is given by Lemma 4.
\end{document} | arXiv |
Robust volcano plot: identification of differential metabolites in the presence of outliers
Nishith Kumar1,2,
Md. Aminul Hoque1 &
Masahiro Sugimoto3
The identification of differential metabolites in metabolomics is still a big challenge and plays a prominent role in metabolomics data analyses. Metabolomics datasets often contain outliers because of analytical, experimental, and biological ambiguity, but the currently available differential metabolite identification techniques are sensitive to outliers.
We propose a kernel weight based outlier-robust volcano plot for identifying differential metabolites from noisy metabolomics datasets. Two numerical experiments are used to evaluate the performance of the proposed technique against nine existing techniques, including the t-test and the Kruskal-Wallis test. Artificially generated data with outliers reveal that the proposed method results in a lower misclassification error rate and a greater area under the receiver operating characteristic curve compared with existing methods. An experimentally measured breast cancer dataset to which outliers were artificially added reveals that our proposed method produces only two non-overlapping differential metabolites whereas the other nine methods produced between seven and 57 non-overlapping differential metabolites.
Our data analyses show that the performance of the proposed differential metabolite identification technique is better than that of existing methods. Thus, the proposed method can contribute to analysis of metabolomics data with outliers. The R package and user manual of the proposed method are available at https://github.com/nishithkumarpaul/Rvolcano.
In bioinformatics, molecular omics studies, such as genomics, transcriptomics, proteomics and metabolomics, are playing a prominent role in life sciences, health and biological research [1]. Among these approaches, metabolomics is frequently used to understand biological metabolic status, making a direct link between genotypes and phenotypes [2]. Many metabolomics-based biomarker discoveries have explored the key metabolites to discriminate between metabolic diseases, such as diabetes, cardiovascular diseases, and cancers [3]. Metabolites showing different concentrations between given groups (e.g. healthy and disease subjects) are called differential metabolites. Combinations of these metabolites can be used to identify subjects with a high risk of suffering from diabetes [4]. Thus, one of the most important tasks of metabolomics research is to identify a differential metabolite, or a set of differential metabolites, that can differentiate patients with a disease from healthy subjects. The accurate identification of differential metabolites, or molecules that reflect a specific phenotype, is a cornerstone of many applications, such as predicting disease status and drug discovery [5,6,7,8].
To generate high-throughput metabolomics data, nuclear magnetic resonance (NMR) and hyphenated mass spectrometry (MS), such as gas chromatography-MS (GC-MS) and liquid chromatography-MS (LC-MS), are commonly used. These platforms can simultaneously identify and quantify hundreds of metabolites. All these analytical platforms can result in missing values in the observed data and outliers, which are caused by various reasons including analytical, experimental, and human errors, low quality measurements, malfunctioning equipment, and overlapping signals [9,10,11,12,13,14,15,16,17,18,19,20]. Thus, subsequent metabolomics data analysis should consider the presence of these problems in the given data.
Four types of statistical procedure have primarily been used to identify differential metabolites: (i) classical parametric approaches, such as Student's t-test [21], classical volcano plot (CVP) [22] and fold change rank ordering statistics (FCROS) [23], (ii) classical non-parametric approaches, such as significance analysis of microarrays (SAM) [24], and the Wilcoxon [25] and Kruskal-Wallis (K-W) [26] tests, (iii) Bayesian parametric approaches, such as Bayesian robust inference for differential gene expression (BRIDGE) [27], empirical Bayes methods for microarrays (EBarrays) [28], and linear models for microarrays (Limma) [29], and (iv) Bayesian non-parametric approaches [30, 31]. In classical procedures, differential metabolites are identified using p-values (significance levels) that are estimated based on the distribution of a test statistic or a permutation, whereas in Bayesian procedures, differential metabolites are identified using posterior probabilities. However, most of the aforementioned techniques are not robust against outliers [27, 32]. Thus, they may produce misleading results in the presence of outlying samples or irregular concentrations of metabolites. Moreover, outlying samples or irregular concentrations of metabolites may violate the normality assumption in metabolomics datasets. Several nonparametric approaches (Wilcoxon and K-W test) and some Bayesian approaches (BRIDGE and Robust limma) are robust against outliers; however, increases in the number of outliers in these techniques reduce the accuracy of differential metabolite identification. One of the easiest ways to overcome this problem is to delete the outlying metabolites or outlying samples from the dataset. However, the deleted metabolites may be important metabolites in some cases, while deleting samples and metabolites can make the dataset much smaller or even vanish.
Comparatively, CVP [22] is a good technique for identifying differential metabolites because it can control the false discovery rate [33]. The volcano plot is based on p-values from a t-test and fold-change (FC) values [34], both of which depend on classical location and scatter, and thus volcano plot is affected by outliers. Therefore, in this paper, we develop an outlier-robust volcano plot by unifying CVP and a kernel weight function to overcome the problem of outliers. The advantage of the proposed method compared to existing methods is that it performs considerably better in the presence of outliers. We introduced a kernel weight function, which plays a key role in the performance of the proposed method. Robust volcano plot ensures robustness by producing smaller weights for outlying observations from the kernel weight function. Appropriate selection of the tuning parameter for the kernel function also improves the performance of the proposed method, as discussed later.
Metabolomics datasets frequently contain outliers, and all of the existing differential metabolite identification techniques are influenced by outliers to some degree; as a result, outliers reduce the accuracy of differential metabolite identification. Therefore, in this paper we develop a kernel weight based outlier-robust volcano plot for detecting differential metabolites from metabolomics datasets in the presence of outliers. To measure the performance of the proposed method in comparison with other techniques, we consider nine existing differential metabolite identification techniques: three classical parametric approaches (t-test, FCROS, CVP), three nonparametric approaches (Wilcoxon test, K-W test, SAM) and three Bayesian approaches (BRIDGE, Limma, EBarrays). We also evaluate the performance of the proposed method using both artificially generated and experimentally measured metabolomics datasets in the absence and presence of outliers. Every metabolite identification method has a specific cutoff, and the choice of cutoff strongly affects which metabolites are identified and hence the subsequent statistical analyses. In this paper, the cutoff for the t-test, SAM, the Wilcoxon test and the K-W test was taken as a Bonferroni-corrected p-value < 0.05. Following Dembélé et al. [23], we declared metabolites as differential if their f-value was close to 0 or 1, and as non-differential if their f-value was close to 0.5. For CVP, a metabolite was said to be differential if p-value < 0.05 and |log2(fold-change)| > 1. For the Bayesian approaches, we took the cutoff as Bonferroni-corrected posterior probabilities > 0.95.
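As a small illustration, the two frequentist cutoff rules above can be written in R as follows; this is our sketch only, where pvals and log2fc are assumed vectors of per-metabolite p-values and log2 fold-changes (the object names are ours, not from any package).
# Bonferroni-corrected significance, as used for the t-test, SAM, Wilcoxon and K-W tests
bonf_sig <- p.adjust(pvals, method = "bonferroni") < 0.05
# Volcano-plot rule, as used for CVP (and for RVP below)
cvp_sig <- pvals < 0.05 & abs(log2fc) > 1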
In this paper, a kernel weight based outlier-robust volcano plot is developed for detecting differential metabolites. To reduce the family wise error rate when comparing the performance of the proposed method with existing differential metabolite identification techniques, the p-values are adjusted using Bonferroni correction. The algorithm for outlier-robust volcano plot is given below.
Outlier-robust volcano plot (proposed)
We extend volcano plot by introducing a kernel weight function behind CVP. Classical volcano plot identifies differential metabolites using the t-test and fold-change (FC) methods, and plots log2 (fold-change) on the X-axis against -log10 (p-value) from the t-test on the Y-axis. Because the t-statistic depends on mean and variance and fold-change depends on mean, CVP is heavily influenced by outliers. Therefore, we use the kernel weighted average and variance instead of the classical mean and variance in the t-statistic and fold-change functions, and also plot log 2 fold-change on the X-axis and -log10 (p-value) from the t-test on the Y-axis. We refer to this procedure as robust volcano plot (RVP).
Let X = (x ij ); i = 1, 2, …, p and j = 1, 2, …, n, be a metabolomics data matrix with p metabolites and n samples. The rows and columns of X represent the metabolites and samples, respectively. In metabolomics data analysis, differential metabolites are the metabolites that show different concentrations between two groups (disease and control) of samples in a metabolomics dataset. According to the control and disease groups, the dataset can be expressed as
$$ X=\left[\begin{array}{c}\overset{Control}{\overbrace{x_{11}\kern0.5em {x}_{12}\kern0.5em \cdots \kern0.5em {x}_{1{g}_1}}}\\ {}{x}_{21}\kern0.5em {x}_{22}\kern0.5em \cdots \kern0.5em {x}_{2{g}_1}\\ {}\begin{array}{cccc}\vdots \kern0.75em & \vdots \kern0.5em & \ddots & \vdots \end{array}\\ {}\begin{array}{cccc}{x}_{p1}& {x}_{p2}& \cdots & {x}_{pg_1}\end{array}\end{array}\kern0.5em \begin{array}{c}\overset{Disease}{\overbrace{\begin{array}{cccc}{x}_{1\left({g}_1+1\right)}& {x}_{1\left({g}_1+2\right)}& \cdots & {x}_{1n}\end{array}}}\\ {}\begin{array}{cccc}{x}_{2\left({g}_1+1\right)}& {x}_{2\left({g}_1+2\right)}& \cdots & {x}_{2n}\end{array}\\ {}\begin{array}{cccc}\vdots \kern1.5em & \vdots \kern1.5em & \ddots & \vdots \end{array}\\ {}\begin{array}{cccc}{x}_{p\left({g}_1+1\right)}& {x}_{p\left({g}_1+2\right)}& \cdots & {x}_{pn}\end{array}\end{array}\right], $$
where g1 is the number of subjects in the control group and (n − g1) is the number of subjects in the disease group. In CVP, log 2 (fold-change) and -log10(p-value) from the t-test are calculated as follows.
The log 2 (fold-change) value for the i-th metabolite is
$$ {\log}_2\left({FC}_i\right)={\log}_2\left(\frac{{\overline{X}}_i^D}{{\overline{X}}_i^C}\right), $$
where \( {\overline{X}}_i^D \) represents the average intensity of the i-th metabolite for the disease group and \( {\overline{X}}_i^C \) represents the average intensity of the i-th metabolite for the control group.
The t-statistic for testing the hypothesis that the i-th metabolite is not differential, i.e.
$$ {H}_0:\kern0.75em {\mu}_i^C={\mu}_i^D\kern1.5em \mathrm{against}\kern1.5em {H}_1:\kern1em {\mu}_i^C\ne {\mu}_i^D, $$
for \( \sigma_{iC}^2 = \sigma_{iD}^2 \) is
$$ t=\frac{{\overline{X}}_i^C-{\overline{X}}_i^D}{\sqrt{S_i^2\left(\frac{1}{g_1}+\frac{1}{n-{g}_1}\right)}} $$
$$ {\displaystyle \begin{array}{l}{\overline{X}}_i^C=\frac{\sum \limits_{j=1}^{g_1}{x}_{ij}}{g_1};\kern1.25em {\overline{X}}_i^D=\frac{\sum \limits_{j={g}_1+1}^n{x}_{ij}}{n-{g}_1};\kern0.5em {S}_{iC}^2=\frac{1}{g_1-1}\sum \limits_{j=1}^{g_1}{\left({x}_{ij}-{\overline{X}}_i^C\right)}^2\kern0.75em ;\kern0.5em {S}_{iD}^2=\frac{1}{n-{g}_1-1}\sum \limits_{j={g}_1+1}^n{\left({x}_{ij}-{\overline{X}}_i^D\right)}^2\\ {};{S}_i^2=\frac{\left({g}_1-1\right){S}_{iC}^2+\left(n-{g}_1-1\right){S}_{iD}^2}{n-2}.\end{array}} $$
The value from eq. (2) is compared with Student's t-value with n − 2 degrees of freedom.
If \( \sigma_{iC}^2 \neq \sigma_{iD}^2 \), then the test statistic is
$$ t=\frac{{\overline{X}}_i^C-{\overline{X}}_i^D}{\sqrt{\left(\frac{S_{iC}^2}{g_1}+\frac{S_{iD}^2}{n-{g}_1}\right)}} $$
In both cases, the p-value is calculated using
$$ p-\mathrm{value}=\underset{t_{calculated}}{\overset{\infty }{\int }}f(t) dt $$
In CVP, the FC value from (1) and t-value from (2) or (3) are calculated using the classical mean and variance. Because the classical mean and variance are influenced by outliers, we propose RVP using the weighted mean and variance instead of the classical mean and variance. For the weighted mean and variance, we use the kernel weight function \( {w}_j=\exp \left\{-\frac{\lambda }{2{\left( mad\left({x}_j\right)\right)}^2}{\left({x}_{ij}- median\left({x}_j\right)\right)}^2\right\}, \) where mad is the median absolute deviation. The value of this weight function lies between zero and one, and is close to zero if the observation is far from the median and close to one if the observation is close to the median. In the weight function, the tuning parameter λ is selected using k-fold cross validation (Fig. 1 summarizes the λ selection procedure). If the dataset does not contain outliers, then the value of λ is zero and all the weights are equal to 1, so the method is the same as the classical approach. The steps for RVP are given below.
Step − 1. Calculate log 2 (fold change) for the i-th metabolite as \( {\log}_2\left({FC}_i\right)={\log}_2\left(\frac{{\overline{X}}_i^D}{{\overline{X}}_i^C}\right), \) where \( {\overline{X}}_i^D=\sum \limits_{j={g}_1+1}^n{w}_j{x}_{ij}/\left(n-{g}_1\right) \) represents the weighted average intensity of the i-th metabolite for the disease group and \( {\overline{X}}_i^C=\sum \limits_{j=1}^{g_1}{w}_j{x}_{ij}/{g}_1 \) represents the weighted average intensity of the i-th metabolite for the control group.
Step − 2. Using the weighted average and weighted variance instead of the classical mean and variance, calculate -log10(p-value) for the i-th metabolite from the t-test using eqs. (2), (3) and (4), where \( {S}_{iC}^2=\sum \limits_{j=1}^{g_1}{w}_j{\left({x}_{ij}-{\overline{X}}_i^C\right)}^2/\left({g}_1-1\right) \) and \( {S}_{iD}^2=\sum \limits_{j={g}_1+1}^n{w}_j{\left({x}_{ij}-{\overline{X}}_i^D\right)}^2/\left(n-{g}_1-1\right). \)
Step − 3. Draw a scatter plot with log2 (fold-change) on the X-axis and -log10 (p-value) from the t-test on the Y-axis. This plot is considered to be an outlier-robust volcano plot (RVP). A metabolite is said to be differential if p-value < 0.05 and | log2 (fold-change) | > 1.
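To make Steps 1–3 concrete, the following R sketch computes the RVP quantities for a metabolite matrix X (metabolites in rows, samples in columns) whose first g1 columns are the control group. It is an illustrative sketch only, not the code of the Rvolcano package: the weights here are computed per metabolite, relative to that metabolite's median and mad (one reading of the weight formula above), and the default lambda = 1 is arbitrary (the paper selects lambda by k-fold cross validation).
rvp_stats <- function(X, g1, lambda = 1) {
  n <- ncol(X)
  ctrl <- 1:g1
  dis <- (g1 + 1):n
  res <- t(apply(X, 1, function(x) {
    # kernel weights: near 1 close to the median, near 0 for outlying values
    w <- exp(-lambda / (2 * mad(x)^2) * (x - median(x))^2)
    xbC <- sum(w[ctrl] * x[ctrl]) / g1                     # weighted control mean (Step 1)
    xbD <- sum(w[dis] * x[dis]) / (n - g1)                 # weighted disease mean (Step 1)
    s2C <- sum(w[ctrl] * (x[ctrl] - xbC)^2) / (g1 - 1)     # weighted variances (Step 2)
    s2D <- sum(w[dis] * (x[dis] - xbD)^2) / (n - g1 - 1)
    s2 <- ((g1 - 1) * s2C + (n - g1 - 1) * s2D) / (n - 2)  # pooled variance, as in eq. (2)
    tstat <- (xbC - xbD) / sqrt(s2 * (1 / g1 + 1 / (n - g1)))
    p <- 2 * pt(-abs(tstat), df = n - 2)                   # two-sided p-value
    c(log2FC = log2(xbD / xbC), negLog10P = -log10(p))
  }))
  as.data.frame(res)
}
# Step 3: draw the volcano plot; differential if p < 0.05 and |log2 fold-change| > 1
plot_rvp <- function(stats) {
  sig <- stats$negLog10P > -log10(0.05) & abs(stats$log2FC) > 1
  plot(stats$log2FC, stats$negLog10P, pch = 20,
       col = ifelse(sig, "red", "grey50"),
       xlab = "log2(fold-change)", ylab = "-log10(p-value)")
  abline(v = c(-1, 1), h = -log10(0.05), lty = 2)
}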
Flowchart of λ selection procedure
The R package of the proposed method with its user manual is available at https://github.com/nishithkumarpaul/Rvolcano.
Any user can install the "Rvolcano" package in R platform from the GitHub using the following code
library(devtools)
install_github("Rvolcano","nishithkumarpaul")
library(Rvolcano)
Instructions for drawing the robust volcano plot with the package are given in the user manual, which is available at the GitHub website.
In this paper, we use an artificially generated dataset and an experimentally measured metabolomics dataset to evaluate the performance of the proposed method in comparison with nine other methods.
Artificial data
In this study, as in [6], we generate an artificial metabolomics dataset using a one-way ANOVA model \( y_{ijk} = \mu_i + g_{ij} + \epsilon_{ijk} \), where \( y_{ijk} \) is the intensity of the ith metabolite, jth group and kth sample, \( \mu_i \) denotes the overall intensity of metabolite i, \( g_{ij} \) is the jth group effect for the ith metabolite, and \( \epsilon_{ijk} \) is a random error term. In this linear model, \( \mu_i \sim \mathrm{uniform}(10, 20) \) and \( \epsilon_{ijk} \sim N(0,1) \). The disease and control group effects for increased concentrations of metabolites are \( g_{ij} = N(4, 1) \) and \( g_{ij} = N(2, 1) \), respectively; for decreased concentrations of metabolites, we use \( g_{ij} = N(2, 1) \) and \( g_{ij} = N(4, 1) \) for the disease and control groups, respectively. Both group effects for non-differential (equal concentration) metabolites are \( g_{ij} = N(0, 1) \). To create the artificial metabolomics dataset, we designated 130 metabolites as non-differential and 20 metabolites as differential (having differential concentrations). The dataset contained 70 subjects with 40 subjects in group-1 and 30 subjects in group-2. To investigate the performance of the proposed method under different conditions, outliers were randomly distributed in the artificially generated data matrix at different rates (5%, 10%, 15%, 20%, and 25%). Note that these outliers can fall anywhere in the data matrix. The outliers for the i-th metabolite were taken from a normal distribution with mean \( 3\mu_i \) and variance \( \sigma_i^2 \), i.e. \( N(3\mu_i, \sigma_i^2) \), where \( \mu_i \) and \( \sigma_i^2 \) are the mean and variance of the i-th metabolite. In total, 500 artificial datasets were generated for each condition, and the performance of the proposed method was evaluated using these datasets.
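A compact R sketch of this data-generating procedure is given below. It is our illustration only (all object names are ours); it assumes one group-effect draw per metabolite and group, an even 10/10 split between increased and decreased differential metabolites, and arbitrary labelling of the two groups, none of which are fixed by the description above.
set.seed(1)
n1 <- 40; n2 <- 30                                # group sizes
make_metabolite <- function(effect1, effect2) {
  mu <- runif(1, 10, 20)                          # overall intensity mu_i ~ uniform(10, 20)
  g1 <- rnorm(1, effect1, 1)                      # group effects g_ij
  g2 <- rnorm(1, effect2, 1)
  c(mu + g1 + rnorm(n1), mu + g2 + rnorm(n2))     # errors ~ N(0, 1)
}
up   <- t(replicate(10, make_metabolite(2, 4)))   # increased in group 2
down <- t(replicate(10, make_metabolite(4, 2)))   # decreased in group 2
null <- t(replicate(130, make_metabolite(0, 0)))  # non-differential metabolites
X <- rbind(up, down, null)                        # 150 x 70 artificial data matrix
# Replace a given fraction of entries with outliers drawn from N(k * mu_i, sd_i^2)
add_outliers <- function(X, rate, k = 3) {
  idx <- which(runif(length(X)) < rate)           # linear indices of contaminated cells
  row <- (idx - 1) %% nrow(X) + 1                 # row (metabolite) of each cell
  X[idx] <- rnorm(length(idx), k * rowMeans(X)[row], apply(X, 1, sd)[row])
  X
}
X15 <- add_outliers(X, rate = 0.15)               # e.g. 15% outliers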
Experimentally measured data
In this paper, we considered a well-known, publicly available metabolomics dataset of breast cancer and control serum samples containing metabolite abundance measurements from different subjects. This dataset is available from the National Institutes of Health (NIH) data repository and was collected by the University of Hawaii Cancer Center under study ID ST000356. The data were measured using a gas chromatography with time-of-flight mass spectrometry (GC-TOFMS) instrument and quantified using the ChromaTOF software (v4.33, Leco Co, CA, USA). The dataset contains 134 subjects (103 breast cancer patients without any treatment and 31 control subjects) and 101 metabolites. Auto-scaling was used to reduce the systematic variation in the dataset. To investigate the performance of the proposed method under different conditions, we also modified the dataset by replacing 5%, 10%, and 15% of the real values with draws from \( N(4\mu_i, \sigma_i^2) \), where \( \mu_i \) and \( \sigma_i^2 \) are the mean and variance of the i-th metabolite in the breast cancer data matrix.
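This contamination step can be expressed with the same illustrative add_outliers helper sketched above, using multiplier 4; here bc is an assumed matrix holding the breast cancer data (metabolites in rows), not an object provided by any package.
bc05 <- add_outliers(bc, rate = 0.05, k = 4)
bc10 <- add_outliers(bc, rate = 0.10, k = 4)
bc15 <- add_outliers(bc, rate = 0.15, k = 4)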
The performance of our proposed method was compared with nine existing methods using both the artificial and experimental datasets.
Performance evaluation based on artificially generated data
The performance of the proposed method was compared with those of nine existing methods using 500 artificial datasets. The misclassification error rates (MERs) for differential metabolite identification were calculated for each method. The true positive rate (TPR), false positive rate (FPR), true negative rate (TNR), false negative rate (FNR), the area under the receiver operating characteristic (ROC) curve (AUC), and the partial AUC (pAUC with FPR ≤ 0.2) were also calculated. The above performance indices were calculated both in the absence and presence of outliers. The average MER, AUC and pAUC values for the artificial datasets are shown in Additional file 1: Table S1. A method with a lower MER value and higher AUC and pAUC values is considered better. From Additional file 1: Table S1, we observe that our proposed method gave a lower MER value and higher AUC and pAUC values both in the absence and presence of outliers. We also present the ROC curves in Fig. 2 and boxplots of the 500 MER and AUC values in Figs. 3 and 4, respectively. Figure 2 shows that our proposed method gave a higher average TPR with respect to average FPR in comparison with the existing methods, both in the absence of outliers and with 15% outliers. In Fig. 3, it is clear that the proposed method produced a smaller MER with minimum variability, and in Fig. 4, the proposed method gave higher AUC values with minimum variability, both in the absence of outliers and with 15% outliers. To check the robustness of the different methods, we plotted the ROC curve and a boxplot of MER and AUC values for the artificial datasets for different rates (0%, 5%, 10%, 15%, 20% and 25%) of outliers. The graphs are shown in Additional file 2: Figures S1, S2 and S3. Furthermore, to measure the efficiency of the proposed method, we also calculated the power and false discovery rate (FDR) for small samples in both the absence and presence of outliers (Additional file 1: Table S2). In Additional file 1: Table S2, the proposed method gave higher power and a lower FDR in the absence and presence of outliers for small sample sizes. Moreover, we calculated the execution time (speed of execution) in seconds of the different methods, including the proposed one, for different numbers of metabolites and samples (Additional file 1: Table S3). Additional file 1: Table S3 shows that the execution time of the proposed technique was lower than that of the robust Bayesian technique BRIDGE in all cases, but comparatively higher than the execution times of the other techniques. This is one of the limitations of the proposed technique. Another limitation of the proposed technique is that it is only applicable to comparisons between two groups. Although the proposed technique has several limitations, on the basis of the above analyses of the artificial datasets, we conclude that the proposed RVP-based differential metabolite identification technique performs better than the nine existing methods.
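The evaluation metrics used here can be computed with a few lines of base R; the sketch below is ours, assuming truth is the vector of true differential labels, pred a method's calls, and score a per-metabolite ranking score such as -log10 p-value.
conf_metrics <- function(truth, pred) {
  tp <- sum(pred & truth);  fp <- sum(pred & !truth)
  fn <- sum(!pred & truth); tn <- sum(!pred & !truth)
  c(TPR = tp / (tp + fn), FPR = fp / (fp + tn),
    TNR = tn / (tn + fp), FNR = fn / (fn + tp),
    MER = (fp + fn) / length(truth))
}
# Rank-based (Mann-Whitney) AUC: probability that a truly differential
# metabolite receives a higher score than a non-differential one
auc_rank <- function(score, truth) {
  r <- rank(score)
  n1 <- sum(truth); n0 <- sum(!truth)
  (sum(r[truth]) - n1 * (n1 + 1) / 2) / (n1 * n0)
}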
Performance evaluation using ROC curves for different differential metabolite identification techniques a in the absence of outliers, b with 15% outliers, c zoom image of upper left region in absence of outliers, and d zoom image of upper left region with 15% outliers
Performance evaluation using box plots of 500 misclassification error rates (MERs) for different differential metabolite identification techniques a in the absence of outliers, and b with 15% outliers
Performance evaluation using box plots of 500 AUC values for different differential metabolite identification techniques a in the absence of outliers, and b with 15% outliers
Performance evaluation based on experimentally measured data
For the experimentally measured metabolomics (breast cancer) data, the performance of the proposed method was measured using differential metabolite identification from the experimental dataset and the modified experimental datasets. The experimental data were modified by artificially incorporating different rates (5%, 10% and 15%) of outliers. Methods that identified similar combinations of differential metabolites for the experimental dataset and the modified datasets were considered to be more outlier-robust. Additional file 1: Table S4 shows the number of differential metabolites identified by different methods for the original and modified datasets. While Additional file 1: Table S4 shows the differences in the number of differential metabolites identified by each method in the absence and presence of outliers, we also need to know which methods identified similar combinations of differential metabolites. We use Venn diagrams to find methods that gave similar combinations of differential metabolites in the absence and presence of outliers. From Additional file 1: Table S4, we chose three techniques (Limma, FCROS and the proposed method) according to the lowest variability in the number of differential metabolites. The corresponding Venn diagrams are presented in Fig. 5. Venn diagrams for all methods are given in Additional file 2: Figure S4. From Fig. 5, we observe that our proposed method produced similar combinations of differential metabolites in the absence and presence of outliers. Therefore, we conclude that our proposed method performs better than the nine existing techniques.
Performance evaluation using Venn diagrams for the number of differential metabolites identified by a RVP, b Limma, and c FCROS for the experimental dataset
Because we modified CVP to create an outlier-robust version called RVP, we also examine the results of these two methods for the experimental data. For the experimental dataset, CVP identified 36 metabolites as differential, whereas our proposed RVP identified 37 metabolites as differential (Fig. 6). The same 36 metabolites were identified by both methods, while RVP also identified cyclohexanone. From reviewing the literature, we found that cyclohexanone is a metabolite that is associated with breast cancer as well as several other cancers (Table 1) [35,36,37,38,39,40]. This suggests that our method is more reliable for differential metabolite identification.
Differential metabolite identification for the experimental dataset using a CVP, and b RVP
Table 1 Literature review of cyclohexanone metabolite associated diseases
Sometimes, a set of metabolites may show the same pattern of behavior, in that if one of them is differential then the whole set is identified as differential. To identify the potential biomarkers from the 37 differential metabolites identified by RVP, we clustered the differential metabolites using hierarchical clustering (Fig. 7) and found the most important metabolites in each cluster (the importance score is calculated using a support vector machine (SVM) classifier with radial basis kernel function) (Fig. 7). From Fig. 7 (b), we obtained four clusters and chose the most important metabolite from each cluster according to the importance score in Fig. 7 (c). For the first cluster in Fig. 7 (b), there are 15 metabolites of which the most important is glutamate. Similarly, ethanolamine is the most important metabolite for the second cluster, pyruvic acid for the third cluster and cyclohexanone for the fourth cluster. These four metabolites (glutamate, ethanolamine, pyruvic acid, and cyclohexanone) may thus be biomarkers for breast cancer. Laboratory-based targeted metabolomics analysis to test this hypothesis could be an avenue for future research.
Metabolomic biomarker identification for breast cancer. a Heatmap plot of up-regulation and down-regulation for the 37 differential metabolites identified by the proposed method (red indicates cancer samples and blue indicates control samples). b Clustering of the 37 differential metabolites for the experimental dataset. Hierarchical clustering was used after normalizing the experimentally measured breast cancer dataset by auto-scaling. c Ranking of the 37 differential metabolites according to the importance score calculated using an SVM classifier with radial basis kernel function
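The clustering and ranking step in Fig. 7 can be sketched in R as follows. This is our illustration: X_diff is an assumed auto-scaled matrix of the 37 differential metabolites (rows named by metabolite) and group the cancer/control factor, and caret's varImp() provides a per-metabolite, filter-type importance that may differ from the exact score used by the authors.
library(caret)                                   # requires the kernlab package for svmRadial
cl <- cutree(hclust(dist(X_diff)), k = 4)        # four clusters of metabolites
fit <- train(x = t(X_diff), y = group, method = "svmRadial",
             preProcess = c("center", "scale"))  # SVM with radial basis kernel
imp <- varImp(fit)$importance                    # importance score per metabolite
# candidate biomarkers: the top-ranked metabolite within each cluster
sapply(split(rownames(X_diff), cl),
       function(m) m[which.max(imp[m, 1])])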
Outlying observations weaken the performance of existing differential metabolite identification techniques. In this paper, we have proposed a new outlier-robust differential metabolite identification technique for identifying differential metabolites in the presence of outliers. To investigate the performance of our proposed method, we analyzed artificial data and experimental data in the absence and presence of outliers. We also compared the performance of our proposed method with nine existing differential metabolite identification techniques using the ROC curve, and the average MER, AUC and pAUC values. Both the artificial and experimental data analyses show that our proposed method performed better. The proposed RVP also identified an additional metabolite (cyclohexanone) that was overlooked by CVP, and it has been shown that this metabolite is associated with several cancers. We recommend using the proposed method to identify differential metabolites from noisy metabolomics datasets.
Abbreviations
ANOVA:
Analysis of variance
AUC:
Area under the ROC curve
BRIDGE:
Bayesian robust inference for differential gene expression
CVP:
Classical volcano plot
FC:
Fold change
FCROS:
Fold change rank ordering statistics
FNR:
False negative rate
FPR:
False positive rate
GC-MS:
Gas chromatography mass spectrometry
GC-TOFMS:
Gas chromatography with time-of-flight mass spectrometry
K-W:
Kruskal–Wallis
LC-MS:
Liquid chromatography mass spectrometry
Limma:
Linear models for microarray
MER:
Misclassification error rate
NIH:
National Institutes of Health
pAUC:
Partial area under the ROC curve
RVP:
Robust volcano plot
SAM:
Significance analysis of microarrays
TNR:
True negative rate
TPR:
True positive rate
References
Gieger C, Geistlinger L, Altmaier E, De Angelis MH, Kronenberg F, Meitinger T, Mewes HW, Wichmann HE, Weinberger KM, Adamski J, Illig T. Genetics meets metabolomics: a genome-wide association study of metabolite profiles in human serum. PLoS Genet. 2008;4(11):e1000282.
Fiehn O. Metabolomics—the link between genotypes and phenotypes. In: Functional Genomics. Netherlands: Springer; 2002. p. 155–71.
Newgard CB. Metabolomics and metabolic diseases: where do we stand? Cell Metab. 2017;25(1):43–56.
Wang TJ, Larson MG, Vasan RS, Cheng S, Rhee EP, McCabe E, Lewis GD, Fox CS, Jacques PF, Fernandez C, O'donnell CJ. Metabolite profiles and the risk of developing diabetes. Nat Med. 2011;17(4):448–53.
Sumner LW, Mendes P, Dixon RA. Plant metabolomics: large-scale phytochemistry in the functional genomics era. Phytochemistry. 2003;62(6):817–36.
Zhan X, Patterson AD, Ghosh D. Kernel approaches for differential expression analysis of mass spectrometry-based metabolomics data. BMC Bioinformatics. 2015;16(1):77.
Mamas M, Dunn WB, Neyses L, Goodacre R. The role of metabolites and metabolomics in clinically applicable biomarkers of disease. Arch Toxicol. 2011;85(1):5–17.
Trusheim MR, Berndt ER, Douglas FL. Stratified medicine: strategic and economic implications of combining drugs and clinical biomarkers. Nat Rev Drug Discov. 2007;6(4):287–93.
Karpievitch YV, Dabney AR, Smith RD. Normalization and missing value imputation for label-free LC-MS analysis. BMC Bioinformatics. 2012;13(16):1–9.
Hrydziuszko O, Viant MR. Missing values in mass spectrometry based metabolomics: an undervalued step in the data processing pipeline. Metabolomics. 2012;8(1):161–74.
Armitage EG, Godzien J, Alonso-Herranz V, López-Gonzálvez Á, Barbas C. Missing value imputation strategies for metabolomics data. Electrophoresis. 2015;36(24):3050–60.
Gromski PS, Xu Y, Kotze HL, Correa E, Ellis DI, Armitage EG, Turner ML, Goodacre R. Influence of missing values substitutes on multivariate analysis of metabolomics data. Meta. 2014;4(2):433–52.
Yang J, Zhao X, Lu X, Lin X, Xu G. A data preprocessing strategy for metabolomics to reduce the mask effect in data analysis. Front Mol Biosci. 2015;2:1–9.
Steuer R, Morgenthal K, Weckwerth W, Selbig J. A gentle guide to the analysis of metabolomic data. In: Metabolomics: Methods and Protocols; 2007. p. 105–26.
DeHaven CD, Evans AM, Dai H, Lawton KA. Organization of GC/MS and LC/MS metabolomics data into chemical libraries. J Cheminform. 2010;2(1):1–12.
Godzien J, Ciborowski M, Angulo S, Barbas C. From numbers to a biological sense: How the strategy chosen for metabolomics data treatment may affect final results. A practical example based on urine fingerprints obtained by LC-MS. Electrophoresis. 2013;34(19):2812–26.
Blanchet L, Smolinska A. Data fusion in metabolomics and proteomics for biomarker discovery. In: Statistical Analysis in Proteomics; 2016. p. 209–23.
We thank Peter Humphries, PhD, from Edanz Group (www.edanzediting.com/ac) for editing a draft of this manuscript.
The R package of the proposed method and a detailed user manual for the package are available at https://github.com/nishithkumarpaul/Rvolcano.
Author affiliations: Department of Statistics, Rajshahi University, Rajshahi, Bangladesh (Nishith Kumar, Md. Aminul Hoque); Bioinformatics Lab, Department of Statistics, Bangabandhu Sheikh Mujibur Rahman Science and Technology University, Gopalganj, Bangladesh (Nishith Kumar); Health Promotion and Preemptive Medicine, Research and Development Center for Minimally Invasive Therapies, Tokyo Medical University, Shinjuku, Tokyo, 160-8402, Japan (Masahiro Sugimoto).
All the authors worked together to develop the robust volcano plot technique. NK analyzed the data, drafted the manuscript, and implemented the statistical analysis. MAH and MS coordinated and supervised the project. All authors read and approved the final manuscript.
Correspondence to Nishith Kumar.
Table S1. Performance evaluations for different methods using average MER, AUC and pAUC values. Table S2. Efficiency Calculation of different techniques using power and FDR in both absence and presence of outliers for small sample sizes. For this analysis 1500 metabolites have been taken in the dataset. Table S3. Execution time calculation in seconds of different methods including the proposed one for different number of metabolites and different number of samples (Computer Configuration: Processor-Intel Core i7 3.6 GHz, RAM-16.0GB, OS- 64 bit & Windows 8). Table S4. Number of differential metabolites identified by different methods. (DOC 136 kb)
Performance evaluation of the proposed technique compared to other techniques using ROC curves and MER and AUC values for the artificial datasets in the absence and presence of outliers. Figure S1. Performance evaluation using ROC curves for different differential metabolite identification techniques (a) in the absence of outliers, (b) with 5% outliers, (c) with 10% outliers, (d) with 15% outliers, (e) with 20% outliers, and (f) with 25% outliers. Figure S2. Performance evaluation using box plots of 500 MERs for different differential metabolite identification techniques (a) in the absence of outliers, (b) with 5% outliers, (c) with 10% outliers, (d) with 15% outliers, (e) with 20% outliers, and (f) with 25% outliers. Figure S3. Performance evaluation using box plots of 500 AUC values for different differential metabolite identification techniques (a) in the absence of outliers, (b) with 5% outliers, (c) with 10% outliers, (d) with 15% outliers, (e) with 20% outliers, and (f) 25% outliers. Figure S4. Performance evaluation using Venn diagrams for the number of differential metabolites identified by different differential metabolite identification methods for the experimental dataset. (DOC 6677 kb)
Kumar, N., Hoque, M.A. & Sugimoto, M. Robust volcano plot: identification of differential metabolites in the presence of outliers. BMC Bioinformatics 19, 128 (2018). https://doi.org/10.1186/s12859-018-2117-2
Differential metabolites
Receiver operating characteristic (ROC) curve | CommonCrawl |
\begin{document}
\title{Multi-class Probabilistic Bounds for Self-learning}
\author{Vasilii Feofanov, Emilie Devijver, Massih-Reza Amini\\ \{Firstname.LastName\}@univ-grenoble-alpes.fr \\ Univ. Grenoble Alpes, CNRS, Grenoble INP, LIG\\ Grenoble, France}
\maketitle
\begin{abstract} Self-learning is a classical approach for learning with both labeled and unlabeled observations which consists in giving pseudo-labels to unlabeled training instances with a confidence score over a predetermined threshold. At the same time, the pseudo-labeling technique is prone to error and runs the risk of adding noisy labels into unlabeled training data. In this paper, we present a probabilistic framework for analyzing self-learning in the multi-class classification scenario with partially labeled data. First, we derive a transductive bound over the risk of the multi-class majority vote classifier. Based on this result, we propose to automatically choose the threshold for pseudo-labeling that minimizes the transductive bound. Then, we introduce a mislabeling error model to analyze the error of the majority vote classifier in the case of the pseudo-labeled data. We derive a probabilistic C-bound over the majority vote error when an imperfect label is given. Empirical results on different data sets show the effectiveness of our framework compared to several state-of-the-art semi-supervised approaches. \end{abstract}
\section{Introduction}
We consider classification problems where the scarce labeled training set comes along with a huge number of unlabeled training examples.
This is for example the case in web-oriented applications where a huge number of unlabeled observations arrive sequentially, and there is not enough time to manually label them all.
In this context, the use of traditional supervised approaches trained on available labeled data usually leads to poor learning performance.
In semi-supervised learning \citep{Chapelle:2010}, it is generally assumed that unlabeled training examples contain valuable information about the prediction problem, so the aim is to exploit \textit{both} available labeled and unlabeled training observations in order to provide an improved solution.
The self-learning\footnote{It is also known as self-training or self-labeling.} \citep{Tur:2005,Amini:15} is a classical approach to classify partially labeled data in a supervised fashion, where the training set is augmented by iteratively assigning pseudo-labels to unlabeled examples with the confidence score above a certain threshold. However, fixing this threshold is a bottleneck of this approach. In reality, at every iteration, the self-learning algorithm injects some noise in labeling, so the question would be how to optimally choose the threshold to minimize the mislabeling probability.
In this paper, we tackle this problem from a theoretical point of view for the multi-class classification case and analyze the behavior of majority vote classifiers (also known as Bayes classifiers, including Random Forest \citep{Lorenzen:2019}, AdaBoost \citep{Germain:2015}, SVM \citep{Fakeri-Tabrizi:2015} and neural networks \citep{Letarte:2019}) for semi-supervised learning. The majority vote classifier is well studied in the binary case, where a classical approach is to bound the majority vote risk indirectly by twice the risk of related stochastic Gibbs classifier \citep{Langford:2003,Begin:2014}. However, the voters may compensate the errors of each other, so the majority vote risk will be much smaller than the Gibbs risk.
In the transductive setting \citep[p. 339]{Vapnik:1998}, where the aim is to correctly classify unlabeled training examples, \citet{Feofanov:2019} derived a bound for the multi-class majority vote classifier by analyzing the distribution of the class votes, focusing on the class confusion matrix as an error indicator, as proposed by \citet{Morvant:2012:ICML}. This bound is obtained by analytically solving a linear program, and it turns out that when the majority vote classifier makes most of its errors on examples with a low class vote, the obtained bound is tight. This result was then used to develop a new multi-class self-learning algorithm where the threshold is automatically found based on the proposed transductive bound.
Our paper extends this work by deriving the transductive bounds in the probabilistic framework. In this case, the transductive bound is estimated by assigning soft labels to the unlabeled set, which is more effective in practice as pointed out by \cite{Feofanov:2019}, thus bridging the gap between the theoretical analysis and the application.
Subsequently, we theoretically analyze the behavior of the majority vote classifier after the inclusion of pseudo-labeled training examples by self-learning. Even when the threshold is optimally chosen, the pseudo-labels may still be erroneous, so the question is how to evaluate the risk in this noisy case. For this, we take explicitly into account possible mislabeling by considering a mislabeling model of \cite{Chittineni:1980}. At first, we show the connection between the classification error of the true and the imperfect label. Then, we derive a new probabilistic C-bound over the error of the multi-class majority vote classifier in the presence of imperfect labels. This bound is based on the mean and the variance of the prediction margin \citep{Lacasse:2007}, so it reflects both the individual strength of voters and their correlation in prediction.
The rest of this paper is organized as follows. Section \ref{sec:rel-work} provides an overview of the related work. In Section \ref{sec:framework} we introduce the problem statement and the proposed framework. In Section \ref{sec:tr-study} we present a probabilistic bound over the transductive risk of the multi-class majority vote classifier and describe the extended self-learning algorithm that learns the threshold using the proposed bound. Section \ref{sec:c-bound} shows how to derive the C-bound in the probabilistic framework taking into account mislabeling errors. In Section \ref{sec:num-exper}, we present empirical evidence showing that the proposed self-learning strategy is effective compared to several state-of-the-art approaches, and we illustrate the behavior of the new C-bound on real data sets. Finally, in Section \ref{sec:concl} we summarize the outcome of this study and discuss the future work.
\section{Related Work} \label{sec:rel-work} Generalization guarantees of majority vote classifiers are well studied in the binary supervised setting. A common approach is to bound the majority vote risk by twice the Gibbs risk \citep{Langford:2003}. Many works focus on deriving tight PAC guarantees for the Gibbs classifier in the inductive case \citep{McAllester:2003,Maurer:2004,Catoni:2007} as well as in the transductive one \citep{Derbeko:2004,Begin:2014}, and on applying these results to optimization \citep{Thiemann:2017}, linear classifiers \citep{Germain:2009}, random forests \citep{Lorenzen:2019} and neural networks \citep{Letarte:2019}. While this bound can be tight, it reflects only the individual strength of voters, so using it as a minimization criterion often leads to an increase in the test error \citep{Masegosa:2020}. This motivates opting for bounds that directly upper bound the majority vote error. \cite{Amini:2008} derives a transductive bound based on how voters agree on every unlabeled example, while \cite{Lacasse:2007} upper bounds the generalization error by additionally taking into account the error correlation between voters.
Only few results exist for the multi-class majority vote classifier. In the supervised setting, \citet{Morvant:2012:ICML} derives generalization guarantees on the confusion matrix' norm, whereas \citet{Laviolette:2017} extends the C-bound of \cite{Lacasse:2007} to the multi-class case. \cite{Masegosa:2020} studies tight estimations from data by deriving a relaxed version of \cite{Laviolette:2017}. In the transductive setting, \citet{Feofanov:2019} extends the bound of \cite{Amini:2008} to the multi-class case. In this paper, we show how the bounds of \citet{Feofanov:2019} and \citet{Laviolette:2017} are generalized to the probabilistic framework.
However, the aforementioned studies are limited by assuming that all training examples are perfectly labeled. Learning with an imperfect supervisor, in which training data contains an unknown portion of imperfect labels, has been considered in both supervised \citep{Natarajan:2013,Scott:2015,Xia:2019} and semi-supervised settings \citep{Amini:2003}.
In most cases, the focus is on estimating the mislabeling errors in order to train a classifier, and theoretical studies are limited to the binary case \citep{Natarajan:2013,Scott:2015}. \cite{Chittineni:1980} analyzes the connection between the true and the imperfect label in the multi-class case, but only for the maximum a posteriori classifier. We extend the latter result to an arbitrary classifier and use it to derive a new C-bound with imperfect labels. To the best of our knowledge, the majority vote classifier has not yet been studied in the presence of imperfect labels.
In this paper, our theoretical development has a particular focus on semi-supervised learning. While there exists theoretical analysis of graph-based \citep{El:2009} and clustering \citep{Rigollet:2007,Maximov:2018} approaches, little attention is given to a self-learning algorithm \citep{Tur:2005}. A common approach is to perform self-learning with a fixed threshold; another method is to control the number of pseudo-labeled examples by curriculum learning \citep{Cascantebonilla:2020}. We show that this threshold can be effectively found at every iteration as a trade-off between the number of pseudo-labeled examples and the bounded transductive error evaluated on them.
\section{Framework and Definitions} \label{sec:framework}
We consider multi-class classification problems with an input space $\mathcal{X}\subset \R^d$ and an output space $\mathcal{Y}=\{1,\dots,K\}$, $K\geq 2$. We denote by $\mbf{X}=(X_1,\ldots,X_d)\in\mathcal{X}$ (resp. $Y\in\mathcal{Y}$) an input (resp. output) random variable. Considering the semi-supervised framework, we assume an available set of labeled training examples $\mathrm{Z}_{\mathcal{L}}=\{(\mathbf{x}_i,y_i)\}_{i=1}^l\in(\mathcal{X}\times\mathcal{Y})^l$, identically and independently distributed (i.i.d.) with respect to a fixed yet unknown probability distribution $P(\mbf{X}, Y)$ over $\mathcal{X}\times\mathcal{Y}$, and an available set of unlabeled training examples $\mathrm{X}_{\sss\mathcal{U}} = \{\mathbf{x}_i\}_{i=l+1}^{l+u}\in \mathcal{X}^u$ is supposed to be drawn i.i.d. from the marginal distribution $P(\mathbf{X})$, over the domain $\mathcal{X}$. Further, we denote by $\mathbf{0}_K$ the zero vector of size $K$, $\mathbf{0}_{K,K}$ is the zero matrix of size $K\times K$ and $n:=l+u$.
In this work, a fixed class of classifiers $\mathcal{H}=\{h | h:\mathcal{X} \rightarrow \mathcal{Y}\}$, called the \emph{hypothesis space}, is considered and defined without reference to the training set. Over $\mathcal{H}$, two probability distributions are introduced: the prior $P$ and the posterior $Q$ that are defined respectively before and after observing the training set. We focus on two classifiers: the \emph{$Q$-weighted majority vote classifier} (also called the Bayes classifier)\footnote{For the sake of brevity, we will tend to use the latter name, which should not be confused with other learning paradigms based on the Bayesian inference, e.g., the Bayesian statistics.} defined for all $\mathbf{x}\in \mathcal{X}$ as: \begin{equation} \label{eq:Bayes-classifier-multi-again} B_Q(\mathbf{x}):= \argmax_{c\in\{1,\ldots,K\}}\left[\E_{h\sim Q}\I{h(\mathbf{x})=c}\right], \end{equation} and, the stochastic \emph{Gibbs classifier} $G_Q$ that for every $\mathbf{x}\in\mathcal{X}$ predicts the label using a randomly chosen classifier $h\in\mathcal{H}$ according to $Q$. The former one represents a class of learning methods, where the predictions of hypotheses are aggregated using the majority vote rule scheme, while the latter one is often used to analyze the behavior of the Bayes classifier.
The goal of learning is formulated as to choose a posterior distribution $Q$ over $\mathcal{H}$ based on the training set $\mathrm{Z}_{\mathcal{L}}\cup \mathrm{X}_{\sss\mathcal{U}}$ such that the classifier $B_Q$ will have the smallest possible error value. As opposed to works of \citet{Derbeko:2004,Begin:2014,Feofanov:2019}, who considered the deterministic case where for each unlabeled example there is one and only one possible label, in this paper we consider the more general \emph{probabilistic} case assuming possibility of multiple outcomes for each example.
To measure confidence of the majority vote classifier in its prediction, the notions of {class votes} and {margin} are further considered.
Given an observation $\mathbf{x}$, we define a vector of \emph{class votes} $\mathbf{v}_\mathbf{x} = (v_Q(\mathbf{x}, c))^K_{c=1}$ where the $c$-th component corresponds to the total vote given to the class $c$: \begin{equation*} \label{eq:class-vote} v_Q(\mathbf{x}, c) := \E_{h\sim Q}\I{h(\mathbf{x})=c} = \sum_{h: h(\mathbf{x})=c} Q(h). \end{equation*}
In practice, the vote $v_Q(\mbf{x}, c)$ can be regarded as an estimation of the posterior probability $P(Y\!=\!c|\mbf{X}\!=\!\mbf{x})$; a large value indicates high confidence of the classifier that the true label of $\mathbf{x}$ is $c$. \\
Given an observation $\mathbf{x}$, its \emph{margin} is defined in the following way: \begin{align} M_Q(\mathbf{x}, y) &:= \E_{h\sim Q}\I{h(\mathbf{x})=y} - \max_{\substack{{c\in\mathcal{Y}}\\{c\neq y}}} \E_{h\sim Q}\I{h(\mathbf{x})=c} = v_Q(\mbf{x},y) - \max_{\substack{{c\in\mathcal{Y}}\\{c\neq y}}} v_Q(\mbf{x},c).\label{def:margin} \end{align} The margin measures a gap between the vote of the true class and the maximal vote among all other classes. If the value is strictly positive for an example $\mbf{x}$, then $y$ will be the output of the majority vote, so the example will be correctly classified.
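To fix ideas, the following Python snippet is a minimal sketch (our own toy illustration; the ensemble weights and predictions are arbitrary) of how the class votes $v_Q(\mbf{x},c)$ and the margin $M_Q(\mbf{x},y)$ are computed for a single observation.
\begin{verbatim}
import numpy as np

# Toy ensemble: 5 hypotheses with posterior weights Q, K = 3 classes.
Q = np.array([0.3, 0.25, 0.2, 0.15, 0.1])   # posterior distribution over hypotheses
preds = np.array([0, 0, 1, 0, 2])           # h_1(x), ..., h_5(x) for one example x
K = 3

# Class votes v_Q(x, c): total weight of hypotheses voting for class c.
votes = np.array([Q[preds == c].sum() for c in range(K)])   # [0.7, 0.2, 0.1]

y_true = 0
# Margin M_Q(x, y): vote of the true class minus the best competing vote.
margin = votes[y_true] - np.max(np.delete(votes, y_true))   # 0.7 - 0.2 = 0.5
\end{verbatim}
Since the margin is strictly positive here, the majority vote $B_Q$ classifies this example correctly.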
\section{Probabilistic Transductive Bounds and Their Application} \label{sec:tr-study}
In this section, we derive guarantees for the multi-class majority vote classifier in the transductive setting \citep{Vapnik:1982,Vapnik:1998}, i.e., when the error is evaluated on the unlabeled set $\mathrm{X}_{\sss\mathcal{U}}$ only. The proposed bound assumes that the majority vote classifier makes mistake on low class votes and thereby use votes as indicators of confidence. Then, we propose an application for generic self-learning where the threshold is based on the bound minimization.
\subsection{Transductive conditional risk} \label{sec:tr-bound-theory}
At first, we show how to upper bound the risk evaluated conditionally to the values of the true and the predicted class. Given a classifier $h$, for each class pair $(i,j)\in\{1,\dots,K\}^2$ such that $i\not= j,$ the \emph{transductive conditional risk} is defined as follows: $$
R_\mathcal{U}(h,i,j) := \frac{1}{u_i} \sum_{\mathbf{x}\in \mathrm{X}_{\sss\mathcal{U}}}P(Y=i|X=\mathbf{x}) \I{h(\mathbf{x}) = j}, $$
where $u_i = \sum_{\mathbf{x}\in \mathrm{X}_{\sss\mathcal{U}}} P(Y=i|X=\mathbf{x})$ is the expected number of unlabeled observations from the class $i\in\{1,\ldots,K\}$. The value of $R_\mathcal{U}(h,i,j)$ indicates the expected proportion of unlabeled examples that are classified to the class $j$ being from the class $i$. We call $R_\mathcal{U}(B_Q,i,j)$ as the \emph{transductive Bayes conditional risk}. In the similar way, the \emph{transductive Gibbs conditional risk} is defined for all $(i,j)\in\{1,\dots,K\}^2,\ i\neq j$ by: $$ R_\mathcal{U}(G_Q,i,j) := \E_{h\sim Q} R_\mathcal{U}(h,i,j). $$ Although the Gibbs classifier is stochastic, its error is defined in expectation over $Q$. In other words, the Gibbs conditional risk represents the $Q$-weighted average conditional risk of hypotheses $h\in\mathcal{H}$.
In addition, we define the transductive \textit{joint} Bayes conditional risk for a threshold vector $\bm{\theta}\in[0,1]^K$, for $(i,j)\in\{1,\dots,K\}^2,\ i\neq j$, as follows: \begin{align}
R_{\mathcal{U}\wedge\bm{\theta}}(B_Q,i,j) &:= \frac{1}{u_i} \sum_{\mathbf{x}\in \mathrm{X}_{\sss\mathcal{U}}} P(Y=i|X=\mathbf{x}) \I{B_Q(\mathbf{x}) = j}\I{v_Q(\mathbf{x},j)\geq \theta_j}. \label{def:joint-cond-risk} \end{align} If the Bayes classifier makes mistakes, i.e., outputs the class $j$ when the true class is $i$, on the examples with low values of $v_Q(\mbf{x},j)$, then the joint risk computes the probability to make the conditional error on confident observations when a large enough $\theta_{j}$ is set with respect to the distribution of $v_Q(\mbf{x},j)$.
The following Lemma \ref{lem:connection-Gibbs-Bayes-multi} connects the conditional Gibbs risk and the joint Bayes conditional risk by considering a conditional Bayes error regarding a certain class vote.
\begin{lem} \label{lem:connection-Gibbs-Bayes-multi}
For $c\in\{1,\ldots,K\}$, let $\Gamma_{c} = \{\gamma_{c}\in[0,1]|\ \exists\ \mathbf{x}\in\mathrm{X}_{\sss\mathcal{U}}: \gamma_{c} = v_Q(\mathbf{x},c)\}$ be the set of unique votes for the unlabeled examples to the class $c$. Let enumerate its elements such that they form an ascending order: \[ \gamma^{(1)}_c \leq \gamma^{(2)}_c \leq \dots \leq \gamma^{(N_c)}_c, \]
where $N_c := |\Gamma_{c}|$.
Denote $b_{i,j}^{(t)} := \frac{1}{u_i}\sum_{\mathbf{x}\in\mathrm{X}_{\sss\mathcal{U}}} P(Y=i|X=\mathbf{x}) \I{B_Q(\mathbf{x})=j}\I{v_Q(\mathbf{x},j)=\gamma^{(t)}_j}$. \\Then, for all $(i,j) \in \{1,\ldots,K\}^2$: \begin{align} R_\mathcal{U}(G_Q,i,j) &\geq K_{i,j}:=\sum_{t=1}^{N_j} b^{(t)}_{i,j}\gamma^{(t)}_j,
\label{eq:lemma:gibbs:multi}\\ R_\mathcal{U\wedge\bm{\theta}}(B_Q,i,j) &= \sum_{t=k_j+1}^{N_j} b_{i,j}^{(t)},\label{eq:lemma:bayes:multi} \end{align} where
$k_j = \max\{t|\gamma^{(t)}_j< \theta_j\}$ with $\max(\emptyset)=0$ by convention. \end{lem} The proof is provided in Appendix \ref{AppendixProofLemma}.
Following Lemma \ref{lem:connection-Gibbs-Bayes-multi}, we derive a bound on the Bayes conditional risk using the class vote distribution.
\begin{thm} \label{thm:tr-bound-bayes-multi} Let $B_Q$ be the $Q$-weighted majority vote classifier defined by Eq. \eqref{eq:Bayes-classifier-multi-again}. Then for any $Q$, for all $\boldsymbol{\theta}\in[0,1]^K$, for all $(i,j) \in \{1,\ldots,K\}^2$ we have:
\begin{align} R_{\mathcal{U}\wedge\bm{\theta}}(B_Q,i,j) &\leq \inf_{\gamma\in[\theta_j,1]}\left\{I^{(\leq,<)}_{i,j}(\theta_j, \gamma) + \frac{1}{\gamma}\floor*{K_{i,j}-M_{i,j}^<(\gamma)+M_{i,j}^<(\theta_j)}_+\right\},\label{eq:tr-bound-joint-bayes-multi}\tag{TB$_{i,j}$} \end{align} where \begin{itemize}
\item $K_{i,j} = \frac{1}{u_i} \sum_{\mathbf{x}\in X_\mathcal{U}} P(Y=i|X=\mathbf{x})v_Q(\mathbf{x},j)\I{B_Q(\mathbf{x})=j}$ is the transductive Gibbs conditional risk evaluated on the examples for which the majority vote class is $j$,
\item $I^{(\triangleleft_1,\triangleleft_2)}_{i,j}(s_1, s_2) = \frac{1}{u_i}\sum_{\mathbf{x}\in\mathrm{X}_{\sss\mathcal{U}}}P(Y=i|X=\mathbf{x})\I{s_1\triangleleft_1 v_Q(\mathbf{x},j) \triangleleft_2 s_2}, (\triangleleft_1,\triangleleft_2)\in\{<,\leq\}^2$ is the expected proportion of unlabeled examples from the class $i$ whose vote $v_Q(\mbf{x}, j)$ lies between $s_1$ and $s_2$ (in particular, in the interval $[\theta_j, \gamma)$ for $I^{(\leq,<)}_{i,j}(\theta_j, \gamma)$),
\item $M_{i,j}^<(s) = \frac{1}{u_i} \sum_{\mathbf{x}\in\mathrm{X}_{\sss\mathcal{U}}}P(Y=i|X=\mathbf{x})v_Q(\mathbf{x},j)\I{v_Q(\mathbf{x},j)<s}$ is the average of the $j$-votes in the class $i$ that are less than $s$.
\end{thm} \begin{proof}
We would like to find an upper bound for the joint Bayes conditional risk. Hence, for all $(i,j) \in \{1,\ldots,K\}^2$, for all $\bm{\theta}\in[0,1]^K$, we consider the case when the mistake is maximized. Then, using Lemma \ref{lem:connection-Gibbs-Bayes-multi}: \begin{equation} \label{eq:R_U_t_B_i_j} R_\mathcal{U\wedge\bm{\theta}}(B_Q,i,j) = \sum_{t=k_j+1}^{N_j} b_{i,j}^{(t)} \leq \max_{b^{(1)}_{i,j},\dots, b^{(N_j)}_{i,j}} \sum_{t=k_j+1}^{N_j} b_{i,j}^{(t)}, \end{equation}
with $k_j=\max\{t|\gamma^{(t)}_j< \theta_j\}\I{\{t|\gamma^{(t)}_j< \theta_j\} \not= \emptyset}$.
Let $B^{(t)}_{i,j}=\sum_{\mathbf{x}\in\mathrm{X}_{\sss\mathcal{U}}}P(Y\!=\!i|X\!=\!\mathbf{x})\I{v_Q(\mathbf{x},j)=\gamma^{(t)}_j}/u_i$. Then, it can be noticed that $0 \leq b_{i,j}^{(t)} \leq B^{(t)}_{i,j}$. Remember that $K_{i,j}$ can also be written as $\sum_{t=1}^{N_j} b^{(t)}_{i,j}\gamma^{(t)}_j$. Hence the bound defined by Eq. \eqref{eq:R_U_t_B_i_j} should satisfy the following linear program :
\begin{align} \max_{b^{(1)}_{i,j},\dots, b^{(N_j)}_{i,j}}\ &\sum_{t=k_j+1}^{N_j} b^{(t)}_{i,j}\label{eq:linear-program-multi}\\ \text{s.t. }\ \ \ &\forall t,\ 0 \leq b_{i,j}^{(t)} \leq B^{(t)}_{i,j} \text{ and }\sum_{t=1}^{N_j} b^{(t)}_{i,j}\gamma^{(t)}_j= K_{i,j}\nonumber. \end{align} The solution of \eqref{eq:linear-program-multi} can be solved analytically and it is attained for: \begin{equation} \label{eq:th:5}
b^{(t)}_{i,j}=
\min\left(B^{(t)}_{i,j},\floor*{\frac{1}{\gamma^{(t)}_j}(K_{i,j} -\sum_{k_j<w<t} \gamma^{(w)}_j B^{(w)}_{i,j})}_+\right)\I{t> k_j}.
\end{equation} For the sake of a better presentation, the proof of this solution is deferred to the appendix \ref{AppendixProofLemma}, Lemma~\ref{lem:sol-lin-prog}. Further, we can notice that, for all $(i,j) \in \{1,\ldots,K\}^2$, $$\sum_{k_j<w<t} \gamma^{(w)}_j B^{(w)}_{i,j}=M_{i,j}^<(\gamma_{j}^{(t)})-M_{i,j}^<(\theta_j).$$
Let $p=\max\{t|K_{i,j}-M_{i,j}^<(\gamma_j^{(t)})+M_{i,j}^<(\theta_j)>0\}$. Then, Eq. \eqref{eq:th:5} can be re-written as follows: \begin{equation} \label{eq:th:6} b^{(t)}_{i,j} = \begin{cases}
0 & t\leq k_j\\
B^{(t)}_{i,j} & k_j+1\leq t< p\\
\frac{1}{\gamma_j^{(p)}}(K_{i,j}-M_{i,j}^<(\gamma_j^{(p)})+M_{i,j}^<(\theta_j)) & t=p\\
0 & t>p. \end{cases} \end{equation}
Notice that $\sum_{t=k_j+1}^{p-1} B_{i,j}^{(t)} = I_{i,j}^{(\leq,<)}(\theta_j, \gamma_j^{(p)})$. Using this fact as well as Eq. \eqref{eq:th:6}, we infer: \[
R_\mathcal{U\wedge\bm{\theta}}(B_Q,i,j) \leq I_{i,j}^{(\leq,<)}(\theta_j, \gamma_j^{(p)}) + \frac{1}{\gamma_j^{(p)}}(K_{i,j}-M_{i,j}^<(\gamma_j^{(p)})+M_{i,j}^<(\theta_j)). \] Consider the function $$\gamma \mapsto U_{i,j}(\gamma) := I_{i,j}^{(\leq,<)}(\theta_j, \gamma) + \frac{1}{\gamma}\floor*{K_{i,j}-M_{i,j}^<(\gamma)+M_{i,j}^<(\theta_j)}_+.$$ To prove the theorem, it remains to verify that, for all $(i,j) \in \{1,\ldots,K\}^2$, for all $\gamma \in[\theta_j,1],\ U_{i,j}(\gamma_j^{(p)})\leq U_{i,j}(\gamma)$. For this, consider $\gamma_j^{(w)}$ with $w\in\{1,\dots,N_j\}$.
If $w > p$, then $U_{i,j}(\gamma_j^{(p)})\leq I_{i,j}^{(\leq,\leq)}(\theta_j, \gamma_j^{(p)})\leq U_{i,j}(\gamma_j^{(w)}).$
If $w < p$, then \begin{align*}
U_{i,j}(\gamma_j^{(p)}) - U_{i,j}(\gamma_j^{(w)})=& \sum_{t=w}^{p} b^{(t)}_{i,j} - \frac{1}{\gamma_j^{(w)}}\left(K_{i,j}-M_{i,j}^<(\gamma_j^{(w)})+M_{i,j}^<(\theta_j)\right)\\
=& \sum_{t=w}^{p} b^{(t)}_{i,j} - \frac{1}{\gamma_j^{(w)}}\left(\sum_{t=k_j+1}^{p} b^{(t)}_{i,j}\gamma_j^{(t)} - \sum_{t=k_j+1}^{w-1} \gamma^{(t)}_j b^{(t)}_{i,j}\right) \\
=& \frac{1}{\gamma_j^{(w)}}\left(\sum_{t=w}^{p} b^{(t)}_{i,j}\gamma_j^{(w)} - \sum_{t=w}^{p} b^{(t)}_{i,j}\gamma_j^{(t)}\right)\leq 0. \end{align*} which completes the proof. \end{proof} Following this result, a transductive bound for the joint Bayes conditional risk can be found by arranging the class votes in an ascending order and considering the linear program \eqref{eq:linear-program-multi}, where the connection with the Gibbs classifier is used as a linear constraint. Furthermore, as the bound is the infimum of the function $U_{i,j}$ on the interval $[\theta_j,1]$ it can be computed in practice without solving the linear program explicitly.
When $\theta_j=0$, a bound over the transductive Bayes conditional risk is directly obtained from \eqref{eq:tr-bound-joint-bayes-multi} by noticing that $M_{i,j}^<(0) = 0$ in this case: \begin{equation} \label{eq:tr-bound-bayes-multi} R_\mathcal{U}(B_Q,i,j)\leq \inf_{\gamma\in[0,1]}\left\{I^{(\leq,<)}_{i,j}(0, \gamma) + \frac{1}{\gamma}\floor*{K_{i,j}-M_{i,j}^<(\gamma)}_+\right\}. \end{equation}
We note that in the binary case \citep{Amini:2008}, the transductive Gibbs risk used inside the linear program can be bounded either by the PAC-Bayesian bound \citep{Derbeko:2004,Begin:2014} or by 1/2 (the worst possible error of the binary classifier), which allows to compute the transductive bound. In the multi-class case, the bound can be evaluated only by approximating the posterior probabilities. Once we estimate the posterior probability, $K_{i,j}$ and the transductive conditional Gibbs risk are also directly approximated.
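In practice, the infimum in Eq. \eqref{eq:tr-bound-joint-bayes-multi} can be computed by scanning the finite set of votes $\Gamma_j$. The Python sketch below is an illustration only, assuming that the posteriors $P(Y=i|\mbf{X}=\mbf{x})$ have already been approximated (e.g., by the votes of a classifier trained on the labeled set); the function and variable names are ours. It evaluates the bound for a fixed class pair $(i,j)$ and a threshold $\theta_j$.
\begin{verbatim}
import numpy as np

def transductive_bound_ij(post_i, votes_j, pred, j, theta_j):
    """Upper bound (TB_ij) on the joint conditional risk of predicting j
    for examples of true class i.

    post_i  : (u,) approximated posteriors P(Y=i | x) on the unlabeled set
    votes_j : (u,) votes v_Q(x, j)
    pred    : (u,) majority-vote predictions B_Q(x)
    """
    u_i = post_i.sum()
    # Gibbs-type term K_ij restricted to examples predicted as class j.
    K_ij = (post_i * votes_j * (pred == j)).sum() / u_i

    def I(lo, hi):  # expected proportion with lo <= v_Q(x, j) < hi
        return (post_i * ((votes_j >= lo) & (votes_j < hi))).sum() / u_i

    def M(s):       # average of the j-votes strictly below s
        return (post_i * votes_j * (votes_j < s)).sum() / u_i

    candidates = np.unique(np.append(votes_j[votes_j >= theta_j], 1.0))
    vals = [I(theta_j, g) + max(0.0, K_ij - M(g) + M(theta_j)) / g
            for g in candidates if g > 0]
    return min(vals) if vals else 0.0
\end{verbatim}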
\subsection{Transductive confusion matrix and transductive error rate}
In this section, based on Theorem \ref{thm:tr-bound-bayes-multi}, we derive bounds for two other error measures: the \textit{error rate} and the \textit{confusion matrix} \citep{Morvant:2012:ICML}. We define the transductive error rate and the transductive \emph{joint} error rate of the Bayes classifier $B_Q$ over the unlabeled set $\mathrm{X}_{\sss\mathcal{U}}$ given a vector $\bm{\theta} = (\theta_c)_{c=1}^K\in [0,1]^K$, as: \begin{align}
R_{\sss\mathcal{U}}(B_Q) &:= \frac{1}{u}\sum_{\mathbf{x}\in \mathrm{X}_{\sss\mathcal{U}}}\sum_{\substack{{c\in\{1,\dots,K\}}\\{c\neq B_Q(\mbf{x})}}}P(Y=c|\mbf{X}=\mbf{x}), \nonumber\\
R_{\mathcal{U}\wedge\bm{\theta}}(B_Q) &:= \frac{1}{u} \sum_{\mathbf{x}\in \mathrm{X}_{\sss\mathcal{U}}}
\sum_{\substack{{c\in\{1,\dots,K\}}\\{c\neq B_Q(\mbf{x})}}} P(Y=c|X=\mathbf{x})\I{v_Q(\mathbf{x}, B_Q(\mathbf{x}))\geq \theta_{B_Q(\mathbf{x})}}. \label{def:joint-bayes-err} \end{align}
Then, we define the \emph{transductive joint Bayes confusion matrix} for $\bm{\theta}\in[0,1]^K$, and $(i,j)\in\{1,\dots,K\}^2$, as follows: \begin{equation*}
\left[\mathbf{C}^\mathcal{U\wedge\bm{\theta}}_{h}\right]_{i,j} = \begin{cases}0 & i=j,\\ R_{\mathcal{U}\wedge\bm{\theta}}(h,i,j) & i\not=j.\end{cases} \end{equation*} The following proposition links the error rate with the joint confusion matrix:
\begin{prop} \label{rmk:joint-err-connection-conf-matrix}
Let $B_Q$ be the majority vote classifier. Given a vector $\bm{\theta}\in [0,1]^K$, for $\mathbf{p} := \{u_i/u\}_{i=1}^K$, where $u_i = \sum_{\mathbf{x}\in \mathrm{X}_{\sss\mathcal{U}}} P(Y=i|X=\mathbf{x})$, we have:
\begin{align} R_{\mathcal{U}\wedge\bm{\theta}}(B_Q) = \norm{\T{\left(\mathbf{C}^\mathcal{U\wedge\bm{\theta}}_{B_Q}\right)}\, \mathbf{p}}_1. \label{errorConfMatrixJoint} \end{align}
\end{prop} \begin{proof}
To prove Eq. \eqref{errorConfMatrixJoint}, combine the definition of transductive joint Bayes conditional risk given in Eq. \eqref{def:joint-cond-risk} and Eq. \eqref{def:joint-bayes-err} as follows: \begin{align*} R_{\mathcal{U}\wedge\bm{\theta}}(B_Q)
&= \frac{1}{u} \sum_{i=1}^K \sum_{\substack{j=1\\j\not= i}}^K \sum_{\mathbf{x}\in \mathrm{X}_{\sss\mathcal{U}}} P(Y=i|X=\mathbf{x})\I{B_Q(\mathbf{x})=j}\I{v_Q(\mathbf{x},j)\geq \theta_j} \\
&= \sum_{i=1}^K \frac{u_i}{u} \sum_{\substack{j=1\\j\not= i}}^K R_{\mathcal{U}\wedge\bm{\theta}}(B_Q,i,j) = \norm{\T{\left(\mathbf{C}^\mathcal{U\wedge\bm{\theta}}_{B_Q}\right)}\, \mathbf{p}}_1. \end{align*} \end{proof}
From Theorem \ref{thm:tr-bound-bayes-multi}, we derive corresponding transductive bounds for the confusion matrix norm and the error rate of the Bayes classifier. To simplify notations, we introduce a matrix $\mathbf{U}_{\bm{\theta}}$ of size $K\times K$ with zeros on the main diagonal and the following $ (i,j)$-entries, $i \neq j$: \[ \left[\mathbf{U}_{\bm{\theta}}\right]_{i,j}:= \inf_{\gamma\in[\theta_j,1]}\left\{I^{(\leq,<)}_{i,j}(\theta_j, \gamma) + \frac{1}{\gamma}\floor*{(K_{i,j}-M_{i,j}^<(\gamma)+M_{i,j}^<(\theta_j))}_+\right\}, \] which corresponds to the transductive bound proposed in Theorem \ref{thm:tr-bound-bayes-multi}. \begin{cor} \label{cor:matrix-bound} For all $\boldsymbol{\theta}\in[0,1]^K$, we have: \begin{equation} \label{eq:cor-conf} \mnorm{\mathbf{C}^{\mathcal{U}\wedge\bm{\theta}}_{B_Q}} \leq \mnorm{\mathbf{U}_{\bm{\theta}}}.
\end{equation} Moreover, we have the following bound:
\begin{equation} \label{eq:cor-err} R_{\sss\mathcal{U}\wedge\bm{\theta}}(B_Q) \leq \norm{\T{\mathbf{U}_{\bm{\theta}}}\, \mathbf{p}}_1.
\end{equation}
where $\|.\|$ is the spectral norm; and $\mathbf{p} = \{u_i/u\}_{i=1}^K$, with $u_i = \sum_{\mathbf{x}\in \mathrm{X}_{\sss\mathcal{U}}} P(Y=i|X=\mathbf{x})$. \end{cor}
\begin{proof}
The confusion matrix $\mathbf{C}^{\mathcal{U}\wedge\bm{\theta}}_{B_Q}$ is always non-negative, and from Theorem \ref{thm:tr-bound-bayes-multi}, each of its entries is smaller than the corresponding entry of $\mathbf{U}_{\bm{\theta}}$. Hence, from the property of spectral norm for two positive matrices $\mathbf{A}$ and $\mathbf{B}$~: \[
\mathbf{0}_{K,K}\preceq \mathbf{A} \preceq \mathbf{B} \Rightarrow \|\mathbf{A}\|\leq \|\mathbf{B}\|, \] where $\mathbf{A} \preceq \mathbf{B}$ denotes that each element of $\mathbf{A}$ is smaller than the corresponding element of $\mathbf{B}$, we deduce Eq. \eqref{eq:cor-conf}.
With the same computations, we observe the following inequality: \[ \T{\left(\mathbf{C}^{\mathcal{U}\wedge\bm{\theta}}_{B_Q}\right)}\, \mathbf{p} \leq \T{\mathbf{U}_{\bm{\theta}}}\, \mathbf{p}. \] Elements of the left vector are non-negative. Hence the inequality holds for the $\ell_1$-norm, and taking into account Proposition \ref{rmk:joint-err-connection-conf-matrix} we infer: \[ R_{\mathcal{U}\wedge\bm{\theta}}(B_Q) = \norm{\T{\left(\mathbf{C}^{\mathcal{U}\wedge\bm{\theta}}_{B_Q}\right)}\, \mathbf{p}}_1 \leq \norm{\T{\mathbf{U}_{\bm{\theta}}}\, \mathbf{p}}_1. \] \end{proof}
Note that the transductive bound of the Bayes error rate is obtained from Eq. \eqref{eq:cor-err} by taking $\bm\theta$ as the zero vector $\mathbf{0}_K$: \begin{equation}
\label{eq:TB}\tag{TB}
R_{\mathcal{U}}(B_Q) \leq \norm{\T{\mathbf{U}_{\mbf{0}_K}}\, \mathbf{p}}_1. \end{equation}
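Given the pairwise bounds, the matrix $\mathbf{U}_{\bm{\theta}}$ and the error-rate bound of Eq. \eqref{eq:cor-err} can be assembled as in the following sketch, which reuses the hypothetical routine \texttt{transductive\_bound\_ij} from the previous sketch and again relies on approximated posteriors.
\begin{verbatim}
import numpy as np

def error_rate_bound(post, votes, pred, theta):
    """Estimate of || U_theta^T p ||_1 from Corollary 1.

    post  : (u, K) approximated posteriors P(Y=i | x)
    votes : (u, K) class votes v_Q(x, j)
    pred  : (u,)   majority-vote predictions B_Q(x)
    theta : (K,)   threshold vector
    """
    u, K = post.shape
    p = post.sum(axis=0) / u            # expected class proportions u_i / u
    U = np.zeros((K, K))
    for i in range(K):
        for j in range(K):
            if i != j:
                U[i, j] = transductive_bound_ij(post[:, i], votes[:, j],
                                                pred, j, theta[j])
    return np.abs(U.T @ p).sum()        # l1-norm of U^T p
\end{verbatim}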
\subsection{Tightness Guarantees} In this section, we assume that the Bayes classifier makes most of its error on unlabeled examples with a low prediction vote, i.e., class votes can be considered as indicators of confidence. In the following proposition, we show that the bound becomes tight under certain conditions. We remind that $\Gamma_{j}=\{\gamma_j^{(t)}\}$ is the set of unique votes for the unlabeled examples to the class $j$, and $b_{i,j}^{(t)}$ corresponds to the Bayes conditional risk on the examples with the vote $\gamma_j^{(t)}$ (see Lemma \ref{lem:connection-Gibbs-Bayes-multi} for more details).
\begin{prop} \label{prop:tight-bayes-multi}
Let $\Gamma_j^\tau:=\{\gamma_j^{(t)}\in\Gamma_j|b^{(t)}_{i,j} > \tau\}$, where $\tau\in[0,1]$ is a given threshold. If there exists a lower bound $C\in[0,1]$ such that for all $\gamma\in\Gamma_j^\tau$:
\begin{align}
\sum_{\mathbf{x}\in\mathrm{X}_{\sss\mathcal{U}}}P(Y=i|X=\mathbf{x})\I{B_Q(\mathbf{x})=j}\I{v_Q(\mathbf{x},j)<\gamma} &\geq C \sum_{\mathbf{x}\in\mathrm{X}_{\sss\mathcal{U}}}P(Y=i|X=\mathbf{x})\I{v_Q(\mathbf{x},j)<\gamma}, \label{prop-multi:initial-cond} \end{align} then, the following inequality holds: \begin{equation*}
\left[\mathbf{U}_{\mathbf{0}_K}\right]_{i,j} - R_\mathcal{U}(B_Q,i,j) \leq \frac{1-C}{C}R_{\mathcal{U}}(B_Q,i,j) + r_{i,j}\left(\frac{1}{\gamma^*_j}-1\right), \end{equation*}
where \begin{itemize}
\item $\gamma^*_j := \sup\{\gamma_j^{(t)}\in\Gamma_j^\tau\}$ is the highest vote which satisfies $b^{(t)}_{i,j} > \tau$, and
\item $r_{i,j} := \sum_{\mathbf{x}\in\mathrm{X}_{\sss\mathcal{U}}}P(Y=i|X=\mathbf{x})v_Q(\mathbf{x},j)\I{B_Q(\mathbf{x})=j}\I{v_Q(\mathbf{x},j)>\gamma^*_j}/u_i$ corresponds to the average of the $j$-votes in the class $i$ that are greater than $\gamma^*_j$ and on which the Bayes classifier makes the conditional mistake.
\begin{proof}
First, it can be proved that for all $\mathbf{x}\in\mathrm{X}_{\sss\mathcal{U}}$, for all $(i,j)\in \{1,\ldots,K\}^2$, the following inequality holds: \begin{multline} \label{eq:prop-multi:1}
R_\mathcal{U}(B_Q,i,j) \geq \frac{1}{u_i}\sum_{\mathbf{x}\in\mathrm{X}_{\sss\mathcal{U}}}P(Y=i|X=\mathbf{x})\I{B_Q(\mathbf{x})=j}\I{v_Q(\mathbf{x},j)<\gamma^*} \\+ \frac{1}{\gamma^*}\floor*{\floor{K_{i,j}-M_{i,j}^<(\gamma^*)}_+ - r_{i,j}}_+ + r_{i,j}, \end{multline}
where $\gamma^* := \sup\{\gamma\in\Gamma_j|\sum_{\mathbf{x}\in\mathrm{X}_{\sss\mathcal{U}}}P(Y=i|X=\mathbf{x})\I{B_Q(\mathbf{x})=j}\I{v_Q(\mathbf{x},j)=\gamma}/u_i> \tau\}$. We prove this result in Lemma \ref{lem:lem-for-proposition} in Appendix. Now, taking into account Eq. \eqref{eq:prop-multi:1} and Eq. \eqref{prop-multi:initial-cond} we deduce the following: \begin{align}
R_\mathcal{U}(B_Q,i,j) \geq & \frac{C}{u_i}\sum_{\mathbf{x}\in\mathrm{X}_{\sss\mathcal{U}}}P(Y=i|X=\mathbf{x})\I{v_Q(\mathbf{x},j)<\gamma^*} + \frac{1}{\gamma^*}\floor*{\floor{K_{i,j}-M_{i,j}^<(\gamma^*)}_+ - r_{i,j}}_+ + r_{i,j} \nonumber\\ =& C\,I^{(\leq,<)}_{i,j}(0, \gamma^*) + \frac{1}{\gamma^*}\floor*{\floor{K_{i,j}-M_{i,j}^<(\gamma^*)}_+ - r_{i,j}}_+ + r_{i,j}. \label{eq:prop-multi:4} \end{align} By definition of $\mathbf{U}_{\mathbf{0}_K}$ we have, for all $(i,j)\in \{1,\ldots,K\}^2$, \begin{equation} \label{eq:prop-multi:5} \left[\mathbf{U}_{\mathbf{0}_K}\right]_{i,j} \leq I^{(\leq,<)}_{i,j}(0, \gamma^*) + \frac{1}{\gamma^*}\floor*{K_{i,j}-M_{i,j}^<(\gamma^*)}_+. \end{equation} Subtracting Eq. \eqref{eq:prop-multi:4} from Eq. \eqref{eq:prop-multi:5} we obtain: \begin{multline*} \left[\mathbf{U}_{\mathbf{0}_K}\right]_{i,j} - R_\mathcal{U}(B_Q,i,j) \leq (1-C)I^{(\leq,<)}_{i,j}(0, \gamma^*) \\+ \frac{1}{\gamma^*}\left(\floor*{K_{i,j}-M_{i,j}^<(\gamma^*)}_+ - \floor*{\floor{K_{i,j}-M_{i,j}^<(\gamma^*)}_+ - r_{i,j}}_+\right)-r_{i,j}. \end{multline*} We can notice that for all $a,b\in\mathbb{R}^+:\ b-\floor{b-a}_+\leq a$. Then, we have: \begin{equation} \label{eq:prop-multi:6} \left[\mathbf{U}_{\mathbf{0}_K}\right]_{i,j} - R_\mathcal{U}(B_Q,i,j) \leq (1-C)I^{(\leq,<)}_{i,j}(0, \gamma^*) + r_{i,j}\left(\frac{1}{\gamma^*}-1\right). \end{equation} Also, from Eq. \eqref{eq:prop-multi:4} one can derive: \begin{align} I^{(\leq,<)}_{i,j}(0, \gamma^*)&\leq \frac{1}{C}\left(R_\mathcal{U}(B_Q,i,j) - \frac{1}{\gamma^*}\floor*{\floor{K_{i,j}-M_{i,j}^<(\gamma^*)}_+ - r_{i,j}}_+ - r_{i,j} \right) \leq \frac{R_\mathcal{U}(B_Q,i,j)}{C}\label{eq:prop-multi:7}. \end{align}
Taking into account Eq. \eqref{eq:prop-multi:6} and Eq. \eqref{eq:prop-multi:7}, we infer: \[ \left[\mathbf{U}_{\mathbf{0}_K}\right]_{i,j} - R_\mathcal{U}(B_Q,i,j) \leq \frac{1-C}{C} R_\mathcal{U}(B_Q,i,j) + r_{i,j}\left(\frac{1}{\gamma^*}-1\right). \] \end{proof}
This proposition states that if Eq. \eqref{prop-multi:initial-cond} holds, the difference between the transductive Bayes conditional risk and its upper bound does not exceed an expression that depends on a constant $C$ and a threshold $\tau$. When the majority vote classifier makes most of its mistakes for the class $j$ on observations with a low value of $v_Q(\mathbf{x}, j)$, then, with a reasonable choice of $\tau$, both $r_{i,j}$ and $\gamma^*_j$ decrease. This also implies that Eq. \eqref{prop-multi:initial-cond} accepts a high value of $C$ (close to 1), and the bound becomes tighter. The closer our framework is to the deterministic one, the closer $r_{i,j}$ will be to 0 (in the deterministic case, $\tau$ can be set to 0, so $r_{i,j}$ will be 0), so the bound becomes tight. Although our bound is tight only under the condition that mistakes are made on low prediction votes, the assumption is reasonable from the theoretical point of view: if for some observation the Bayes classifier gives a relatively high vote to the class $j$, we expect that the observation is most probably from this class and not from the class $i$. From the practical point of view, this assumption requires the learning model to be well calibrated \citep{Gebel:2009}.
\subsection{Multi-class Self-learning Algorithm} \label{sec:msla}
In this section, we describe an application of results obtained in Section \ref{sec:tr-bound-theory} for learning on partially-labeled data. For this, we consider a self-learning algorithm \citep{Amini:15}, which is a semi-supervised approach that performs augmentation of the labeled set by pseudo-labeling unlabeled examples.
The algorithm starts from a supervised base classifier initially trained on available labeled examples. Then, it iteratively assigns pseudo-labels at each iteration to those unlabeled examples that have a confidence score above a certain threshold. The pseudo-labeled examples are then included in the training set, and the classifier is retrained. The process is repeated until no examples for pseudo-labeling are left.
The central question in applying the self-learning algorithm in practice is how to choose the threshold. Intuitively, the threshold can manually be set to a very high value, since only examples with a very high degree of confidence will be pseudo-labeled in this case. However, the confidence measure is biased by the small labeled set, so every iteration of self-learning may still induce an error and shift the decision boundary in the wrong direction. In addition, a large number of iterations makes the algorithm computationally expensive, which further motivates a careful choice of the threshold.
To overcome this problem, we extend the strategy proposed by \citet{Amini:2008} to the multi-class setting. We consider the majority vote as the base classifier and the prediction vote as an indicator of confidence. Given a threshold vector $\bm{\theta}$, we introduce the \emph{conditional Bayes error rate} $R_{\mathcal{U}|\boldsymbol{\theta}}(B_Q)$, defined in the following way:
R_{\mathcal{U}|\boldsymbol{\theta}}(B_Q) := \frac{R_{\mathcal{U}\wedge\boldsymbol{\theta}}(B_Q)}{\pi(v_Q(\mathbf{x},k)\geq\theta_k)}, \end{equation}
where $\pi(v_Q(\mathbf{x},k)\geq\theta_k):= \sum_{\mathbf{x}\in\mathrm{X}_{\sss\mathcal{U}}}\mathds{1}_{v_Q(\mathbf{x},k)\geq\theta_k}/u$ and $k := B_Q(\mathbf{x})$. The numerator reflects the proportion of mistakes on the unlabeled set when the threshold is equal to $\boldsymbol{\theta}$, whereas the denominator computes the proportion of unlabeled observations whose vote for the predicted class is larger than the threshold. Thus, we propose to find the threshold that yields the minimal value of $R_{\mathcal{U}|\boldsymbol{\theta}}(B_Q)$, making a trade-off between the error induced by pseudo-labeling and the number of pseudo-labeled examples. In Algorithm \ref{alg:MSLA} we summarize our algorithm, which is further denoted by \texttt{MSLA}\footnote{The source code of the algorithm can be found at \url{https://github.com/vfeofanov/trans-bounds-maj-vote}.}.
To evaluate the transductive error, we bound the numerator of Eq. \eqref{eq:cond-bayes-error} by Corollary \ref{cor:matrix-bound}. However, the bound can practically be computed only with assumptions, since the posterior probabilities $P(Y=c|X=\mathbf{x})$ for unlabeled examples are not known. In this work, we approximate the posterior $P(Y=c|X=\mathbf{x})$ by $v_Q(\mathbf{x},c)$ of the base classifier trained on labeled examples only (the initial step of \texttt{MSLA}). Although this approximation is optimistic, by formulating the bound as probabilistic we keep some chances for other classes so the error of the supervised classifier can be smoothed. However, it must be borne in mind that the hypothesis space should be diverse enough so that the entropy of $(v_Q(\mbf{x}, c))_{c=1}^K$ would not be always zero, and the errors are made mostly on low prediction votes. In our experiments, as the base classifier we use the random forest \citep{Breiman:2001} that aggregates predictions from trees learned on different bootstrap samples. In Appendix \ref{sec:exp-prob-estim}, we validate the proposed approximation by empirically comparing it with the case when the posterior probabilities are set to $1/K$, i.e., when we treat all classes as equally probable.
\begin{algorithm}[ht!] \caption{Multi-class self-learning algorithm\,(MSLA)} \label{alg:MSLA} \begin{algorithmic} \State \State \textbf{Input:} \\ Labeled observations $\mathrm{Z}_{\mathcal{L}}$
\\ Unlabeled observations $\mathrm{X}_{\sss\mathcal{U}}$ \State \textbf{Initialisation:} \\A set of pseudo-labeled instances, $\mathrm{Z}_{\mathcal{P}}\leftarrow \emptyset$ \\A classifier $B_Q$ trained on $\mathrm{Z}_{\mathcal{L}}$ \Repeat
\State \textbf{1.} Compute the vote threshold $\bm{\theta^*}$ that minimizes the conditional Bayes error rate:
\begin{equation*}
\bm{\theta}^* = \argmin_{\bm{\theta}\in(0,1]^K} R_{\mathcal{U}|\bm{\theta}}(B_Q) .\tag{$\star$}
\end{equation*}
\State \textbf{2.} $S \leftarrow\{(\mathbf{x},y')|\mathbf{x}\in\mathrm{X}_{\sss\mathcal{U}};[v_Q(\mathbf{x},y')\geq\theta^*_{y'}]\wedge [ y'= \argmax_{c\in\{1,\dots,K\}}v_Q(\mathbf{x},c) ]\}$
\State \textbf{3.} $\mathrm{Z}_{\mathcal{P}}\leftarrow \mathrm{Z}_{\mathcal{P}} \cup S$, $\mathrm{X}_{\sss\mathcal{U}} \leftarrow \mathrm{X}_{\sss\mathcal{U}}\setminus S$
\State \textbf{4.} Learn a classifier $B_Q$ with the following loss function:
\[
\mathcal{L}(B_Q,Z_{\mathcal{L}},\mathrm{Z}_{\mathcal{P}}) = \frac{l+|\mathrm{Z}_{\mathcal{P}}|}{l}\mathcal{L}(B_Q,Z_{\mathcal{L}}) + \frac{l+|\mathrm{Z}_{\mathcal{P}}|}{|\mathrm{Z}_{\mathcal{P}}|}\mathcal{L}(B_Q,\mathrm{Z}_{\mathcal{P}})
\] \Until{$\mathrm{X}_{\sss\mathcal{U}}\text{ or }S \text{ are }\emptyset$} \State \textbf{Output:} The final classifier $B_Q$ \end{algorithmic} \end{algorithm}
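For illustration, the following Python sketch outlines the main loop of Algorithm \ref{alg:MSLA} with a random forest as the base classifier. It is only a simplified view of the pseudo-labeling mechanics under the assumption that class labels are encoded as $0,\dots,K-1$; the routine \texttt{select\_thresholds} (sketched after the following paragraph) and the omission of the re-weighted loss of step 4 are our own simplifications, not the exact released implementation.
\begin{verbatim}
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def msla(X_lab, y_lab, X_unlab, select_thresholds):
    """Simplified multi-class self-learning loop (illustrative only)."""
    clf = RandomForestClassifier(n_estimators=100).fit(X_lab, y_lab)
    X_l, y_l = X_lab, y_lab
    while len(X_unlab) > 0:
        votes = clf.predict_proba(X_unlab)            # v_Q(x, c)
        pred = votes.argmax(axis=1)                   # B_Q(x)
        # posteriors are approximated by the votes themselves
        theta = select_thresholds(votes, votes, pred)
        mask = votes[np.arange(len(pred)), pred] >= theta[pred]
        if not mask.any():
            break
        # move the confident examples to the pseudo-labeled set and retrain
        X_l = np.vstack([X_l, X_unlab[mask]])
        y_l = np.concatenate([y_l, pred[mask]])
        X_unlab = X_unlab[~mask]
        clf = RandomForestClassifier(n_estimators=100).fit(X_l, y_l)
    return clf
\end{verbatim}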
Similarly to the work of \citet{Amini:2008}, in practice, to find an optimal $\bm{\theta}^*$ we perform a grid search over the hypercube $(0,1]^K$. The same algorithm is used for computing the optimal $\gamma^*$ that provides the value of an upper bound for the conditional risk (see Theorem \ref{thm:tr-bound-bayes-multi}). In contrast to the binary self-learning, the direct grid search in the multi-class setting costs $O\left(R^K\right)$, where $R$ is the sampling rate of the grid. As \begin{align*}
R_{\mathcal{U}|\boldsymbol{\theta}}(B_Q)
&= \sum_{j=1}^K\frac{R_{\mathcal{U}\wedge\boldsymbol{\theta}}^{(j)}(B_Q)}{\sum_{c=1}^K\frac{1}{u}\sum_{\mathbf{x}\in\mathrm{X}_{\sss\mathcal{U}}}\mathds{1}_{v_Q(\mathbf{x},c)\geq\theta_c}\mathds{1}_{B_Q(\mathbf{x})=c}} \leq \sum_{j=1}^K\frac{R_{\mathcal{U}\wedge\boldsymbol{\theta}}^{(j)}(B_Q)}{\frac{1}{u}\sum_{\mathbf{x}\in\mathrm{X}_{\sss\mathcal{U}}}\mathds{1}_{v_Q(\mathbf{x},j)\geq\theta_j}\mathds{1}_{B_Q(\mathbf{x})=j}}\nonumber\\ &\leq \sum_{j=1}^K\frac{R_{\mathcal{U}\wedge\boldsymbol{\theta}}^{(j)}(B_Q)}{\pi\{(v_Q(\mathbf{x},j)\geq\theta_j)\land (B_Q(\mathbf{x})=j)\}}\label{eq:paralell-upp-bound} \tag{$\ast$}, \end{align*} where $R_{\mathcal{U}\wedge\boldsymbol{\theta}}^{(j)}(B_Q)=\sum_{i=1}^K u_iR_{\mathcal{U}\wedge\boldsymbol{\theta}}(B_Q,i,j)/u$, the sum might be minimized term by term, tuning independently each component of $\boldsymbol{\theta}$. This replaces the $K$-dimensional minimization task by $K$ tasks of 1-dimensional minimization.
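Concretely, the term-by-term minimization can be implemented as a one-dimensional grid search per class, as in the following sketch (again reusing the hypothetical \texttt{transductive\_bound\_ij} routine and the approximated posteriors; the grid and variable names are ours).
\begin{verbatim}
import numpy as np

def select_thresholds(post, votes, pred, grid=np.linspace(0.05, 1.0, 20)):
    """Per-class threshold minimizing the bounded conditional error rate."""
    u, K = post.shape
    u_class = post.sum(axis=0)             # expected class sizes u_i
    theta = np.ones(K)
    for j in range(K):
        best_t, best_val = 1.0, np.inf
        for t in grid:
            # bounded joint risk of predicting class j, summed over true classes i
            num = sum(u_class[i] / u *
                      transductive_bound_ij(post[:, i], votes[:, j], pred, j, t)
                      for i in range(K) if i != j)
            den = np.mean((pred == j) & (votes[:, j] >= t))
            if den > 0 and num / den < best_val:
                best_t, best_val = t, num / den
        theta[j] = best_t
    return theta
\end{verbatim}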
\section{Probabilistic C-Bound with Imperfect Labels} \label{sec:c-bound} The transductive bound \eqref{eq:TB} can be regarded as a first-order bound, since it is linearly dependent on the classifier' votes, so it does not take into account the correlation between hypotheses. In addition, despite its application for minimization of the error induced by self-learning, the obtained pseudo-labels may be still erroneous, and we do not know how to evaluate the classification error in this noisy case. In this section, we overcome these two issues by deriving a new probabilistic C-bound in the presence of imperfect labels.
\subsection{C-Bound in the Probabilistic Setting} \citet{Lacasse:2007} proposed to upper bound the Bayes error by taking into account the mean and the variance of the prediction margin, which, we recall Eq.~\eqref{def:margin}, is defined as $v_Q(\mbf{x},y) - \max_{\substack{{c\in\mathcal{Y}}\setminus\{y\}}} v_Q(\mbf{x},c)$. A similar result was obtained in a different context by \citet{Breiman:2001}. \citet{Laviolette:2017} extended this bound to the multi-class case.
Below, we derive their C-bound in the probabilistic setting. Now, we consider the \emph{generalization error} as an error measure, which is defined in the probabilistic setting
as follows: \begin{align*} R(B_Q) &
:=\E_{P(\mbf{X})}\sum_{\substack{{c\in\{1,\dots,K\}}\\{c\neq B_Q(\mbf{x})}}}P(Y=c|\mbf{X}=\mbf{x})
=\E_{P(\mbf{X})}[1-P(Y=B_Q(\mbf{x})|\mbf{X}=\mbf{x})].
\end{align*}
\begin{thm} \label{thm:prob-cbound}
Let $M$ be a random variable such that $[M|\mbf{X}=\mbf{x}]$ is a discrete random variable that is equal to the margin $M_Q(\mbf{x}, c)$ with probability $P(Y=c|\mbf{X}=\mbf{x})$, $c=\{1,\dots,K\}$.
Let $\mu^{M}_1$ and $\mu^{M}_2$ be the first and the second statistical moments of the random variable $M$, respectively. Then, for all choice of $Q$ on a hypothesis space $\mathcal{H}$, and for all distributions $P(\mbf{X})$ over $\mathcal{X}$ and $P(Y|\mbf{X})$ over $\mathcal{Y}$, such that $\mu^M_1>0$, we have: \begin{align}
\label{eq:prob-cbound}\tag{CB}
R(B_Q) \leq 1 - \frac{(\mu^M_1)^2}{\mu^M_2}. \end{align} \end{thm} \begin{proof}
At first, we show that $R(B_Q) = P(M\leq 0)$.
For a fixed $\mbf{x}$, one get:
\begin{align*}
P(M\leq 0|\mbf{X}=\mbf{x}) = \sum_{c=1}^K P(Y=c|\mbf{X}=\mbf{x})\I{M_Q(\mbf{x},c)\leq 0} = \sum_{\substack{{c\in\{1,\dots,K\}}\\{c\neq B_Q(\mbf{x})}}}P(Y=c|\mbf{X}=\mbf{x}).
\end{align*}
Applying the total probability law, we obtain:
\begin{align}
P(M\leq 0) &= \int_{\mathcal{X}} P(M\leq 0|\mbf{X}=\mbf{x}) P(\mbf{X}=\mbf{x})\diff\mbf{x}= \E_{P(\mbf{X})} P(M\leq 0|\mbf{X}=\mbf{x}) = R(B_Q). \label{eq:bayes-risk-via-prob-margins}
\end{align}
By applying the Cantelli-Chebyshev inequality (Lemma \ref{lem:cantelli-chebyshev} in Appendix), we deduce:
\begin{align}
P(M\leq 0) &\leq \frac{\mu^M_2 - (\mu^M_1)^2}{\mu^M_2 - (\mu^M_1)^2 + (\mu^M_1)^2 } = 1 - \frac{(\mu^M_1)^2}{\mu^M_2}. \label{eq:prob-M-less-0-bound}
\end{align}
Combining Eq. \eqref{eq:bayes-risk-via-prob-margins} and Eq. \eqref{eq:prob-M-less-0-bound} gives the bound. \end{proof}
Thus, the probabilistic C-bound makes it possible to bound the generalization error of the Bayes classifier when examples are provided with probabilistic labels. Note that when only one label is possible for every example, the bound reduces to the usual deterministic case.
The main advantage of the C-bound is the involvement of the second margin moment, which can be related to correlations between the hypotheses' predictions \citep{Lacasse:2007}.
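On a finite sample where the posteriors are approximated, the two margin moments and the bound \eqref{eq:prob-cbound} can be estimated as in the sketch below (our own illustration, in which the empirical distribution of the sample replaces $P(\mbf{X})$ and the matrix \texttt{post} stands for the approximated posteriors).
\begin{verbatim}
import numpy as np

def prob_c_bound(post, votes):
    """Probabilistic C-bound (CB) estimated on a finite sample.

    post  : (n, K) approximated posteriors P(Y=c | x)
    votes : (n, K) class votes v_Q(x, c)
    """
    n, K = votes.shape
    # margins M_Q(x, c) for every example and every candidate label c
    margins = np.empty((n, K))
    for c in range(K):
        others = np.delete(votes, c, axis=1)
        margins[:, c] = votes[:, c] - others.max(axis=1)
    mu1 = (post * margins).sum() / n        # first margin moment
    mu2 = (post * margins ** 2).sum() / n   # second margin moment
    return 1.0 - mu1 ** 2 / mu2 if mu1 > 0 else 1.0
\end{verbatim}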
\subsection{Mislabeling Error Model} \label{sec:mislab-error-model} The self-learning algorithm, which was introduced in Section \ref{sec:msla}, supplies the unlabeled examples with pseudo-labels that are potentially erroneous. In this section, we consider a mislabeling error model to explicitly take into account this issue.
We consider an imperfect output $\hat{Y}$, which has a different distribution from the true output $Y$. The label imperfection is summarized through the \emph{mislabeling matrix} $\mbf{P}=(p_{j,c})_{1\leq j,c\leq K}$, defined by: \begin{align} \label{eq:mislab-model}
P(\hat Y=j|Y=c) &:= p_{j,c} \quad\forall(j,c)\in \{1,\dots,K\}^2,
\end{align}
where $\sum_{j=1}^K p_{j,c} = 1$. Additionally, we assume that $\hat Y$ does not influence the true class distribution: $P(\mbf{X}|Y, \hat Y) = P(\mbf{X}|Y)$. This implies that \begin{align}
\label{eq:mislab-prob-transformation}
P(\hat Y=j|\mbf{X}=\mbf{x})=\sum_{c=1}^K p_{j,c}P(Y=c|\mbf{X}=\mbf{x}). \end{align}
This class-related model is a common approach to deal with label imperfection \citep{Chittineni:1980,Amini:2003,Natarajan:2013,Scott:2015}.
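For instance, with $K=3$ classes, Eq. \eqref{eq:mislab-prob-transformation} amounts to a matrix-vector product, as in the following small numerical sketch (the mislabeling matrix and the posterior vector are made-up values used only for illustration).
\begin{verbatim}
import numpy as np

# Mislabeling matrix: P[j, c] = P(Yhat = j | Y = c); each column sums to one.
P = np.array([[0.8, 0.1, 0.2],
              [0.1, 0.7, 0.1],
              [0.1, 0.2, 0.7]])
post_true = np.array([0.6, 0.3, 0.1])   # P(Y = c | X = x)
post_imperfect = P @ post_true          # P(Yhat = j | X = x)
print(post_imperfect)                   # [0.53 0.28 0.19]
\end{verbatim}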
At first, we derive a bound that connects the error of the true and the imperfect label in misclassifying a particular example $\mbf{x} \in \mathcal{X}$. We denote \begin{align*}
r(\mbf x) &= \sum_{\substack{{c\in\{1,\dots,K\}}\\{c\neq B_Q(\mbf{x})}}}P(Y=c|\mbf{X}=\mbf{x}), \qquad
\hat r(\mbf x) = \sum_{\substack{{c\in\{1,\dots,K\}}\\{c\neq B_Q(\mbf{x})}}}P(\hat Y=c|\mbf{X}=\mbf{x}). \end{align*} \begin{thm} \label{thm:one-ex-risk-mislab-bound} Let $\mbf{P}$ be the mislabeling matrix, and assume that $p_{i,i}> p_{i,j},\ \forall{i,j}\in\{1,\dots,K\}^2$. Then, for all choice of $Q$ on a hypothesis space $\mathcal{H}$ we have, for $\mbf x \in \mathcal X$, \begin{align}
r(\mbf{x}) \leq \frac{\hat{r}(\mbf{x})}{\delta(\mbf{x})}-\frac{1-\alpha(\mbf{x})}{\delta(\mbf{x})}, \label{eq:one-x-mislabel-ineq} \end{align} with $\delta(\mbf{x}):= p_{B_Q(\mbf{x}),B_Q(\mbf{x})}- \max_{j\in\mathcal{Y}\setminus\{B_Q(\mbf{x})\}}p_{B_Q(\mbf{x}),j}$ and $\alpha(\mbf{x}):=p_{B_Q(\mbf{x}),B_Q(\mbf{x})}$. \end{thm} \begin{proof} First, from the definition of $\hat{r}(\mbf{x})$ and applying Eq. \eqref{eq:mislab-prob-transformation} we obtain that
\begin{align}
\hat{r}(\mbf{x}) &= 1 - P(\hat Y=B_Q(\mbf{x})|\mbf X = \mbf{x}) = 1 - \sum_{j=1}^K p_{B_Q(\mbf{x}),j}P(Y=j|\mbf X = \mbf{x}) \nonumber\\
&=1 - p_{B_Q(\mbf{x}),B_Q(\mbf{x})}P(Y=B_Q(\mbf{x})|\mbf X = \mbf{x})- \sum_{\substack{{j=1}\\{j\neq B_Q(\mbf{x})}}}^K p_{B_Q(\mbf{x}),j}P(Y=j|\mbf X = \mbf{x}) \nonumber
\end{align}
One can notice that
\begin{align*}
\sum_{\substack{{j=1}\\{j\neq B_Q(\mbf{x})}}}^K p_{B_Q(\mbf{x}),j}P(Y=j|\mbf X = \mbf{x}) &\leq \max_{j\in\mathcal{Y}\setminus\{B_Q(\mbf{x})\}}p_{B_Q(\mbf{x}),j}\sum_{\substack{{j=1}\\{j\neq B_Q(\mbf{x})}}}^K P(Y=j|\mbf X = \mbf{x}) \\
&= \max_{j\in\mathcal{Y}\setminus\{B_Q(\mbf{x})\}}p_{B_Q(\mbf{x}),j}(1-P(Y=B_Q(\mbf{x})|\mbf X = \mbf{x})).
\end{align*}
Finally, we infer the following inequality:
\begin{align}
\hat{r}(\mbf{x}) &\geq (p_{B_Q(\mbf{x}),B_Q(\mbf{x})}-\max_{j\in\mathcal{Y}\setminus\{B_Q(\mbf{x})\}}p_{B_Q(\mbf{x}),j})(1-P(Y=B_Q(\mbf{x})|\mbf X = \mbf{x}))+1-p_{B_Q(\mbf{x}),B_Q(\mbf{x})} \nonumber\\
&= \delta(\mbf{x})r(\mbf{x})+1-\alpha(\mbf{x}). \label{eq:one-x-mislabel-last-ineq-proof} \end{align} Taking into account the assumption that $p_{i,i}> p_{i,j},\ \forall{i,j}\in\{1,\dots,K\}^2$, we deduce that $\delta(\mbf{X})> 0$, which concludes the proof. \end{proof} This theorem gives us insights on how the true error rate can be bounded given the error rate of the imperfect label and the mislabeling matrix. With the quantities $\delta(\mbf{x})$ and $\alpha(\mbf{x})$, we perform a correction of $\hat{r}(\mbf{x})$. Note that when there is no mislabeling, the left and right sides of Eq. \eqref{eq:one-x-mislabel-ineq} are equal, since $\alpha(\mbf{x})=1$ and $\delta(\mbf{x})=1$ in this case.
Note that this theorem holds also for a more general case when correction probabilities depend on the example $\mbf{x}$. In this case, all probabilities $p_{i,j}$ are replaced by $p^\mbf{x}_{i,j}:= P(\hat{Y}=i|Y=j,\mbf{X}=\mbf{x})$. Since it is harder to estimate $p^\mbf{x}_{i,j}$ compared to $p_{i,j}$, we stick to consider the class-related model described in Eq. \eqref{eq:mislab-prob-transformation}.
In the theorem, the mislabeling matrix is assumed given, while in practice it has to be estimated. Since the number of matrix entries grows quadratically with the increase of $K$, a direct estimation of the true posterior probabilities from Eq. \eqref{eq:mislab-prob-transformation} may be more affected by the estimation error than the bound itself as the latter needs to know only $2K$ entries. We give more details about estimation of the mislabeling matrix in Section \ref{sec:concl}.
The bound can be compared with a bound derived in \citet[Eq. (3.14), p. 284]{Chittineni:1980} for the optimal Bayes classifier (maximum a-posteriori rule). It is shown that $r(\mbf{x})\leq 1-\frac{1-\hat{r}(\mbf{x})}{\beta}$, where $ \beta=\max_{i=1,\dots, K}\left(\sum_{j=1}^K p_{i,j}\right)$. One can notice that the regularizer $\beta$ is constant with respect to $\mbf{x}$, so the penalization of the error rate $\hat{r}(\mbf{x})$ does not depend on the label the classifier predicts. Another limitation is that the bound assumes that the Bayes classifier is optimal.
The assumption of Theorem \ref{thm:one-ex-risk-mislab-bound} requires that the diagonal entries of the mislabeling matrix are the largest elements in their corresponding columns, which means that the imperfect label is reasonably correlated with the true label.
However, in practice, the assumption may not hold, so the theorem is not applicable. To overcome this, it can be relaxed by considering $\lambda>0$ such that $\lambda+\delta(\mbf{x})>0$, and so we get a bound for all choices of $Q$ on a hypothesis space $\mathcal{H}$: \begin{align}
r(\mbf{x}) \leq \frac{\hat{r}(\mbf{x})}{\lambda+\delta(\mbf{x})}-\frac{1-\lambda-\alpha(\mbf{x})}{\lambda+\delta(\mbf{x})}. \label{eq:one-x-mislabel-ineq-with-lam} \end{align} When $\delta(\mbf{x})$ is close to 0, it also avoids the bound to become arbitrarily large. The use of this bound is illustrated in Section \ref{sec:relax_bound} of Appendix.
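Concretely, the correction in Eq. \eqref{eq:one-x-mislabel-ineq} and its relaxed form above only require the row of $\mbf{P}$ associated with the predicted class, as the following sketch shows (the mislabeling matrix and the error value are illustrative; the function name is ours).
\begin{verbatim}
import numpy as np

def corrected_error_bound(r_hat, pred, P, lam=0.0):
    """Bound on the true error r(x) from the imperfect-label error r_hat(x).

    r_hat : 1 - P(Yhat = B_Q(x) | x), error of the imperfect label at x
    pred  : predicted class B_Q(x)
    P     : (K, K) mislabeling matrix, P[j, c] = P(Yhat = j | Y = c)
    lam   : relaxation parameter, must satisfy lam + delta(x) > 0
    """
    row = P[pred]                                  # p_{B_Q(x), .}
    alpha = row[pred]                              # alpha(x)
    delta = alpha - np.max(np.delete(row, pred))   # delta(x)
    return (r_hat - (1.0 - lam - alpha)) / (lam + delta)

P = np.array([[0.8, 0.1, 0.2],
              [0.1, 0.7, 0.1],
              [0.1, 0.2, 0.7]])
print(corrected_error_bound(r_hat=0.3, pred=0, P=P))   # (0.3 - 0.2) / 0.6 = 0.1667
\end{verbatim}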
\subsection{C-Bounds with Imperfect Labels}
Based on Theorem \ref{thm:one-ex-risk-mislab-bound}, we bound the generalization error $R(B_Q)$, which is the expectation of $r(\mbf{X})$.
By taking expectation in Eq. \eqref{eq:one-x-mislabel-ineq}, we obtain that \begin{align}
R(B_Q) = \E_{\mbf{X}} r(\mbf{X}) \leq \E_{\mbf{X}} \frac{\hat{r}(\mbf{X})}{\delta(\mbf{X})}-\E_{\mbf{X}}\frac{1-\alpha(\mbf{X})}{\delta(\mbf{X})}.
\label{eq:expect-from-one-x-risk} \end{align} One can see that for every $\mbf{x}$, $\hat{r}(\mbf{x})$ is multiplied by a positive weight $1/\delta(\mbf{X})>0$, so the first term of the right-hand side is a weighted generalization error of the imperfect label. To cope with this, we derive a weighted C-bound by proposing the next theorem. \begin{thm} \label{thm:w-cbound}
Let $\hat{M}$ be a random variable such that $[\hat{M}|\mbf{X}=\mbf{x}]$ is a discrete random variable that is equal to the margin $\hat{M}_Q(\mbf{x}, i)$ with probability $P(\hat{Y}=i|\mbf{X}=\mbf{x})$, $i\in\{1,\dots,K\}$. Assume that every diagonal entry of the mislabeling matrix $\mbf{P}$ is the largest element in the corresponding column, i.e., $p_{i,i}> p_{i,j}$ for all $i\neq j$. Then, for all choices of $Q$ on a hypothesis space $\mathcal{H}$, and for all distributions $P(\mbf{X})$ over $\mathcal{X}$ and $P(Y|\mbf{X})$ over $\mathcal{Y}$, we have: \begin{align} \label{eq:w-cbound}\tag{CBIL}
R(B_Q)\leq \psi_{\mbf{P}} - \frac{\left(\mu_1^{\hat M,{\mbf{P}}}\right)^2}{\mu_2^{\hat M,{\mbf{P}}}}, \end{align} if $\mu_1^{\hat M,{\mbf{P}}}>0$, where \begin{itemize}
\item $\psi_{\mbf{P}}:=\E_{\mbf{X}}\frac{\alpha(\mbf{X})}{\delta(\mbf{X})} $ with $\delta$ and $\alpha$ defined as in Theorem \ref{thm:one-ex-risk-mislab-bound},
\item $\mu_1^{\hat M,{\mbf{P}}}:=\int_{\R^{d+1}} \frac{m}{\delta(\mbf{x})} P(\hat M=m,\mbf{X}=\mbf{x})\diff\mbf{x}\diff m$ is the weighted 1st margin moment,
\item $\mu_2^{\hat M,{\mbf{P}}}:=\int_{\R^{d+1}} \frac{m^2}{\delta(\mbf{x})} P(\hat M=m,\mbf{X}=\mbf{x})\diff\mbf{x}\diff m$ is the weighted 2nd margin moment. \end{itemize} \end{thm} \begin{proof}
At first, let us introduce a normalization factor $\omega_{\sss\mbf{P}}$ defined as follows:
\begin{align*}
\omega_{\sss\mbf{P}}:=\E_{\mbf{X}}\frac{1}{\delta(\mbf{X})} =\int_{\R^{d+1}} \frac{P(\hat M=m,\mbf{X}=\mbf{x})}{\delta(\mbf{x})}\diff\mbf{x}\diff m.
\end{align*}
Recall that $\hat r(\mbf{x})=P(\hat{M}\leq 0|\mbf{X}=\mbf{x})$. Then, we can write:
\begin{align}
\E_{\mbf{X}}\frac{\hat{r}(\mbf{X})}{\delta(\mbf{X})}&=\int_{\R^d} \frac{1}{\delta(\mbf{x})}P(\hat M\leq 0|\mbf{X}=\mbf{x})P(\mbf{X}=\mbf{x})\diff \mbf{x}
=\int_{-\infty}^{0}\int_{\R^d} \frac{P(\hat M=m,\mbf{X}=\mbf{x})}{\delta(\mbf{x})} \diff\mbf{x}\diff m \nonumber\\
&=\omega_{\sss\mbf{P}}\int_{-\infty}^{0}\frac{\int_{\R^d} P(\hat M=m,\mbf{X}=\mbf{x})/\delta(\mbf{x})\diff\mbf{x}}{\int_{\R^{d+1}} P(\hat M=m,\mbf{X}=\mbf{x})/\delta(\mbf{x})\diff\mbf{x}\diff m}\diff m
=\omega_{\sss\mbf{P}} P(\hat M_\omega <0),
\label{eq:w-prob-margin-neg}
\end{align}
where the last equality is given by a random variable $\hat{M}_{\omega}$ coming from the density $f_{\omega}$ defined as the expression inside the integral in Eq. \eqref{eq:w-prob-margin-neg}.
We further notice that the weighted first and second moments can be represented as:
\begin{align*}
\mu_1^{\hat M, \mbf{P}} &= \int_{\R^{d+1}} \frac{m}{\delta(\mbf{x})} P(\hat M=m,\mbf{X}=\mbf{x})\diff\mbf{x}\diff m= \omega_{\sss\mbf{P}} \mu^{\hat M_\omega}_1,\\
\mu_2^{\hat M, \mbf{P}} &= \int_{\R^{d+1}} \frac{m^2}{\delta(\mbf{x})} P(\hat M=m,\mbf{X}=\mbf{x})\diff\mbf{x}\diff m= \omega_{\sss\mbf{P}} \mu^{\hat M_\omega}_2.
\end{align*}
From this, we also obtain that $var(\hat M_\omega) = \left(\mu_2^{\hat M, \mbf{P}}/\omega_{\sss\mbf{P}}\right)-\left(\mu_1^{\hat M, \mbf{P}}/\omega_{\sss\mbf{P}}\right)^2$.
Then, using the Cantelli-Chebyshev inequality (Lemma \ref{lem:cantelli-chebyshev}) with $a=\mu^{\hat M_\omega}_1=\mu_1^{\hat M, \mbf{P}}/\omega_{\sss\mbf{P}}$ we deduce the following inequality:
\begin{align}
P(\hat M_\omega < 0) &\leq \frac{\left(\mu_2^{\hat M, \mbf{P}}/\omega_{\sss\mbf{P}}\right)-\left(\mu_1^{\hat M, \mbf{P}}/\omega_{\sss\mbf{P}}\right)^2}{\left(\mu_2^{\hat M, \mbf{P}}/\omega_{\sss\mbf{P}}\right)-\left(\mu_1^{\hat M, \mbf{P}}/\omega_{\sss\mbf{P}}\right)^2 + \left(\mu_1^{\hat M, \mbf{P}}/\omega_{\sss\mbf{P}}\right)^2} =1-\frac{\left(\mu_1^{\hat M, \mbf{P}}\right)^2}{\omega_{\sss\mbf{P}}\mu_2^{\hat M, \mbf{P}}}.
\label{eq:bound-for-w-prob-neg}
\end{align}
Combining Eq. \eqref{eq:bound-for-w-prob-neg} and Eq. \eqref{eq:expect-from-one-x-risk} we infer \eqref{eq:w-cbound}:
\begin{align*}
R(B_Q) &\leq \E_{\mbf{X}} \frac{\hat{r}(\mbf{X})}{\delta(\mbf{X})}-\E_{\mbf{X}}\frac{1-\alpha(\mbf{X})}{\delta(\mbf{X})} = \omega_{\sss\mbf{P}}P(\hat M_\omega < 0) - \omega_{\sss\mbf{P}} + \psi_{\mbf{P}}
\leq \psi_{\mbf{P}} - \frac{\left(\mu_1^{\hat M, \mbf{P}}\right)^2}{\mu_2^{\hat M, \mbf{P}}}.
\end{align*} \end{proof} Given data with imperfect labels, the direct evaluation of the generalization error rate may be biased, leading to an overly optimistic evaluation. Using the mislabeling matrix $\mathbf{P}$ we derive a more conservative C-bound, where the error of $\mbf{x}$ is penalized by the factor $1/\delta(\mbf{x})$. When there is no mislabeling, $\psi_{\mbf{P}}=1$, $\mu_1^{\hat M, \mbf{P}}$ and $\mu_2^{\hat M, \mbf{P}}$ are equivalent to $\mu_1^{\hat M}$ and $\mu_2^{\hat M}$, so we obtain the regular C-bound \eqref{eq:prob-cbound}.
In particular, this general result can be used to evaluate the error rate in the semi-supervised setting when mislabeling arises from pseudo-labeling of unlabeled examples via self-learning. Compared with the transductive bound \eqref{eq:TB} obtained as a corollary of Theorem \ref{thm:tr-bound-bayes-multi}, \eqref{eq:w-cbound} directly upper bounds the error rate, so it will be tighter in most cases. Notably, the value of \eqref{eq:TB} grows with the number of classes. Note that there exist other attempts to evaluate the C-bound in the semi-supervised setting: in the binary case, \cite{Lacasse:2007,Laviolette:2011} estimated the second margin moment using additional unlabeled data by expressing it via the disagreement of hypotheses. However, this holds for the binary case only.
In this theorem, we have combined the mislabeling bound \eqref{eq:one-x-mislabel-ineq} with the supervised multi-class C-bound \citep{Laviolette:2017}; another possibility could be to combine it with the bound based on the second-order Markov's inequality \citep{Masegosa:2020}. As pointed out by \cite{Masegosa:2020}, the latter can be regarded as a relaxation of the C-bound, but it is easier to estimate from data in practice. Note that the tightest bound does not always imply the lowest error, so the use of the C-bound in model selection tasks may be more advantageous as it involves both the individual strength of hypotheses and the correlation between their errors, while the bound of \cite{Masegosa:2020} is based on the error correlation only.
\subsection{PAC-Bayesian Theorem for C-Bound Estimation} When the margin mean, the margin variance and the mislabeling matrix are empirically estimated from data, evaluation of \eqref{eq:w-cbound} may be optimistically biased. In this section, we analyze the behavior of the estimate with respect to the sample size. To achieve that, we use the PAC-Bayesian theory initiated by \citet{Mcallester:1999,McAllester:2003} to derive a Probably Approximately Correct bound defined below.
\begin{thm} \label{thm:pac-bayesian-cbound} Under the notations of Theorem \ref{thm:w-cbound}, for any set of classifiers $\mathcal{H}$, for any prior distribution $P$ on $\mathcal{H}$ and any $\epsilon \in (0,1]$, with a probability at least $1-\epsilon$ over the choice of the sample of size $n=l+u$, for every posterior distribution $Q$ over $\mathcal{H}$, if $\mu^{\hat{M}}_1>0$ and $\tilde{\delta}(\mbf{x})>0$, we have:
\begin{align}
R(B_Q) \leq \tilde\psi - \frac{ \tilde{\mu}_1^2}{\tilde{\mu}_2}, \label{eq:pac-bayes-bound}
\end{align}
where \begin{align*}
\tilde{\mu}_1 &= \frac{1}{u}\sum_{i=1}^u\frac{1}{\tilde{\delta}(\mbf{x}_i)}\sum_{c=1}^K M_Q(\mbf{x}_i, c) P(Y\!=\!c|\mbf{X}\!=\!\mbf{x}_i) - B_1 \sqrt{\frac{2}{u}\left[\kld{Q}{P} + \ln\frac{2\sqrt{u}}{\epsilon}\right]}
\\
\tilde{\mu}_2 &= \frac{1}{u}\sum_{i=1}^u\frac{1}{\tilde{\delta}(\mbf{x}_i)}\sum_{c=1}^K (M_Q(\mbf{x}_i, c))^2 P(Y\!=\!c|\mbf{X}\!=\!\mbf{x}_i) + B_2 \sqrt{\frac{2}{u}\left[2\kld{Q}{P} + \ln\frac{2\sqrt{u}}{\epsilon}\right]}
\\
\tilde{\psi} &=
\frac{1}{u}\sum_{i=1}^u \frac{\tilde\alpha(\mbf{x}_i)}{\tilde\delta(\mbf{x}_i)} + B_3 \sqrt{\frac{2}{u} \ln\frac{2\sqrt{u}}{\epsilon}}\\
\tilde{\delta}(\mbf{x}) &= \hat{\delta}(\mbf{x})-\sqrt{\frac{1}{2l_{c_\mbf{x}}}\ln\frac{2\sqrt{l_{c_\mbf{x}}}}{\epsilon}}-\sqrt{\frac{1}{2l_{j_\mbf{x}}}\ln\frac{2\sqrt{l_{j_\mbf{x}}}}{\epsilon}},\text{ with } c_\mbf{x}:=B_Q(\mbf{x}), j_\mbf{x}:=\argmin_{j\in\mathcal{Y}\setminus\{c_\mbf{x}\}}l_j,\\
\tilde\alpha(\mbf{x}) &= \hat\alpha(\mbf{x}) + \sqrt{\frac{1}{2l_{c_\mbf{x}}}\ln\frac{2\sqrt{l_{c_\mbf{x}}}}{\epsilon}},
\end{align*}
and where $\hat{\delta}(\mbf{x})$ and $\hat\alpha(\mbf{x})$ are empirical estimates respectively of $\delta(\mbf{x})$ and $\alpha(\mbf{x})$ based on the available labeled set, $\kld{Q}{P}$ is the Kullback-Leibler divergence between $Q$ and $P$, and $l_j\!=\!\sum_{i = 1}^{l}\I{y_i=j}$ is the number of labeled training examples from the true class $j$.
\end{thm} The proof is a combination of Propositions \ref{prop:pac-bayes-bound-first-moment}, \ref{prop:pac-bayes-bound-second-moment} and \ref{prop:pac-bound-psi} that are deferred to Appendix \ref{sec:appendix-cbound}.
Thus, by using Eq. \eqref{eq:pac-bayes-bound} we additionally penalize the C-bound by the sample size and the divergence between $Q$ and $P$. As $u$ grows, the penalization becomes less severe, so $\tilde{\mu}_1$ and $\tilde{\mu}_2$ are close to $\mu^{\hat{M}}_1$ and $\mu^{\hat{M}}_2$. Similarly, $\tilde{\delta}(\mbf x)$ and $\tilde{\alpha}(\mbf x)$ get closer to $\hat{\delta}(\mbf x)$ and $\hat{\alpha}(\mbf x)$ as the number of examples used to estimate the mislabeling matrix increases, which we take to be $l$ for simplicity.
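As an illustration of how the penalties shrink, the following Python sketch (the function names are ours) evaluates the correction term $r(l_k)$ of Proposition \ref{prop:pac-bound-mislab-mat} and the penalty applied to the weighted margin moments in Theorem \ref{thm:pac-bayesian-cbound}; with the uniform posterior used in our experiments, $\kld{Q}{P}=0$:
\begin{verbatim}
import numpy as np

def mislabel_penalty(l_k, eps):
    # r(l_k): correction applied to the empirical delta and alpha
    return np.sqrt(np.log(2.0 * np.sqrt(l_k) / eps) / (2.0 * l_k))

def moment_penalty(B, u, kl, eps, order=1):
    # penalty on the weighted margin moment of the given order (1 or 2)
    kl_term = kl if order == 1 else 2.0 * kl
    return B * np.sqrt(2.0 / u * (kl_term + np.log(2.0 * np.sqrt(u) / eps)))

# both terms vanish as the class counts l_k and the unlabeled sample size u grow
for u in (10**2, 10**4, 10**6):
    print(u, moment_penalty(B=1.0, u=u, kl=0.0, eps=0.05))
\end{verbatim}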
Note that, in contrast to the supervised case \citep[Theorem 3]{Laviolette:2017}, $B_1$ and $B_2$ can have a drastic influence on the bound's value, when $\tilde\delta(\mbf{x})$ is close to 0, which motivates in practice to use the $\lambda$-relaxation given by Eq. \eqref{eq:one-x-mislabel-ineq-with-lam}.
The obtained bound may be used to estimate the Bayes error from data, with the pseudo-labeled unlabeled examples serving as a hold-out set for estimating the margin moments, and the labeled examples serving as a hold-out set for estimating the mislabeling matrix. In the case of the random forest, the latter can be performed in the out-of-bag fashion as in \citep{Thiemann:2017,Lorenzen:2019}. However, the bound does not appear tighter in practice compared to the supervised case \citep{Laviolette:2017} due to the additional penalization on estimation of the mislabeling matrix. Making this bound tighter could be a good direction for future work. Nevertheless, when the focus is set on model selection, a common choice is to simply use an empirical estimate of the C-bound as an optimization criterion \citep{Bauvin:2020}.
\section{Experimental Results} \label{sec:num-exper}
In this section, we describe the numerical experiments that were performed to validate our proposed framework. First, we test in practice the multi-class self-learning algorithm (denoted by \texttt{MSLA}) described in Section \ref{sec:msla} by comparing its ability to learn from partially labeled data with other classification algorithms. Then, we illustrate the proposed \eqref{eq:w-cbound} on real data sets and analyze its behavior.
All experiments were performed on a cluster with an \texttt{Intel(R) Xeon(R) CPU E5-2640 v3} at \texttt{2.60GHz}, \texttt{32} cores, \texttt{256GB} of RAM, the \texttt{Debian 4.9.110-3 x86\_64} OS.
\subsection{Experimental Setup}
Experiments are conducted on publicly available data sets \citep{Dua:2019,Chang:2011,Xiao:2017}. Since we are interested in the practical use of our approach in the semi-supervised context, we would like to see if it performs well when $l\ll u$. Therefore, we do not use the train/test splits proposed by the data sources. Instead, we propose our own splits that make the situation closer to the semi-supervised context. Each experiment is conducted 20 times by randomly splitting the original data set into a labeled and an unlabeled part, keeping their respective sizes fixed at each iteration. The reported performance results are averaged over the 20 trials. We evaluate the performance as the accuracy score over the unlabeled training set (\texttt{ACC-U}).
In all our experiments, we consider the Random Forest algorithm \citep{Breiman:2001} (denoted by \texttt{RF}) with 200 trees grown to their maximal depth as the majority vote classifier with the uniform posterior distribution. For an observation $\mbf{x}$, we evaluate the vector of class votes $\{v(\mbf{x}, i)\}_{i=1}^K$ by averaging, over the trees, the vote given to each class by each tree. A tree computes a class vote as the fraction of training examples in a leaf belonging to that class.
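The following Python sketch (a synthetic stand-in for one of the benchmarks, relying on scikit-learn) shows how the splits and the class votes can be obtained; \texttt{predict\_proba} of the random forest returns exactly the per-class leaf fractions averaged over the trees:
\begin{verbatim}
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# synthetic data standing in for a benchmark: a small labeled and a large unlabeled part
X, y = make_classification(n_samples=2000, n_classes=3, n_informative=8, random_state=0)
X_l, X_u, y_l, y_u = train_test_split(X, y, train_size=100, random_state=0)

# 200 fully grown trees used as the majority vote with the uniform posterior
rf = RandomForestClassifier(n_estimators=200, max_depth=None, random_state=0)
rf.fit(X_l, y_l)
votes = rf.predict_proba(X_u)            # class votes v(x, i), shape (u, K)
print("ACC-U of the supervised RF:", np.mean(rf.predict(X_u) == y_u))
\end{verbatim}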
Experiments are conducted on 11 real data sets. The associated applications are image classification with the \texttt{Fashion} data set, the \texttt{Pendigits} and the \texttt{MNIST} databases of handwritten digits; a signal processing application with the \texttt{SensIT} data set for vehicle type classification and the human activity recognition \texttt{HAR} database; speech recognition using the \texttt{Vowel}, the \texttt{Isolet} and the \texttt{Letter} data sets; document recognition using the \texttt{Page Blocks} database; and finally applications to bioinformatics with the \texttt{Protein} and \texttt{DNA} data sets. The main characteristics of these data sets are summarized in Table~\ref{tab:data set-description}. \begin{table}[t]
\centering
\scalebox{0.82}{
\begin{tabular}{ccccc}
\toprule
Data set & \# of labeled examples, & \# of unlabeled examples, & Dimension, & \# of classes, \\
& $l$ & $u$ & $d$ & $K$ \\
\midrule
\texttt{Vowel} & 99 & 891 & 10 & 11 \\
\texttt{Protein} & 129 & 951 & 77 & 8 \\
\texttt{DNA} & 31 & 3155 & 180 & 3 \\
\texttt{PageBlocks} & 1094 & 4379 & 10 & 5 \\
\texttt{Isolet} & 389 & 7408 & 617 & 26 \\
\texttt{HAR} & 102 & 10197 & 561 & 6 \\
\texttt{Pendigits} & 109 & 10883 & 16 & 10 \\
\texttt{Letter} & 400 & 19600 & 16 & 26 \\
\texttt{Fashion} & 175 & 69825 & 784 & 10 \\
\texttt{MNIST} & 175 & 69825 & 784 & 10 \\
\texttt{SensIT} & 49 & 98479 & 100 & 3 \\
\bottomrule
\end{tabular}} \caption{Characteristics of data sets used in our experiments ordered by the size of the training set $(n=l+u)$.} \label{tab:data set-description} \end{table}
The proposed \texttt{MSLA} that automatically finds the threshold by minimizing the conditional Bayes error rate, is compared with the following baselines: \begin{itemize}
\item a fully supervised \texttt{RF} trained using only labeled examples. The approach is obtained at the initialization step of \texttt{MSLA} and once learned it is directly applied to predict the class labels of the whole unlabeled set;
\item the scikit-learn implementation \citep{scikit-learn} of the graph based, label spreading algorithm \citep{Zhou:2004} denoted by \texttt{LS};
\item the one-versus-all extension of a transductive support vector machine \cite{Joachims:1999} using the Quasi-Newton scheme. The approach was proposed by \citet{Gieseke:2014} and is further denoted as \texttt{QN-S3VM}\footnote{The source code for the binary \texttt{QN-S3VM} is available at \url{http://www.fabiangieseke.de/index.php/code/qns3vm}.};
\item a semi-supervised extension of the linear discriminant analysis \texttt{Semi-LDA}, which is based on the contrastive pessimistic likelihood estimation proposed by \cite{Loog:2015};
\item a semi-supervised extension of the random forest \texttt{DAS-RF} proposed by \cite{Leistner:2009} where the classifier is repeatedly re-trained on the labeled and all the unlabeled examples with pseudo-labels optimized via deterministic annealing;
\item the multi-class extension of the classical self-learning approach (denoted by \texttt{FSLA}) described in \citet{Tur:2005} with a fixed prediction vote threshold;
\item a self-learning approach (denoted by \texttt{CSLA}) where the threshold is defined via curriculum learning by taking it as the $(1-t\cdot\Delta)$-th percentile of the prediction vote distribution at the step $t=1,2,\dots$ \citep{Cascantebonilla:2020}. \end{itemize}
As the number of labeled training examples $|\mathrm{Z}_{\mathcal{L}}|$ is small, hyperparameter tuning cannot be performed properly. At the same time, the performance of the baselines may be sensitive to some of their hyperparameters. For this reason, we compute \texttt{LS}, \texttt{QN-S3VM}, \texttt{Semi-LDA} and \texttt{DAS-RF} on a grid of parameter values, and then choose the hyperparameter for which the performance is the best on average over the 20 trials. We tune the RBF kernel parameter $\sigma\in\{10, 1.5, 0.5, 10^{-1}, 10^{-2}, 10^{-3}\}$ for \texttt{LS}, the regularization parameters $(\lambda,\lambda')\in\{10^{-1}, 10^{-2}, 10^{-3}\}^2$ for \texttt{QN-S3VM}, the learning rate $\alpha\in\{10^{-4},10^{-3},10^{-2}\}$ for \texttt{Semi-LDA}, and the initial temperature $T_0\in\{10^{-3}, 5\cdot10^{-3}, 10^{-2}\}$ for \texttt{DAS-RF}. Other hyperparameters of these algorithms are left at their default values. In particular, in \texttt{DAS-RF} the strength parameter and the number of iterations are set to 0.1 and 10, respectively.
While the aforementioned parameters are rather data-dependent, the choice of $\theta$ for \texttt{FSLA} and of $\Delta$ for \texttt{CSLA} depends more on the prediction vote distribution output by the base classifier. After manually testing different values, we have found that \texttt{FSLA}$_{\theta=0.7}$ and \texttt{CSLA}$_{\Delta=1/3}$ are good choices for the random forest. For \texttt{FSLA}, we terminate the learning procedure as soon as the algorithm makes 10 iterations, which reduces the computation time and may also improve the performance, since, in this case, the algorithm is less affected by noise. \cite{Cascantebonilla:2020} used for \texttt{CSLA} a slightly different self-learning architecture, where the set of selected pseudo-labeled examples is included just for one iteration (as if Step 3 in Algorithm 1 were replaced by $\mathrm{Z}_{\mathcal{P}}\leftarrow S$). In our context, we have found that the performance of \texttt{CSLA} is identical for both architectures.
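As an illustration (the helper names are ours, and we interpret the percentile as a quantile of the vote distribution), the two baseline thresholds can be sketched as follows:
\begin{verbatim}
import numpy as np

def fsla_threshold(theta=0.7):
    # fixed vote threshold, identical at every self-learning iteration
    return theta

def csla_threshold(max_votes, t, delta=1/3):
    """Curriculum threshold: the (1 - t * delta) quantile of the vote distribution.

    max_votes -- prediction votes max_i v(x, i) of the unlabeled examples
    t         -- current self-learning iteration (1, 2, ...)
    """
    q = max(0.0, 1.0 - t * delta)
    return np.quantile(max_votes, q)
\end{verbatim}
With $\Delta=1/3$, the curriculum threshold reaches the minimum of the vote distribution after three iterations, so all remaining examples end up pseudo-labeled.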
\subsection{Illustration of MSLA} \label{sec:msla-exp}
In our setup, a time deadline is set: we stop the computation for an algorithm if one trial takes more than 4 hours. Table \ref{tab:multi-class-exp-res} summarizes the results obtained by \texttt{RF}, \texttt{LS}, \texttt{QN-S3VM}, \texttt{Semi-LDA}, \texttt{DAS-RF}, \texttt{FSLA}, \texttt{CSLA} and \texttt{MSLA}. We use bold face to indicate the highest performance rates, and the symbol $\downarrow$ indicates that the performance is significantly worse than the best result according to the Mann-Whitney U test \citep{Mann:1947} used at the p-value threshold of 0.01.
\setlength{\tabcolsep}{0.45em}
\begin{table}[t]
\centering
{\scalebox{0.7}{
\begin{tabular}{l|cccccccc}
\toprule
Data set & \texttt{RF} & \texttt{LS} & \texttt{QN-S3VM} & \texttt{Semi-LDA} & \texttt{DAS-RF} & \texttt{FSLA$_{\,\bm{\theta} = 0.7}$} & \texttt{CSLA$_{\,\Delta = 1/3}$} & \texttt{MSLA} \\
\midrule
\texttt{Vowel} & $.586 \pm .028$ & $\textbf{.602} \pm .026$ & $.208^\downarrow \pm .029$ & .432$^\downarrow$ $\pm$ .029 & .587 $\pm$ .028 & .531$^\downarrow$ $\pm$ .034 & .576$^\downarrow$ $\pm$ .031 & .586 $\pm$ .026 \\
\midrule
\texttt{Protein} & $.764^\downarrow \pm .032$ & $.825 \pm .028$ & $.72^\downarrow \pm .034$ & \textbf{.842} $\pm$ .029 & .768$^\downarrow$ $\pm$ .036 & .687$^\downarrow$ $\pm$ .036 & .771$^\downarrow$ $\pm$ .035 & .781$^\downarrow$ $\pm$ .034 \\
\midrule
\texttt{DNA} & $.693^\downarrow \pm .074$ & $.584^\downarrow \pm .038$ & $\textbf{.815} \pm .025$ & .573$^\downarrow$ $\pm$ .037 & .693$^\downarrow$ $\pm$ .083 & .521$^\downarrow$ $\pm$ .095 & .671$^\downarrow$ $\pm$ .112 & .702$^\downarrow$ $\pm$ .082 \\
\midrule
\texttt{PageBlocks} & $.965 \pm .003$ & $.905^\downarrow \pm .004$ & $.931^\downarrow \pm .003$ & .935$^\downarrow$ $\pm$ .009 & .965 $\pm$ .003 & .964 $\pm$ .004 & .965 $\pm$ .003 & \textbf{.966} $\pm$ .002 \\
\midrule
\texttt{Isolet} & $.854^\downarrow \pm .016$ & $.727^\downarrow \pm .01$ & $.652^\downarrow \pm .016$ & .787$^\downarrow$ $\pm$ .019 & .859$^\downarrow$ $\pm$ .018 & .7$^\downarrow$ $\pm$ .04 & .843$^\downarrow$ $\pm$ .021 & \textbf{.875} $\pm$ .014 \\
\midrule
\texttt{HAR} & $.851 \pm .024$ & $.215^\downarrow \pm .05$ & $.78^\downarrow \pm .02$ & .743$^\downarrow$ $\pm$ .043 & .852 $\pm$ .024 & .81$^\downarrow$ $\pm$ .041 & .841 $\pm$ .029 & \textbf{.854} $\pm$ .026 \\
\midrule
\texttt{Pendigits} & $.863^\downarrow \pm .022$ & $\textbf{.916} \pm .013$ & $.675^\downarrow \pm .022$ & .824$^\downarrow$ $\pm$ .012 & .872$^\downarrow$ $\pm$ .023 & .839$^\downarrow$ $\pm$ .036 & .871$^\downarrow$ $\pm$ .029 & .884$^\downarrow$ $\pm$ .022 \\
\midrule
\texttt{Letter} & $.711 \pm .011$ & $.664^\downarrow \pm .01$ & $.064^\downarrow \pm .013$ & .589$^\downarrow$ $\pm$ .016 & .718 $\pm$ .012 & .651$^\downarrow$ $\pm$ .015 & \textbf{.72} $\pm$ .013 & .717 $\pm$ .013 \\
\midrule
\texttt{Fashion} & $.718 \pm .022$ & \texttt{NA} & \texttt{NA} & .537$^\downarrow$ $\pm$ .027 & .722 $\pm$ .023 & .64$^\downarrow$ $\pm$ .04 & .713 $\pm$ .026 & \textbf{.723} $\pm$ .023 \\
\midrule
\texttt{MNIST} & $.798^\downarrow \pm .015$ & \texttt{NA} & \texttt{NA} & .423$^\downarrow$ $\pm$ .029 & .822$^\downarrow$ $\pm$ .017 & .705$^\downarrow$ $\pm$ .055 & .829$^\downarrow$ $\pm$ .02 & \textbf{.857} $\pm$ .013 \\
\midrule
\texttt{SensIT} & $\textbf{.723} \pm .022$ & \texttt{NA} & \texttt{NA} & .647$^\downarrow$ $\pm$ .042 & \textbf{.723} $\pm$ .022 & .692$^\downarrow$ $\pm$ .023 & .713 $\pm$ .024 & .722 $\pm$ .021 \\
\bottomrule
\end{tabular}}} \caption{Classification performance on different data sets described in Table \ref{tab:data set-description}. The performance is computed using the accuracy score on the unlabeled training examples (\texttt{ACC-U}). The sign $^\downarrow$ shows if the performance is statistically worse than the best result on the level 0.01 of significance. \texttt{NA} indicates the case when the time limit was exceeded.} \label{tab:multi-class-exp-res} \end{table}
From these results it comes out that
\begin{itemize}
\item in 5 of 11 cases, the \texttt{MSLA} performs better than its opponents. On data sets \texttt{Isolet} and \texttt{MNIST} it significantly outperforms all the others, and it significantly outperforms the baseline \texttt{RF} on \texttt{Isolet}, \texttt{Pendigits} and \texttt{MNIST}\,(6\% improvement);
\item \texttt{LS} and \texttt{QN-S3VM} did not scale to the larger data sets (\texttt{Fashion}, \texttt{MNIST} and \texttt{SensIT}), while \texttt{MSLA} did not exceed 2 minutes per trial on these data sets (see Table \ref{tab:computationTime});
\item the performance of \texttt{LS} and \texttt{Semi-LDA} varies greatly across data sets, which may be caused by the topology of the data. In contrast, \texttt{MSLA} has more stable results over all data sets as it is based on the predictive score, and the \texttt{RF} is used as the base classifier;
\item since the \texttt{QN-S3VM} is a binary classifier by nature, its one-versus-all extension is not robust with respect to the number of classes. This can be observed on \texttt{Vowel}, \texttt{Isolet} and \texttt{Letter}, where the number of classes is high;
\item from our observation, both \texttt{LS} and \texttt{QN-S3VM} are highly sensitive to the choice of the hyperparameters. However, it is not very clear whether these hyperparameters can be properly tuned given an insufficient number of labeled examples. The same concern applies to all the other semi-supervised baselines, while \texttt{MSLA} does not require any particular tuning since it finds the threshold $\bm{\theta}$ automatically;
\item while the approach proposed by \cite{Loog:2015} always guarantees an improvement of the likelihood compared to the supervised case, we have observed that the classification accuracy is not always improved for \texttt{Semi-LDA} and may even degrade over the supervised linear discriminant analysis;
\item compared to the fully supervised approach, \texttt{RF}, the use of pseudo-labeled unlabeled training data (in \texttt{DAS-RF}, \texttt{FSLA}, \texttt{CSLA} or \texttt{MSLA}) may generally give no benefit or even degrade performance in some cases (\texttt{Vowel}, \texttt{PageBlocks}, \texttt{SensIT}). This may be due to the fact that the learning hypotheses are not met regarding the data sets where this effect is observed;
\item although for \texttt{DAS-RF} the performance is usually not degraded when $T_0$ is properly chosen, it has rather little improvement compared to \texttt{RF}. The performance of \texttt{FSLA} degrades most of the time, while degradation for \texttt{CSLA} is observed on 6 data sets. The latter suggests that the choice of the threshold for pseudo-labeling is crucial and challenging in the multi-class framework. Using the proposed criterion based on Eq. \eqref{eq:cond-bayes-error}, we can find the threshold efficiently;
\item from the results it can be seen that self-learning is also sensitive to the choice of the initial classifier. On some data sets, the number of labeled examples might be too small leading to a bad initialization of the first classifier trained over the labeled set. This implies that the initial votes are biased, so even with a well picked threshold we do not expect a great increase in performance (see Appendix \ref{sec:posterior-estimation} for more details).
\end{itemize}
\begin{figure}
\caption{Classification accuracy with respect to the proportion of unlabeled examples for the \texttt{MNIST} data set (a subsample of 3500 examples). On the graph, dots represent the average performance on the unlabeled examples over 20 random splits. For simplicity of illustration, the other considered algorithms are not displayed.}
\label{fig:SmallMNIST}
\end{figure}
We also analyze the behavior of the various algorithms for growing initial amounts of labeled data in the training set. Figure \ref{fig:SmallMNIST} illustrates this by showing the accuracy on a subsample of 3500 observations from \texttt{MNIST} of \texttt{RF}, \texttt{QN-S3VM}, \texttt{FSLA}$_{\bm{\theta}=0.7}$ and \texttt{MSLA} with respect to the percentage of labeled training examples. In this graph, the performance of \texttt{LS} is not depicted, since it is significantly lower compared to the other methods under consideration. As expected, all performance curves increase monotonically with respect to the additional labeled data. When there are sufficient labeled training examples, \texttt{MSLA}, \texttt{FSLA} and \texttt{RF} actually converge to the same accuracy performance, suggesting that the labeled data carries sufficient information and no additional information could be extracted from the unlabeled examples.
Further, we present a comparison of the learning algorithms under consideration by analyzing their complexity. The time complexity of the random forest \texttt{RF} is $O(T d \tilde{l}\log^2 \tilde{l})$ \citep{Louppe:2014}, where $T$ is the number of decision trees in the forest and $\tilde{l}\approx 0.632\cdot l$ is the number of training examples used for each tree. Since \texttt{RF} is employed in \texttt{DAS-RF} and self-learning, the time complexity of \texttt{DAS-RF}, \texttt{FSLA} and \texttt{CSLA} is $O(C T d\tilde{n}\log^2 \tilde{n})$, where $C$ is the number of times \texttt{RF} has been learned, $\tilde{n}\approx 0.632\cdot n$. In our experimental setup, $C=11$ for \texttt{FSLA} and \texttt{DAS-RF}, and $C=1/\Delta +1 = 4$ for \texttt{CSLA}.
The time required for finding the optimal threshold at every iteration of the \texttt{MSLA} is $O(K^2 R^2 n)$, where $R$ is the sampling rate of the grid. From this we deduce that the complexity of \texttt{MSLA} is $O(C\max(T d n\log^2 n, K^2 R^2 n))$. As $n$ grows, the complexity is written as $O(d n\log^2 n)$, since $C, T, R$ are constant. This indicates a good scalability of all considered pseudo-labeling methods for large-scale data as they also have a memory consumption proportional to $nd$, so the computation can be performed on a regular PC even for the large-scale applications.
In the label spreading algorithm, an iterative procedure is performed, where at every step the affinity matrix is computed. Hence, the time complexity of the \texttt{LS} is $O(M n^2 d)$, where $M$ is the maximal number of iterations. From our observation, the convergence of \texttt{LS} is highly influenced by the value of $\sigma$ and the data topology. The time complexity of the \texttt{QN-S3VM} is $O(n^2 d)$ \citep{Gieseke:2014}. Both algorithms suffer from high run-time for large-scale applications. Since \texttt{LS} and \texttt{QN-S3VM} evaluate respectively the affinity matrix and the kernel matrix of size $n$ by $n$, these algorithms have also large space complexity proportional to $n^2$. From our observation, for the large-scale data (\texttt{Fashion}, \texttt{MNIST}, \texttt{SensIT}) the maximal resident set size\footnote{Maximal resident set size (maxRSS) is the peak portion of memory that was occupied in RAM during the run.} of \texttt{LS} and \texttt{QN-S3VM} may reach up to 200GB of RAM, which is practically infeasible with lack of resources.
Finally, the time complexity of \texttt{Semi-LDA} is $O(M\max(nd^2, d^3))$, where $M$ is the maximal number of iterations and $O(\max(nd^2, d^3))$ is the complexity of the linear discriminant analysis assuming $n>d$ \citep{Cai:2008}, and the space complexity is $O(nd)$. The approach scales well with respect to the sample size, but may slow down significantly in the case of very large dimension. In Section \ref{sec:run-time}, we further analyze the time complexity empirically for all the methods under consideration.
\subsection{Illustration of (CBIL)} \label{sec:cbil-exp} In this section, we illustrate the value of \eqref{eq:w-cbound} evaluated on the unlabeled examples pseudo-labeled by \texttt{MSLA}. We study how the bound's value is penalized by the mislabeling model, so we empirically compare it with the oracle C-bound \eqref{eq:prob-cbound} evaluated as if the labels of the considered unlabeled data were known.
To do so, we compute the value of the two bounds varying the number of examples used for evaluation with respect to the prediction confidence: the pseudo-labeled examples are sorted by the value of the prediction vote in the descending order, and we keep only the first $\rho\%$ of the examples for $\rho \in \{20, 40, 60, 80, 100\}$.
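A sketch of this selection step (the function name is ours):
\begin{verbatim}
import numpy as np

def most_confident_indices(votes, rho):
    """Indices of the rho% pseudo-labeled examples with the largest prediction vote.

    votes -- (u,) prediction votes v(x, B_Q(x)) of the pseudo-labels
    rho   -- percentage in {20, 40, 60, 80, 100}
    """
    order = np.argsort(-votes)                      # descending by confidence
    keep = int(np.ceil(len(votes) * rho / 100.0))
    return order[:keep]
\end{verbatim}
Both \eqref{eq:w-cbound} and the oracle C-bound are then evaluated on the retained subset only.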
\begin{figure}
\caption{\eqref{eq:w-cbound} and Oracle C-Bound when varying the number of pseudo-labels on 4 data sets. We keep the most confident one (with respect to prediction vote) from $20\%$ to $100\%$.}
\label{fig:2}
\end{figure}
We use the votes of the current classifier and expect that with increase of $\rho$ we have more mislabels, so the \eqref{eq:w-cbound} is more penalized. In \eqref{eq:w-cbound}, we use the true value of the mislabeling matrix (i.e., evaluated using the labels of unlabeled data) for clear illustration of the C-bound's penalization. In Section \ref{sec:concl}, we discuss the possible estimations of the mislabeling matrix.
The experimental results on 4 data sets \texttt{HAR}, \texttt{Isolet}, \texttt{Letter} and \texttt{MNIST} are illustrated in Figure \ref{fig:2}.
As expected, the classifier makes mistakes mostly on low class votes, so the error increases when $\rho$ grows. One can see that on \texttt{Isolet}, \texttt{HAR} and \texttt{Letter} \eqref{eq:w-cbound} is close to the oracle C-bound for small $\rho$, since most of the pseudo-labels are correct. When more noisy pseudo-labels are included, the difference between the two values becomes more evident, leading \eqref{eq:w-cbound} to be more pessimistic. This is probably connected with the choice of the mislabeling error model \eqref{eq:mislab-model}, which is class-related and not instance-related. Although we lose some flexibility, the class-related mislabeling matrix is easier to estimate in practice. Finally, for \texttt{MNIST}, the two bounds are very close to each other, and the mislabeling is occasional, which agrees with Table \ref{tab:multi-class-exp-res} as pseudo-labels are very helpful on this data set.
\section{Conclusion and Future Work} \label{sec:concl}
In this paper, we proposed a new probabilistic framework for the multi-class semi-supervised learning. At first, we derived a bound for the transductive conditional risk of the majority vote classifier. This probabilistic bound is based on the distribution of the class vote over unlabeled examples for a predicted class. We deduced corresponding bounds on the confusion matrix norm and the error rate as a corollary and determined when the bounds are tight.
Then, we proposed a multi-class self-learning algorithm where the threshold for selecting unlabeled data to pseudo-label is automatically found from minimization of the transductive bound on the majority vote error rate. From the numerical results, it came out that the self-learning algorithm is sensitive to the supervised performance of the base classifier, but it can better pass the scale and significantly outperform the case when the threshold is manually fixed.
However, the pseudo-labels produced by self-learning are imperfect, so we proposed a mislabeling error model to take explicitly into account these mislabeling errors. We established the connection between the true and the imperfect output and consequently extended the C-bound to imperfect labels, and derived a PAC-Bayesian Theorem for controlling the sample effect. The proposed bound allowed us to evaluate the performance of the learning model after pseudo-labeling the unlabeled data.
We illustrated the influence of the mislabeling error model on the bound's value on several real data sets.
We raise several open practical questions, which we detail below and leave as a subject for future work.\\ Firstly, the proposed self-learning policy has been experimentally validated when it is coupled with the random forest, but it would be interesting to test also with deep learning methods. This, however, is not straightforward. It is well known that the modern neural networks are not well calibrated, and examples are often misclassified with a high prediction vote \citep{Guo:2017}. This is a significant limitation in our case, since we make an assumption that the classifier makes its mistakes on examples with low prediction votes, which is used for the bound's approximation. Possible solutions include the use of neural network ensembles or temperature scaling.
\\ Secondly, further analysis of the learning model learned on pseudo-labels is perplexing due to the so-called \textit{confirmation bias}: at every iteration, the self-learning includes into the training set unlabeled examples with highly confident predictions, which arise from classifier's overconfidence to its initial decisions that could be erroneous. This implies that the hypotheses will have small disagreement on the unlabeled set after pseudo-labeling, so the votes are no more adequate for measuring prediction confidence. A correct estimation of mislabeling probabilities or changing the way self-learning is learned are possible solutions.\\
Thirdly, \eqref{eq:w-cbound} requires in practice the estimation of the mislabeling matrix, which is a complex problem, but an active field of study \citep{Natarajan:2013}. Most of these studies tackle this problem from an algorithmic point of view: for example, in the semi-supervised setting, \cite{Krithara:2008} learn the mislabeling matrix together with the classifier parameters through the classifier likelihood maximization for document classification;
in the supervised setting, a common approach is to detect anchor points whose labels are surely true \citep{Scott:2015}. A potential idea would be to transfer this idea to the semi-supervised case in order to detect the anchor points in the unlabeled set and use them together with the labeled set for correct estimation of the noise in pseudo-labels; this may require additional assumptions such as the existence of clusters \citep{Rigollet:2007,Maximov:2018} or manifold structure \citep{Belkin:2004}.
We also point out possible applications of \eqref{eq:w-cbound}. At first, the bound can be used for model selection tasks as semi-supervised feature selection \citep{Sheikhpour:2017}. Since minimization of the C-bound implies simultaneously maximization of the margin mean and minimization of the margin variance, \eqref{eq:w-cbound} would guide a feature selection algorithm to choose an optimal feature subset based on the labeled and the pseudo-labeled sets.
\\ Next, \eqref{eq:w-cbound} can be used as a criterion to learn the posterior $Q$ in the semi-supervised setting. This issue is actively studied in the supervised context, e.g., \cite{Roy:2016,Bauvin:2020} have developed boosting-based C-bound optimization algorithms.
\\ It should be noticed that for these two applications, the main objective is to rank models, so the best model has the minimal error on the unlabeled set. Hence, the bound analysis goes beyond the classical question of tightness: the tightest bound does not always imply the minimal error, and a bound relaxation can have a positive effect (see Appendix \ref{sec:relax_bound}).
\appendix \section{Tools for Section \ref{sec:tr-study}} \subsection{Tools for Theorem 3.2} \label{AppendixProofLemma} \begin{proof}[Proof of Lemma \ref{lem:connection-Gibbs-Bayes-multi}] First, we obtain Eq. \eqref{eq:lemma:gibbs:multi}: \begin{align*}
R_\mathcal{U}(G_Q,i,j) &= \frac{1}{u_i} \E_{h\sim Q}\sum_{\mathbf{x}\in X_\mathcal{U}} P(Y=i|X=\mathbf{x})\I{h(\mathbf{x}) = j} = \frac{1}{u_i} \sum_{\mathbf{x}\in X_\mathcal{U}} P(Y=i|X=\mathbf{x})v_Q(\mathbf{x},j) \\
&\geq \frac{1}{u_i} \sum_{\mathbf{x}\in X_\mathcal{U}} P(Y=i|X=\mathbf{x})v_Q(\mathbf{x},j)\I{B_Q(\mathbf{x})=j}\\
&= \frac{1}{u_i} \sum_{t=1}^{N_j}\sum_{\mathbf{x}\in X_\mathcal{U}} \left(P(Y=i|X=\mathbf{x})\I{B_Q(\mathbf{x})=j}\I{v_Q(\mathbf{x},j)=\gamma^{(t)}_j}\right)\gamma^{(t)}_j= \sum_{t=1}^{N_j} b_{i,j}^{(t)}\gamma^{(t)}_j. \end{align*}
Then, we deduce Eq. \eqref{eq:lemma:bayes:multi}: \begin{align*}
R_\mathcal{U\wedge\bm{\theta}}(B_Q,i,j) &= \frac{1}{u_i} \sum_{\mathbf{x}\in X_\mathcal{U}} P(Y=i|X=\mathbf{x})\I{B_Q(\mathbf{x}) = j}\I{v_Q(\mathbf{x},j)\geq \theta_j} \\
&= \frac{1}{u_i} \sum_{t=1}^{N_j}\sum_{\mathbf{x}\in X_\mathcal{U}} P(Y=i|X=\mathbf{x})\I{B_Q(\mathbf{x}) = j}\I{v_Q(\mathbf{x},j)=\gamma^{(t)}_j}\I{\gamma^{(t)}_j\geq \theta_j} \\
&= \frac{1}{u_i} \sum_{t=k_j+1}^{N_j}\sum_{\mathbf{x}\in X_\mathcal{U}} P(Y=i|X=\mathbf{x})\I{B_Q(\mathbf{x}) = j}\I{v_Q(\mathbf{x},j)=\gamma^{(t)}_j} = \sum_{t=k_j+1}^{N_j} b_{i,j}^{(t)}. \end{align*}
\end{proof}
\begin{lem}[Lemma 4 in \citet{Amini:2008}] \label{lem:sol-lin-prog} Let $(g_i)_{i \in \{ 1,\ldots,N\}}$ be such that $0<g_1<\dots<g_N\leq 1$. Consider also $p_i\geq 0$ for each $i\in\{1,\dots,N\}$, $B\geq 0$, $k\in\{1,\dots,N\}$. Then, the optimal solution of the linear program: \[ \begin{cases} \max_{\mbf{q}:=(q_1,\dots,q_N)} F(\mbf{q}) := \max_{q_1,\dots,q_N}\sum_{i=k+1}^N q_i\\ 0\leq q_i\leq p_i\quad \forall i \in \{ 1,\ldots,N\}\\ \sum_{i=1}^N q_i g_i\leq B \end{cases} \] will be $\mbf{q}^*$ defined as, for all $i\in \{ 1,\ldots,N\}$, $q^*_i=\min\left(p_i, \floor*{\frac{B-\sum_{j<i}q^*_jg_j}{g_i}}_+\right)\I{i>k}$; where, the sign $\floor{\cdot}_+$ denotes the positive part of a number, $\floor*{x}_+ = x\cdot \I{x>0}$.
\end{lem}
\begin{proof}[Proof of Lemma \ref{lem:sol-lin-prog}]
It can be seen that the first $k$ target variables should be zero for the optimal solution. Indeed, they do not explicitly influence the objective function $F$. However, the terms $g_iq_i$ for $i \in \{1,\ldots, k\}$ are non-negative, so their increase leads to smaller values of $q_i$ for $i \in \{k+1,\ldots, N\}$, which in turn decrease the value of $F$. Because of this, we look for a solution in the space $\mathcal{O}=\{0\}^k\times \prod_{i=k+1}^N [0,p_i]$. We aim to show that there is a unique optimal solution $\mbf{q}^*$ in $\mathcal{O}$.
\textbf{Existence.} It is known that the linear program under consideration is a convex, feasible and bounded problem. Hence, there is a feasible optimal solution $\mbf{q}^{opt}\in\prod_{i=1}^N[0, p_i]$. Then, we define $\mbf{q}^{opt,\mathcal{O}}\in\mathcal{O}$: \[ \begin{cases} q_i^{opt,\mathcal{O}} = q_i^{opt} & \text{if } i>k\\ q_i^{opt,\mathcal{O}} = 0 & \text{otherwise}. \end{cases} \] This solution is feasible and attains the same objective value: $F(\mbf{q}^{opt,\mathcal{O}}) = F(\mbf{q}^{opt})$. Hence, there exists an optimal solution in $\mathcal{O}$. Further, the optimal solution is again designated as $\mbf{q}^*$.
\textbf{Unique representation.} We would like to find a representation of $\mbf{q}^*$ that is, in fact, unique. Before doing it, one can notice that for $\mbf{q}^*$ the following equation is necessarily true: \[
\sum_{i=1}^N q_i^*g_i = B. \] Indeed, as $g_i$ are fixed, $\mbf{q}^*$ would not be optimal otherwise, and there would exist $\tilde{\mbf{q}}$ such that $\sum_{i=1}^N \tilde{q}_ig_i > \sum_{i=1}^N q_i^*g_i $, which implies $F(\tilde{\mbf{q}})>F(\mbf{q}^*)$.
Let's consider the lexicographic order $\succeq$: {\small{ \begin{multline*} \forall(\mbf{q},\mbf{q}')\in\R^N\times\R^N, \mbf{q}\succeq \mbf{q}' \Leftrightarrow \left\{\mathcal{I}(\mbf{q}',\mbf{q}) = \emptyset\right\}\ \vee \left\{\mathcal{I}(\mbf{q}',\mbf{q}) \not= \emptyset \wedge \min\left(\mathcal{I}(\mbf{q},\mbf{q}')\right)<\min\left(\mathcal{I}(\mbf{q}',\mbf{q})\right)\right\}, \end{multline*} }}
where $\mathcal{I}(\mbf{q}',\mbf{q}) = \{i|q'_i>q_i\}$.
We aim to show that the optimal solution is actually the greatest feasible solution in $\mathcal{O}$ for $\succeq$. Let $\mathcal{M}$ be the set $\{i>k|q^*_i<p_i\}$. Then, there are two cases: \begin{itemize}
\item $\mathcal{M}=\emptyset$. It means that for all $i>k$, $q^*_i=p_i$ and $\mbf{q}^*$ is then the maximal element for $\succeq$ in $\mathcal{O}$.
\item $\mathcal{M}\not=\emptyset$. By contradiction, suppose $\mbf{q}^*$ is not the greatest feasible solution for $\succeq$, i.e., there is a feasible $\mbf{q}$ such that $\mbf{q}\succ \mbf{q}^*$. Let $K=\min\{i>k|q^*_i<p_i\}$ and $M = \min\left(\mathcal{I}(\mbf{q},\mbf{q}^*)\right)$.
\begin{enumerate}
\item $M\leq k$. Then, $q_M > q^*_M = 0$. It implies that $\mbf{q}\not\in\mathcal{O}$.
\item $k<M<K$. Then, $q_M > q^*_M = p_M$. The same, $\mbf{q}\not\in\mathcal{O}$.
\item $M\geq K$. Then, $F(\mbf{q})>F(\mbf{q}^*)$. But it means that $\sum_{i=1}^N q_ig_i > \sum_{i=1}^N q_i^*g_i = B$.
\end{enumerate} \end{itemize}
Hence, we conclude that if the solution is optimal then it is necessarily the greatest feasible solution for $\succeq$. Let's prove that if a solution is not the greatest feasible one then it can not be optimal. With this statement, uniqueness would be proven.
Consider $\mbf{q}\in\mathcal{O}$ such that $\mbf{q}^*\succ \mbf{q}$. \begin{itemize}
\item $\mathcal{I}(\mbf{q},\mbf{q}^*) = \emptyset$. Then, $F(\mbf{q}^*)>F(\mbf{q})$ and $\mbf{q}$ is not optimal.
\item $\mathcal{I}(\mbf{q},\mbf{q}^*) \not= \emptyset$. Let $K=\min\left(\mathcal{I}(\mbf{q}^*,\mbf{q})\right)$ and $M = \min\left(\mathcal{I}(\mbf{q},\mbf{q}^*)\right)$. Then, $q_M>q^*_M\geq 0$ and $K<M$.
Denote $\lambda=\min\left(q_M, \frac{g_K}{g_M}(p_K-q_K)\right)$ and define $\mbf{q}'$ by:
\[
q'_i = q_i,\ i\not\in\{K,M\}, \quad q'_K = q_K + \frac{g_M}{g_K}\lambda \quad q'_M = q_M-\lambda
\] \end{itemize} It can be observed that $\mbf{q}'$ satisfies the box constraints. Moreover, $F(\mbf{q}') = F(\mbf{q})+\lambda(g_M/g_K - 1)>F(\mbf{q})$ since $g_K<g_M$ and $\lambda>0$. Thus, $\mbf{q}$ is not optimal. Summing up, it is proven that there is the only optimal solution in $\mathcal{O}$ and it is the greatest feasible one for $\succeq$.
Then, let's obtain an explicit representation of this solution. As it is the greatest one in the lexicographical order, we assign $q_i$ for $i>k$ to the maximal feasible values, which are $p_i$. This continues until the moment when $\sum_{j\leq i} q_jg_j$ reaches $B$. Denote by $I$ the index such that $\sum_{i=k+1}^{I-1} p_ig_i\leq B$, but $\sum_{i=k+1}^{I} p_ig_i\geq B$. \begin{itemize}
\item $\sum_{i=k+1}^{I-1} p_ig_i=B$. Then, $q_i = 0$ for $i\geq I$. It can also be written in the following way:
$$q_i = \floor*{\frac{B-\sum_{j<i}q_jg_j}{g_i}}_+, \qquad i\geq I$$.
\item $\sum_{i=k+1}^{I-1} p_ig_i<B$. Then, $q_I$ is equal to the residual:
$$q_I = \frac{B-\sum_{j<I}q_jg_j}{g_I} = \floor*{\frac{B-\sum_{j<I}q_jg_j}{g_I}}_+.$$
The remaining $q_i$, $i>I$, are set to 0. \end{itemize} \end{proof}
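For completeness, a short Python sketch of the closed-form solution of Lemma \ref{lem:sol-lin-prog} (the greedy assignment described above; the function name is ours and indices are zero-based):
\begin{verbatim}
import numpy as np

def greedy_lp_solution(g, p, B, k):
    """Optimal solution of the linear program in the lemma above (zero-based indices).

    g -- strictly increasing positive coefficients g_1 < ... < g_N (<= 1)
    p -- box upper bounds p_i >= 0
    B -- budget on the weighted sum
    k -- the first k variables are forced to 0
    """
    g, p = np.asarray(g, dtype=float), np.asarray(p, dtype=float)
    q, budget = np.zeros_like(p), float(B)
    for i in range(k, len(p)):                 # indices k+1, ..., N of the lemma
        q[i] = min(p[i], max(budget / g[i], 0.0))
        budget -= q[i] * g[i]
    return q
\end{verbatim}
Its output can be checked against a generic linear-programming solver on random instances.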
\subsection{Tools for Proposition \ref{prop:tight-bayes-multi}} \begin{lem} \label{lem:lem-for-proposition} For all $\mathbf{x}\in\mathrm{X}_{\sss\mathcal{U}}$, for all $(i,j)\in \{1,\ldots,K\}^2,$ the following inequality holds: \begin{multline} \label{eq:prop-multi:1.1}
R_\mathcal{U}(B_Q,i,j) \geq \frac{1}{u_i}\sum_{\mathbf{x}\in\mathrm{X}_{\sss\mathcal{U}}}P(Y=i|X=\mathbf{x})\I{B_Q(\mathbf{x})=j}\I{v_Q(\mathbf{x},j)<\gamma^*} \\+ \frac{1}{\gamma^*}\floor*{\floor{K_{i,j}-M_{i,j}^<(\gamma^*)}_+ - r_{i,j}}_+ + r_{i,j}, \end{multline}
where $\gamma^* := \sup\{\gamma\in\Gamma_j|\sum_{\mathbf{x}\in\mathrm{X}_{\sss\mathcal{U}}}P(Y=i|X=\mathbf{x})\I{B_Q(\mathbf{x})=j}\I{v_Q(\mathbf{x},j)=\gamma}/u_i> \tau\}$.
\end{lem} \begin{proof} Denote $\gamma^*=\gamma_j^{(p)}$. According to Lemma \ref{lem:connection-Gibbs-Bayes-multi},
$K_{i,j} = \sum_{n=1}^{N_j} b^{(n)}_{i,j}\gamma^{(n)}_j$
, where $b_{i,j}^{(n)} := \frac{1}{u_i}\sum_{\mathbf{x}\in\mathrm{X}_{\sss\mathcal{U}}}P(Y=i|X=\mathbf{x})\I{B_Q(\mathbf{x})=j}\I{v_Q(\mathbf{x},j)=\gamma^{(n)}_j}$. We can express $b_{i,j}^{(p)}$ in the following way: \[ b_{i,j}^{(p)} = \frac{K_{i,j} - \sum_{n=1}^{p-1} b^{(n)}_{i,j}\gamma^{(n)}_j - \sum_{n=p+1}^{N_j} b^{(n)}_{i,j}\gamma^{(n)}_j}{\gamma_j^{(p)}} = \frac{K_{i,j} - \sum_{n=1}^{p-1} b^{(n)}_{i,j}\gamma^{(n)}_j - r_{i,j}}{\gamma_j^{(p)}}. \]
Remind $B^{(n)}_{i,j} = \frac{1}{u_i}\sum_{\mathbf{x}\in\mathrm{X}_{\sss\mathcal{U}}}P(Y=i|X=\mathbf{x})\I{v_Q(\mathbf{x},j)=\gamma^{(n)}_j}$. From this we derive the following: $$-\sum_{n=1}^{p-1} b^{(n)}_{i,j}\gamma^{(n)}_j \geq -\sum_{n=1}^{p-1} B^{(n)}_{i,j}\gamma^{(n)}_j = - M_{i,j}^<(\gamma_j^{(p)})= - M_{i,j}^<(\gamma^*).$$ Taking into account this as well as $b_{i,j}^{(p)}\geq 0$, we deduce a lower bound for $b_{i,j}^{(p)}$: \begin{equation} \label{eq:prop-multi:2} b_{i,j}^{(p)}\geq\frac{1}{\gamma^*}\floor{K_{i,j}-M_{i,j}^<(\gamma^*) - r_{i,j}}_+ = \frac{1}{\gamma^*}\floor*{\floor{K_{i,j}-M_{i,j}^<(\gamma^*)}_+ - r_{i,j}}_+. \end{equation} Also, taking into account Lemma \ref{lem:connection-Gibbs-Bayes-multi}, one can notice that: \begin{align} \label{eq:prop-multi:3} R_\mathcal{U}(B_Q,i,j) &= \sum_{n=1}^{N_j} b_{i,j}^{(n)} =\sum_{n=1}^{p-1}b_{i,j}^{(n)} + b_{i,j}^{(p)} + \sum_{n=p+1}^{N_j}b_{i,j}^{(n)} \nonumber \\
&\geq \frac{1}{u_i}\sum_{\mathbf{x}\in\mathrm{X}_{\sss\mathcal{U}}}P(Y=i|X=\mathbf{x})\I{B_Q(\mathbf{x})=j}\I{v_Q(\mathbf{x},j)<\gamma^*} + b_{i,j}^{(p)} + r_{i,j}, \end{align} since $\sum_{n=p+1}^{N_j}b_{i,j}^{(n)}\geq \sum_{n=p+1}^{N_j}b_{i,j}^{(n)}\gamma_j^{(n)}$. Combining Eq. \eqref{eq:prop-multi:2} and Eq. \eqref{eq:prop-multi:3} we infer Eq. \eqref{eq:prop-multi:1.1}: \begin{multline*}
R_\mathcal{U}(B_Q,i,j) \geq \frac{1}{u_i}\sum_{\mathbf{x}\in\mathrm{X}_{\sss\mathcal{U}}}P(Y=i|X=\mathbf{x})\I{B_Q(\mathbf{x})=j}\I{v_Q(\mathbf{x},j)<\gamma^*} \\+ \frac{1}{\gamma^*}\floor*{\floor{K_{i,j}-M_{i,j}^<(\gamma^*)}_+ - r_{i,j}}_+ + r_{i,j}. \end{multline*} \end{proof}
\section{Tools for Section \ref{sec:c-bound}} \label{sec:appendix-cbound} \subsection{Tools for Theorem \ref{thm:prob-cbound}} \begin{lem}[Cantelli-Chebyshev inequality, Ex. 2.3 in \cite{MassartBook}] \label{lem:cantelli-chebyshev}
Let $Z$ be a random variable with the mean $\mu$ and the variance $\sigma^2$. Then, for every $a>0$, we have:
\[
P(Z\leq \mu - a) \leq \frac{\sigma^2}{\sigma^2 + a^2}.
\] \end{lem} \subsection{Tools for Theorem \ref{thm:pac-bayesian-cbound}}
\subsubsection{Bounds for the Mislabeling Matrix' Entries} We remind that the imperfection is summarized through the mislabeling matrix $\mathbf{P} = (p_{i,j})_{1\leq i,j \leq K}$ with \begin{align*}
p_{i, j} := P(\hat Y=i|Y=j) \quad\text{ for all } (i,j)\in \{1,\dots,K\}^2 \end{align*} such that $\sum_{i=1}^K p_{i,j} = 1$. Also, recall that $\delta(\mbf{x}) := p_{B_Q(\mbf{x}), B_Q(\mbf{x})} - \max_{j\in\mathcal{Y}\setminus\{B_Q(\mbf{x})\}} p_{B_Q(\mbf{x}), j}$ and $\alpha(\mbf{x})=p_{B_Q(\mbf{x}), B_Q(\mbf{x})}$.
\begin{prop} \label{prop:pac-bound-mislab-mat} Let $\mbf{P}$ be the mislabeling matrix, and assume that $p_{i,i}> p_{i,j}$ for all $i\neq j$. For any $\epsilon \in (0,1]$, with probability $1-\epsilon$ over the choice of the $l$ sample, for all $(j,c)\in \{1,\ldots,K\}^2$, for all $\mbf{x}\in\mathcal{X}$, \begin{align}
&\hat{p}_{j,c} - r(l_c) \leq p_{j,c} \leq \hat{p}_{j,c} + r(l_c), \label{eq:pac-mislab-entry}\\
&\alpha(\mbf{x}) \leq \hat{\alpha}(\mbf{x}) + r(l_{c_\mbf{x}}), \label{eq:pac-alpha}\\
&\frac{1}{\delta(\mbf{x})} \leq \frac{1}{\hat{\delta}(\mbf{x}) - r(l_{c_\mbf{x}}) - r(l_{j_\mbf{x}})},\ \text{ if } \hat{\delta}(\mbf{x}) \geq r(l_{c_\mbf{x}}) + r(l_{j_\mbf{x}}), \label{eq:pac-delta} \end{align} where \begin{itemize}
\item $r(l_k) = \sqrt{\frac{1}{2l_k}\ln\frac{2\sqrt{l_k}}{\epsilon}}$,
\item $l_k = \sum_{i = 1}^{l}\I{y_i=k}$ is the number of labeled training examples from the true class $k$,
\item $c_\mbf{x}:=B_Q(\mbf{x})$, $j_\mbf{x}:=\argmin_{j\in\mathcal{Y}\setminus\{c_\mbf{x})\}}l_j$,
\item $\hat{p}_{j,c}$, $\hat{\alpha}(\mbf{x})$ and $\hat{\delta}(\mbf{x})$ are empirical estimates respectively of $p_{j,c}$, $\alpha(\mbf{x})$ and $\delta(\mbf{x})$ based on the available $l$ sample. \end{itemize} \end{prop} \begin{proof} Let $S_{j}$ denote the subset of the available examples for which the true class is $j$. Consider the non-negative random variable $\exp\left\{2 l_j(\hat{p}_{i,j}-p_{i,j} )^2\right\}$.
From the Markov inequality we obtain that the following holds with probability at least $1-\epsilon$ over $S_j\sim P(\mbf{X}|Y=j)^{l_j}$: \begin{align} \label{eq:th-b7-1} \exp\left\{2 l_j(\hat{p}_{i,j}-p_{i,j} )^2\right\} \leq \frac{1}{\epsilon}\E_{S_j} \exp\left\{2 l_j(\hat{p}_{i,j}-p_{i,j} )^2 \right\}. \end{align} By successively applying Lemma \ref{lem:pinsker} and Lemma \ref{prop:Maurer}, we deduce that \begin{align}
\E_{S_j} \exp\left\{2 l_j(\hat{p}_{i,j}-p_{i,j} )^2 \right\} &\leq \E_{S_j} \exp\left\{ l_j\cdot kl(\hat{p}_{i,j}||p_{i,j} ) \right\}
\leq 2\sqrt{l_j}. \label{eq:th-b7-2} \end{align}
Combining Eq. \eqref{eq:th-b7-1} and Eq. \eqref{eq:th-b7-2}, we infer $2 l_j(\hat{p}_{i,j}-p_{i,j} )^2\leq \ln\left(2\sqrt{l_j}/\epsilon\right)$. Eq. \eqref{eq:pac-mislab-entry} is directly obtained from the last inequality, and hence, we also derive Eq. \eqref{eq:pac-alpha}. To prove Eq. \eqref{eq:pac-delta}, let us define $$k_{\mbf{x}} := \argmax_{k\in\mathcal{Y}\setminus\{B_Q(\mbf{x})\}} p_{c_\mbf{x}, k}, \qquad \hat{k}_{\mbf{x}} := \argmax_{k\in\mathcal{Y}\setminus\{B_Q(\mbf{x})\}} \hat{p}_{c_\mbf{x}, k}.$$ Then, we write: \begin{align*}
\frac{1}{\delta(\mbf{x})} &= \frac{1}{p_{c_\mbf{x}, c_\mbf{x}}-p_{c_\mbf{x}, k_\mbf{x}}}\leq \frac{1}{p_{c_\mbf{x}, c_\mbf{x}}-p_{c_\mbf{x}, k_\mbf{x}} - r(l_{c_\mbf{x}}) - r(l_{k_\mbf{x}})}\\
&\leq \frac{1}{p_{c_\mbf{x}, c_\mbf{x}}-p_{c_\mbf{x}, \hat{k}_\mbf{x}} - r(l_{c_\mbf{x}}) - r(l_{j_\mbf{x}})} =
\frac{1}{\hat{\delta}(\mbf{x}) - r(l_{c_\mbf{x}}) - r(l_{j_\mbf{x}})}. \end{align*} These transitions hold only when the denominator is positive, which is ensured if $\hat{\delta}(\mbf{x}) \geq r(l_{c_\mbf{x}}) + r(l_{j_\mbf{x}})$. \end{proof}
\begin{lem}[Pinsker’s Inequality for Bernoulli random variables, Theorem 4.19 in \cite{MassartBook}] \label{lem:pinsker} For all $p_1,p_2\in[0,1]^2$, \begin{align*}
&2(p_2\!-\!p_1)^2 \leq kl(p_2||p_1)\\
&kl(p_2||p_1)\!:=\! p_2\ln\frac{p_2}{p_1}+(1\!-\!p_2)\ln\frac{1\!-\!p_2}{1\!-\!p_1} = \kld{P_2}{P_1}, \end{align*} where $P_2$ and $P_1$ are Bernoulli distributions with parameters $p_2$ and $p_1$ respectively. \end{lem}
\begin{lem}[Theorem 1 in \cite{Maurer:2004} and Lemma 19 in \cite{Germain:2015}] \label{prop:Maurer} Let $\mathbf{X}=(X_1,\dots, X_n)$ be a random vector, whose components $X_i$ are i.i.d. with values in $[0,1]$ and expectation $\mu$. Let $\mathbf{X'}=(X_1',\dots, X_n')$ denote a random vector, where each $X_i'$ is the Bernoulli random variable with the same expectation as the corresponding $X_i$: $P(X_i'=1)=\E X_i'=\E X_i=\mu,\ \forall i\in\{1,\dots,n\}$. Then, \begin{align*}
\E\left[e^{n\kld{\bar{X}}{\mu}}\right]\leq \E\left[e^{n\kld{\bar{X}'}{\mu}}\right] \leq 2\sqrt{n}, \end{align*} where $\bar{X} = \frac{1}{n}\sum_{i=1}^n X_i$ and $\bar{X}' = \frac{1}{n}\sum_{i=1}^n X_i'$. \end{lem}
\subsubsection{Lower Bound of the First Moment of the Margin} \begin{prop} \label{prop:pac-bayes-bound-first-moment}
Let $\hat{M}$ be a random variable such that $[\hat{M}|\mbf{X}=\mbf{x}]$ is a discrete random variable that is equal to the margin $M_Q(\mbf{x}, j)$ with probability $P(\hat{Y}\!=\!j|\mbf{X}\!=\!\mbf{x})$, $j\in\{1,\dots,K\}$. Let $\mu^{\hat{M}, \mbf{P}}_1$ be defined as in Theorem \ref{thm:w-cbound}. Given the conditions of Proposition \ref{prop:pac-bound-mislab-mat}, for any set of classifiers $\mathcal{H}$, for any prior distribution $P$ on $\mathcal{H}$ and any $\epsilon \in (0,1]$, with a probability at least $1-\epsilon$ over the choice of the $n$ sample, for every posterior distribution $Q$ over $\mathcal{H}$ \begin{align*}
\mu^{\hat{M}, \mbf{P}}_1 \geq \bar\mu^{S}_1 - B_1 \sqrt{\frac{2}{n}\left[\kld{Q}{P} + \ln\frac{2\sqrt{n}}{\epsilon}\right]}, \end{align*} where \begin{itemize}
\item $\bar\mu^{S}_1=\frac{1}{n}\sum_{i=1}^n\frac{1}{\tilde{\delta}(\mbf{x}_i)}\sum_{c=1}^K M_Q(\mbf{x}_i, c) P(Y\!=\!c|\mbf{X}\!=\!\mbf{x}_i)$ is the empirical weighted margin mean based on the available $n$-sample $S$,
\item $\tilde{\delta}(\mbf{x}):=\hat{\delta}(\mbf{x}) - r(l_{c_\mbf{x}}) - r(l_{j_\mbf{x}})$,
\item $B_1 := \max_{\mbf{x}\in\mathcal{X}}|(1/\tilde{\delta}(\mbf{x}))\sum_{c=1}^K M_Q(\mbf{x}, c) P(Y\!=\!c|\mbf{X}\!=\!\mbf{x})|$,
\item $KL$ denotes the Kullback–Leibler divergence. \end{itemize} \end{prop} \begin{proof}[Proof]
Further, we denote the available sample with imperfect labels by $S$.
Let $\mu^{\hat{M}, \mbf{P}, h}_1$ and $\bar\mu^{S, h}_1$ be the random variables such that $\mu^{\hat{M}, \mbf{P}}_1 = \E_{h\sim Q} \mu^{\hat{M}, \mbf{P}, h}_1$ and $\bar\mu^{S}_1 = \E_{h\sim Q} \bar\mu^{S, h}_1$.
We apply the Markov inequality to $\E_{h\sim P}\exp\left\{\frac{n}{2 B_1^2}(\bar\mu^{S, h}_1-\mu^{\hat{M},\mbf{P}, h}_1)^2\right\}$, which is a non-negative random variable, and obtain that with probability at least $1-\epsilon$ over $S\sim P(\mbf{X}, \hat{Y})^n$: \begin{align}
\E_{h\sim P}\exp\left\{\frac{n}{2 B_1^2}(\bar\mu^{S, h}_1-\mu^{\hat{M},\mbf{P}, h}_1)^2\right\}
\leq \frac{1}{\epsilon} \E_{S} \E_{h\sim P}\exp\left\{\frac{n}{2 B_1^2}(\bar\mu^{S, h}_1\!-\!\mu^{\hat{M},\mbf{P}, h}_1)^2\right\}. \label{eq:markov} \end{align} Since the prior distribution $P$ over $\mathcal{H}$ is independent on $S$, we can swap $\E_{S}$ and $\E_{h\sim P}$. One can notice that $$\frac{1}{2 B_1^2}(\bar\mu^{S, h}_1-\mu^{\hat{M},\mbf{P}, h}_1)^2 = 2\left[\frac{1}{2}(1\!-\!\frac{\bar\mu^{S, h}_1}{B_1})\!-\!\frac{1}{2}(1\!-\!\frac{\mu^{\hat{M},\mbf{P}, h}_1}{B_1})\right]^2,$$ which is the squared of the difference of two random variables that are both between 0 and 1. Then, we successively apply Lemma \ref{lem:pinsker} and Lemma \ref{prop:Maurer} deriving that: \begin{align*}
&\E_{h\sim P} \E_{S} \exp\left\{2n\left[\frac{1}{2}\left(1\!-\!\frac{\bar\mu^{S, h}_1}{B_1}\right)\!-\!\frac{1}{2}\left(1\!-\!\frac{\mu^{\hat{M},\mbf{P}, h}_1}{B_1}\right)\right]^2\right\} \\
&\leq \E_{h\sim P} \E_{S} \exp\left\{n\cdot kl\left(\frac{1}{2}(1\!-\!\frac{\bar\mu^{S, h}_1}{B_1})\right|\left|\frac{1}{2}(1\!-\!\frac{\mu^{\hat{M},\mbf{P}, h}_1}{B_1})\right) \right\}
\leq \E_{h\sim P} 2\sqrt{n} = 2\sqrt{n}. \end{align*}
We apply this result for Eq. \eqref{eq:markov}, and by taking the natural logarithm from the both sides we obtain that: \begin{align} \label{eq:bounded-by-2sqrtn}
\ln\left(\E_{h\sim P}\exp\left\{\frac{n}{2 B_1^2}(\bar\mu^{S, h}_1\!-\!\mu^{\hat{M},\mbf{P}, h}_1)^2\right\}\right) \leq \ln\left(\frac{2\sqrt{n}}{\epsilon}\right). \end{align}
Using the change of measure (Lemma \ref{lem:seldin-lem}) and the Jensen's inequalities, we derive that: \begin{align*} \ln\left(\E_{h\sim P}\exp\left\{\frac{n}{2 B_1^2}(\bar\mu^{S, h}_1-\mu^{\hat{M},\mbf{P}, h}_1)^2\right\}\right) &\geq \E_{h\sim Q} \frac{n}{2 B_1^2}(\bar\mu^{S, h}_1-\mu^{\hat{M},\mbf{P}, h}_1)^2 - \kld{Q}{P}\\ &\geq \frac{n}{2 B_1^2}(\E_{h\sim Q}\bar\mu^{S, h}_1- \E_{h\sim Q}\mu^{\hat{M},\mbf{P}, h}_1)^2 - \kld{Q}{P}. \end{align*}
Combining with Eq. \eqref{eq:bounded-by-2sqrtn}, we derive: \begin{align} \label{eq:almost-final} \frac{n}{2 B_1^2}(\bar\mu^{S}_1- \mu^{\hat{M},\mbf{P}}_1)^2 \leq \ln\left(\frac{2\sqrt{n}}{\epsilon}\right) + \kld{Q}{P}. \end{align} The final inequality is directly inferred from Eq. \eqref{eq:almost-final}.
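For completeness, the last step can be spelled out: solving Eq.~\eqref{eq:almost-final} for the difference of the two quantities gives \begin{align*} \left|\bar\mu^{S}_1- \mu^{\hat{M},\mbf{P}}_1\right| \leq B_1 \sqrt{\frac{2}{n}\left[\kld{Q}{P} + \ln\frac{2\sqrt{n}}{\epsilon}\right]}, \end{align*} and keeping the lower bound on $\mu^{\hat{M},\mbf{P}}_1$ yields the statement of the proposition.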
\begin{lem}[Change of Measure Inequality \cite{Donsker:1975}] \label{lem:seldin-lem} For any measurable function $ \phi$ defined on the hypothesis space $\mathcal{H}$ and all distributions $P, Q$ on $\mathcal{H}$, the following inequality holds: \[ \E_{h\sim Q}\phi(h) \leq \kld{Q}{P} + \ln\E_{h\sim P}e^{\phi(h)}. \] \end{lem} \end{proof}
\subsubsection{Other Required Bounds}
\begin{prop} \label{prop:pac-bayes-bound-second-moment}
Let $\hat{M}$ be a random variable such that $[\hat{M}|\mbf{X}=\mbf{x}]$ is a discrete random variable that is equal to the margin $M_Q(\mbf{x}, j)$ with probability $P(\hat{Y}\!=\!j|\mbf{X}\!=\!\mbf{x})$, $j\in\{1,\dots,K\}$. Let $\mu^{\hat{M}, \mbf{P}}_2$ be defined as in Theorem \ref{thm:w-cbound}. Given the conditions of Proposition \ref{prop:pac-bound-mislab-mat}, for any set of classifiers $\mathcal{H}$, for any prior distribution $P$ on $\mathcal{H}$ and any $\epsilon \in (0,1]$, with probability at least $1-\epsilon$ over the choice of the $n$ sample, for every posterior distribution $Q$ over $\mathcal{H}$ \begin{align*}
\mu^{\hat{M}, \mbf{P}}_2 \leq \bar\mu^{S}_2 + B_2 \sqrt{\frac{2}{n}\left[2\kld{Q}{P} + \ln\frac{2\sqrt{n}}{\epsilon}\right]}, \end{align*} where \begin{itemize}
\item $\bar\mu^{S}_2=\frac{1}{n}\sum_{i=1}^n(1/\tilde{\delta}(\mbf{x}_i))\sum_{c=1}^K (M_Q(\mbf{x}_i, c))^2 P(Y\!=\!c|\mbf{X}\!=\!\mbf{x}_i)$ is the empirical weighted second margin moment based on the available $n$-sample $S$,
\item $\tilde{\delta}(\mbf{x}):=\hat{\delta}(\mbf{x}) - r(l_{c_\mbf{x}}) - r(l_{j_\mbf{x}})$,
\item $B_2 := \max_{x\in\mathcal{X}}|(1/\tilde{\delta}(\mbf{x}))\sum_{c=1}^K (M_Q(\mbf{x}, c))^2 P(Y\!=\!c|\mbf{X}\!=\!\mbf{x})|$,
\item $KL$ denotes the Kullback–Leibler divergence. \end{itemize} \end{prop}
\begin{proof} The proof is similar to the one given for Proposition \ref{prop:pac-bayes-bound-first-moment}, but relies on the extension of the change of measure inequality (Lemma \ref{lem:laviolette-2017}). \begin{lem}[Change of Measure Inequality for Pairs of Voters (Lemma 1 in \cite{Laviolette:2017})] \label{lem:laviolette-2017}
For any set of voters $\mathcal{H}$, for any distributions $P, Q$ on $\mathcal{H}$, and for any measurable function $\phi:\ \mathcal{H}\times\mathcal{H}\to\R$, the following inequality holds: \[ \E_{(h,h')\sim Q^2}\phi(h, h') \leq 2\kld{Q}{P} + \ln\E_{(h, h')\sim P^2}e^{\phi(h, h')}. \] \end{lem} \end{proof}
\begin{prop} \label{prop:pac-bound-psi} Given the conditions of Proposition \ref{prop:pac-bound-mislab-mat}, for any $\epsilon \in (0,1]$, with a probability at least $1-\epsilon$ over the choice of the $n$ sample, \begin{align*}
\psi_{\mbf{P}} \leq \frac{1}{n}\sum_{i=1}^n \frac{\hat\alpha(\mbf{x}_i)+r(l_{c_{\mbf{x}_i}})}{\hat\delta(\mbf{x}_i)-r(l_{c_{\mbf{x}_i}})-r(l_{j_{\mbf{x}_i}})} + B_3 \sqrt{\frac{2}{n} \ln\frac{2\sqrt{n}}{\epsilon}}, \end{align*} where $B_3 := \max_{\mbf{x}\in\mathcal{X}}[\hat\alpha(\mbf{x})+r(l_{c_\mbf{x}})]/[\hat\delta(\mbf{x})-r(l_{c_\mbf{x}})-r(l_{j_\mbf{x}})]$. \end{prop} \begin{proof} First, we take into consideration the result of Proposition \ref{prop:pac-bound-mislab-mat} and deduce that $\psi_{\mbf{P}} \leq \E_{\mbf{X}} [(\hat\alpha(\mbf{X})+r(l_{c_\mbf{X}}))/(\hat\delta(\mbf{X})-r(l_{c_\mbf{X}})-r(l_{j_\mbf{X}}))]$. The rest of the proof is similar to the ones given for Proposition \ref{prop:pac-bound-mislab-mat} and for Proposition \ref{prop:pac-bayes-bound-first-moment}. \end{proof}
\section{Additional Experiments} \subsection{Approximation of the Posterior Probabilities for Self-learning} \label{sec:posterior-estimation}
In this section, we analyze the behavior of \texttt{MSLA} depending on how the transductive bound given by Eq.~\eqref{eq:tr-bound-joint-bayes-multi} is evaluated. Since the posterior probabilities for unlabeled data are not known, we have proposed to estimate them as the votes of the base supervised classifier learned using the labeled data only (Sup. Estimation). This approach has been used in Section \ref{sec:num-exper} for running \texttt{MSLA}. We compare it with another strategy, which is to assign $P(Y=i|\mbf{X}=\mbf{x})=1/K,\ \forall \mbf{x}\in\mathrm{X}_{\sss\mathcal{U}},\ \forall i\in\{1,\dots,K\}$. This corresponds to the worst case, in which every class is equally probable for each example (Unif. Estimation). Finally, we provide the performance of \texttt{MSLA} when the labels of unlabeled data are given, which means that the transductive bound is evaluated exactly (Oracle). Table \ref{tab:tr-bound-prob-estim} reports the performance results. As we can see, the supervised approximation generally outperforms the uniform one (significantly on \texttt{MNIST}). This might be explained by the fact that the supervised votes may give some additional information on the most probable labels for each example. In addition, we have observed that on the last iterations the votes of \texttt{MSLA} tend to be biased, so such posteriors can play the role of a regularizer. The performance results of the oracle show that better estimation of the posteriors can give an improvement, though not significantly on most data sets. Note that the performance of the oracle is not perfect, because the true labels are used only for the bound estimation, and the votes are used for pseudo-labeling.
\label{sec:exp-prob-estim} \begin{table}[h]
\centering
{\scalebox{0.95}{
\begin{tabular}{c|ccc}
\toprule
\multirow{2}{*}{Data set} & \multicolumn{3}{c}{\texttt{MSLA}} \\
\cline{2-4}
& Unif. Estimation & Sup. Estimation & Oracle \\
\midrule
\texttt{Vowel} & .586 $\pm$ .029 & .586 $\pm$ .026 & .599 $\pm$ .028 \\
\midrule
\texttt{Protein} & .773 $\pm$ .034 & .781 $\pm$ .034 & .805 $\pm$ .036 \\
\midrule
\texttt{DNA} & .697 $\pm$ .079 & .702 $\pm$ .082 & .721 $\pm$ .09 \\
\midrule
\texttt{Page Blocks} & .965 $\pm$ .002 & .966 $\pm$ .002 & .966 $\pm$ .002 \\
\midrule
\texttt{Isolet} & .869 $\pm$ .015 & .875 $\pm$ .014 & .885 $\pm$ .012 \\
\midrule
\texttt{HAR} & .852 $\pm$ .025 & .854 $\pm$ .026 & .856 $\pm$ .022 \\
\midrule
\texttt{Pendigits} & .873 $\pm$ .024 & .884 $\pm$ .022 & .892 $\pm$ .016 \\
\midrule
\texttt{Letter} & .716 $\pm$ .013 & .717 $\pm$ .013 & .723 $\pm$ .012 \\
\midrule
\texttt{Fashion} & .722 $\pm$ .022 & .723 $\pm$ .023 & .728 $\pm$ .024 \\
\midrule
\texttt{MNIST} & .834 $\pm$ .016 & .857 $\pm$ .013 & .87 $\pm$ .012 \\
\midrule
\texttt{SensIT} & .722 $\pm$ .021 & .722 $\pm$ .021 & .722 $\pm$ .021 \\
\bottomrule
\end{tabular}}}
\caption{The performance comparison of \texttt{MSLA} depending on how the posterior probabilities are estimated in the evaluation of the transductive bound (Eq. \eqref{eq:tr-bound-joint-bayes-multi}).}
\label{tab:tr-bound-prob-estim} \end{table}
\subsection{Time} \label{sec:run-time}
In this section, we present the run-time of all the algorithms empirically compared in Section \ref{sec:msla-exp}. The results are reported in Table~\ref{tab:computationTime}. In general, the obtained run-time is consistent with the complexity analysis presented in Section \ref{sec:msla-exp}. \texttt{LS} and \texttt{QN-S3VM} have a very large run-time when they converge slowly, and they are generally slower than the other algorithms. \texttt{Semi-LDA} is fast on the considered data sets, though it may slow down on high-dimensional data not considered in this paper.
It can be seen that \texttt{DAS-RF} is slower than the self-learning algorithms, which is due to the fact that the classifier is trained on all labeled and unlabeled examples at each iteration. \texttt{CSLA} is the fastest approach since it re-trains the base classifier only 3 times compared to 10 times for \texttt{FSLA}. From our observations, \texttt{MSLA} usually needs around 3-5 iterations to pseudo-label the whole unlabeled set, but it takes more time than \texttt{CSLA}, since at each iteration it searches for the threshold by minimizing the conditional Bayes error. We have implemented the search on a single core, but it can potentially be parallelized. Nevertheless, \texttt{MSLA} still runs fast.
\begin{table}[ht!] \caption{The average run-time of the learning algorithms under consideration on the data sets described in Table \ref{tab:data set-description}. $s$~stands for seconds, $m$ for minutes and $h$ for hours.} \label{tab:computationTime}
\break \centering \scalebox{0.95} {
\begin{tabular}{l| cccccccc}
\toprule
Data set &
\texttt{RF} & \texttt{LS} & \texttt{QN-S3VM} & \texttt{Semi-LDA} & \texttt{DAS-RF} & \texttt{FSLA$_{\theta=0.7}$} &
\texttt{CSLA$_{\Delta=1/3}$} & \texttt{MSLA}\\
\midrule
\texttt{Vowel} & 1\,s & 6\,s & 2\,s & 3\,s & 7\,s & 11\,s & 2\,s & 5\,s\\
\midrule
\texttt{Protein} & 1\,s & 22\,s & 4\,m & 5\,s & 6\,s & 10\,s & 2\,s & 4\,s\\
\midrule
\texttt{DNA} & 1\,s & 1\,m & 26\,s & 1\,s & 9\,s & 7\,s & 3\,s & 4\,s\\
\midrule
\texttt{PageBlocks} & 1\,s & 2\,m & 2\,m & 14\,s & 9\,s & 12\,s & 3\,s & 6\,s\\
\midrule
\texttt{Isolet} & 1\,s & 1\,m & 1\,h & 10\,s & 38\,s & 16\,s & 5\,s & 28\,s\\
\midrule
\texttt{HAR} & 1\,s & 18\,m & 32\,m & 3\,s & 42\,s & 23\,s & 6\,s & 13\,s\\
\midrule
\texttt{Pendigits} & 1\,s & 30\,m & 10\,m & 37\,s & 13\,s & 13\,s & 3\,s & 14\,s\\
\midrule
\texttt{Letter} & 1\,s & 3\,h & 40\,m & 1\,m & 20\,s & 16\,s & 5\,s & 1\,m\\
\midrule
\texttt{Fashion} & 1\,s & $>$4\,h & $>$4\,h & 1\,m & 2\,m & 1\,m & 29\,s & 1\,m\\
\midrule
\texttt{MNIST} & 1\,s & $>$4\,h & $>$4\,h & 1\,m & 2\,m & 1\,m & 29\,s & 1\,m\\
\midrule
\texttt{SensIT} & 1\,s & $>$4\,h & $>$4\,h & 2\,m & 3\,m & 2\,m & 30\,s & 1\,m\\
\bottomrule \end{tabular}} \end{table}
\subsection{Relaxation of CBIL} \label{sec:relax_bound}
The proposed \eqref{eq:w-cbound} is based on Eq. \eqref{eq:one-x-mislabel-ineq}, which holds only when $\delta(\mbf{x})\geq 0$. As discussed in Section \ref{sec:mislab-error-model}, Eq. \eqref{eq:one-x-mislabel-ineq} can be relaxed by adding some $\lambda>0$, leading to Eq. \eqref{eq:one-x-mislabel-ineq-with-lam}. In practice, this can not only make the bound computable, but also make it smoother, since arbitrarily small values of $\delta(\mbf{x})$ imply arbitrarily large values of $\hat{r}(\mbf{x})/\delta(\mbf{x})$. The latter should be avoided if \eqref{eq:w-cbound} is used as an optimization or selection criterion.
In this section, we study the impact of $\lambda$ on the bound's value on different data sets. In Figure \ref{fig:cbil-lams}, we display the results of all 20 experimental trials for \texttt{HAR}, \texttt{Isolet}, \texttt{Letter}, \texttt{MNIST} and \texttt{Fashion} when $\lambda\in\{0.1, 0.2, \dots, 1\}$. One can observe that when the bound is not penalized much (i.e., $\delta(\mbf{x})$ is far from 0), increasing $\lambda$ makes the bound looser, so $\lambda=0.1$ is the tightest choice. Exactly the opposite situation is observed when $\delta(\mbf{x})$ is small (trials 4 and 14 for \texttt{Letter}, most of the trials for \texttt{Fashion}): higher values of $\lambda$ diminish the influence of the hyperbolic weights $1/\delta(\mbf{x})$, so $\lambda=1$ leads to the tightest bound.
\begin{figure}
\caption{The value of \eqref{eq:w-cbound} with different $\lambda$ over 20 different labeled/unlabeled splits of 5 data sets.}
\label{fig:cbil-lams}
\end{figure}
We also note that small $\delta(\mbf{x})$ not only makes the bound looser, but also leads to poor correlation with the true error. It can particularly be seen in Figure \ref{fig:fashion-cbound}, where we repeated the experiment done in Section \ref{sec:cbil-exp} for the \texttt{Fashion} data set when $\lambda=0$ and $\lambda=0.1$. It is clearly seen that with $\lambda=0.1$ the curve's shape becomes much more similar to the oracle C-bound. Overall, on average, $\lambda$ can make the bound looser but better correlated with the true error, and the latter is more important for practical applications.
\begin{figure}
\caption{\eqref{eq:w-cbound} and the oracle C-bound when varying the number of unlabeled examples used for evaluation on the Fashion data set. We keep the most confident examples (with respect to the prediction vote), from $20\%$ to $100\%$.}
\label{fig:fashion-cbound}
\end{figure}
\end{document}
Jo Ellis-Monaghan
Joanna Anthony Ellis-Monaghan is an American mathematician and mathematics educator whose research interests include graph polynomials and topological graph theory. She is a professor of mathematics at the Korteweg-de Vries Institute for Mathematics of the University of Amsterdam.
Education and career
Ellis-Monaghan grew up in Alaska.[1] She graduated from Bennington College in 1984 with a double major in mathematics and studio art, and earned a master's degree in mathematics from the University of Vermont in 1986. After beginning a doctoral program at Dartmouth College, she transferred to the University of North Carolina at Chapel Hill, where she completed her Ph.D. in 1995.[2] Her dissertation, supervised by Jim Stasheff, was A unique, universal graph polynomial and its Hopf algebraic properties, with applications to the Martin polynomial.[2][3]
She joined the Saint Michael's College faculty in 1992,[2] chaired the department there,[1] and has also held positions at the University of Vermont.[2] In 2020 she became professor of Discrete Mathematics at the University of Amsterdam.[4]
Contributions
With Iain Moffat, Ellis-Monaghan is the author of the book Graphs on Surfaces: Dualities, Polynomials, and Knots (Springer, 2013).[5]
From 2010-2020, she served as Editor-in-Chief of PRIMUS, a journal on the teaching of undergraduate mathematics.[6]
References
1. "Jo Ellis-Monaghan, PhD: Mathematics Department Chair, Professor of Mathematics", Get to Know Us, Saint Michael's College, retrieved 2017-12-10
2. Curriculum vitae, 2013, retrieved 2017-12-10
3. Jo Ellis-Monaghan at the Mathematics Genealogy Project
4. Joanna Ellis-Monaghan appointed professor of Discrete Mathematics, University of Amsterdam, 1 October 2020, retrieved 2020-12-18
5. Reviews of Graphs on Surfaces:
• Traldi, Lorenzo, Mathematical Reviews, MR 3086663
• Banks, Jessica, zbMATH, Zbl 1283.57001
• Berg, Michael (October 2013), "Review", MAA Reviews, Mathematical Association of America
6. "Editorial board", PRIMUS, Taylor & Francis, retrieved 2017-12-10
External links
• Home page
How many positive integers less than $555$ are either a perfect cube or a perfect square?
The largest perfect square less than $555$ is $23^2=529$. Therefore, there are $23$ perfect squares less than $555$.
The largest perfect cube less than $555$ is $8^3=512$. Therefore, there are $8$ perfect cubes less than $555$.
However, we cannot simply add those two numbers together because there are numbers that are both a perfect cube and a perfect square. For a number to be both a perfect square and perfect cube, it needs to be a $2 \cdot 3 =6$th power. The largest 6th power less than $555$ is $2^6=64$, so there are $2$ 6th powers less than $555$.
Therefore, there are $23+8-2=\boxed{29}$ integers that are either a perfect cube or perfect square.
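As a quick sanity check (an illustrative Python snippet added here, not part of the original solution):

squares = {n * n for n in range(1, 24)}    # 23^2 = 529 is the largest square below 555
cubes = {n ** 3 for n in range(1, 9)}      # 8^3 = 512 is the largest cube below 555
print(len(squares), len(cubes), len(squares & cubes), len(squares | cubes))
# prints: 23 8 2 29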
Abstract: I have investigated signal recognition particle (SRP)-mediated protein targeting using a combination of genetic, biochemical, and molecular sequence analysis techniques. First, I consider the SRP cycle from the perspective of molecular evolution. This analysis provides insight into the significance of structural variation in SRP RNA and identifies novel conserved motifs in the polypeptide subunits of the particle. The conservation of SRP cycle components, combined with biochemical data from the mammalian, bacterial and yeast systems, suggests that this pathway for protein export is ancient in evolutionary origin. Next, I have used a variety of genetic and biochemical techniques to define the role of the Srp54p GTPase in the SRP cycle. Repressing synthesis of the essential Srp54 protein produces a growth defect that correlates with an accumulation of secretory precursors. I have also analyzed the effects of 17 site-specific mutations in the G domain of Srp54p. Several mutant alleles confer lethal and conditional phenotypes, indicating that GTP binding and hydrolysis are critical to the in vivo role of Srp54p. Enzymatic assays reveal that S. pombe Srp54p exhibits GTPase activity in vitro, while a mutant predicted to be catalytically defective has a reduced ability to hydrolyze GTP. Most importantly, the pattern of genetic dominance that these mutants display leads me to propose a model for the role of GTP hydrolysis by Srp54p during the SRP cycle, in which the SRP receptor $\alpha$ subunit (SR$\alpha$) serves as a GTPase activating protein (GAP) regulating signal sequence binding by the Srp54p subunit. Lastly, I have cloned and sequenced the S. pombe SR$\alpha$ gene. The gene encodes a 70 kDa protein that bears striking sequence similarity to the previously cloned mammalian and S. cerevisiae 70 kDa SR$\alpha$ proteins. The cloning of SR$\alpha$ opens the door to both exploring the biochemical effects of the SR$\alpha$ protein on the already existing Srp54p catalytically defective mutants and to genetically isolating the next downstream component in this complex GTPase cycle. | CommonCrawl |
The Banach algebras with generalized matrix representation
S. Barootkoob
Department of Mathematics, Faculty of Basic Sciences, University of Bojnord, P.O. Box 1339, Bojnord, Islamic Republic of Iran.
10.22072/wala.2020.122402.1273
A Banach algebra $\mathfrak{A}$ has a generalized matrix representation if there exist algebras $A, B$, an $(A,B)$-module $M$ and a $(B,A)$-module $N$ such that $\mathfrak{A}$ is isomorphic to the generalized matrix Banach algebra $\Big[\begin{array}{cc} A & M \\ N & B \end{array}\Big]$.
In this paper, the algebras with generalized matrix representation will be characterized. Then we show that there is a unital permanently weakly amenable Banach algebra $A$ without generalized matrix representation such that $H^1(A,A)=\{0\}$.
This implies that there is a unital Banach algebra $A$ without any triangular matrix representation such that $H^1(A,A)=\{0\}$ and gives a negative answer to the open question of \cite{D}.
Banach algebra
idempotent
generalized matrix Banach algebra
[1] W.G. Bade, P.C. Curtis and H.G. Dales, Amenability and weak amenability for Beurling and Lipschitz algebras, Proc. Lond. Math. Soc., 55 (1987), 359-377.
[2] D. Bennis and B. Fahid, Derivations and the first cohomology group of trivial extension algebras, Mediterr. J. Math., 14(150) (2017), https://doi.org/10.1007/s00009-017-0949-z.
[3] G.F. Birkenmeier, J.K. Park and S.T. Rizvi, Extensions of Rings and Modules, Birkhauser, New York, 2013.
[4] J.M. Cohen, C*-Algebras without ldempotents, J. Funct. Anal., 33 (1979), 211-216.
[5] H.G. Dales, Banach Algebras and Automatic Continuity, vol. 24 of London Mathematical Society Monographs, The Clarendon Press, Oxford, UK, 2000.
[6] H.G. Dales and A.T.M. Lau, The Second Duals of Beurling Algebras, Memoirs of the American Mathematical Society, 2005.
[7] H.G. Dales, F. Ghahramani and N. Grønbæk, Derivations into iterated duals of Banach algebras, Stud. Math., 128(1) (1998), 19-54.
[8] G.B. Folland, A Course in Abstract Harmonic Analysis, CRC Press, (1995).
[9] H. Lakzian and S. Barootkoob, Biprojectivity and biflatness of bi-amalgamated Banach algebras, Bull. Iran. Math. Soc., https://doi.org/10.1007/s41980-020-00366-w.
[10] A.T.-M. Lau, Analysis on a class of Banach algebras with applications to harmonic analysis on locally compact groups and semigroups, Fundam. Math., 118 (1983), 161-175.
[11] Y. Li and F. Wei, Semi-centralizing maps of generalized matrix algebras, Linear Algebra Appl., 436(5) (2012), 1122-1153.
[12] M. Ramezanpour and S. Barootkoob, Generalized module extension Banach algebras: Derivations and weak amenability, Quaest. Math., (2017), 1-15.
[13] A.D. Sands, Radicals and morita contexts, J. Algebra, 24 (1973), 335-345.
[14] Y. Zhang, Weak amenability of module extension of Banach algebras, Trans. Am. Math. Soc., 354 (2002), 4131-4151.
[15] Y. Zhang, $2m-$Weak amenability of group algebras, J. Math. Anal. Appl., 396 (2012), 412-416.
\begin{document}
\title{Topological Subsystem Codes} \author{H. Bombin} \affiliation{Department of Physics, Massachusetts Institute of Technology, Cambridge, MA, 02139 USA\\ Perimeter Institute for Theoretical Physics, 31 Caroline St. N., Waterloo, Ontario N2L 2Y5, Canada}
\begin{abstract} We introduce a family of 2D topological subsystem quantum error-correcting codes. The gauge group is generated by 2-local Pauli operators, so that 2-local measurements are enough to recover the error syndrome. We study the computational power of code deformation in these codes, and show that boundaries cannot be introduced in the usual way. In addition, we give a general mapping connecting suitable classical statistical mechanical models to optimal error correction in subsystem stabilizer codes that suffer from depolarizing noise. \end{abstract}
\pacs{03.67.Pp}
\maketitle
\section{Introduction}
Quantum error correction \cite{Shor_QEC, Steane_QEC, Knill_QEC, Bennett_QEC} and fault-tolerant quantum computation \cite{Shor_FTQC, Knill_FTQC, Aharonov_FTQC, Gottesman_FTQC, Preskill_FTQC} promise to allow almost perfect storage, transmission and manipulation of quantum information. Without them, quantum information processing would be doomed to failure due to the decoherence produced by interactions with the environment and the unavoidable inaccuracies of quantum operations.
The key concept in quantum error correction is the notion of quantum code. This is a subspace of a given quantum system where quantum information can be safely encoded, in the sense that the adverse effects of noise can be erased through an error correction procedure. In practice this procedure is also subject to errors and thus it should be as simple as possible to minimize them. Naturally, the meaning of `simple' will depend on particular implementations. A common situation is that interactions are restricted to quantum subsystems that are close to each other in space. In those cases, the locality of the operations involved in error correction becomes crucial.
The stabilizer formalism \cite{Gottesman_stabilizer, Calderbank_stabilizer} provides a unified framework for many quantum codes. In stabilizer codes the main step for error correction is the measurement of certain operators, which may be local or not. A class of codes where these measurements are intrinsically local is that of topological stabilizer codes \cite{Kitaev_nonAbelian, Dennis_TQM, Bombin_CC2d, Bombin_CC3d}. In a different direction, locality can also be enhanced by considering more generally stabilizer subsystem codes \cite{Bacon_3d, Poulin_subsystem}. The present work provides an example of a family of codes which can be labeled both as `topological' and `subsystem'.
Topological codes were originally introduced with the goal of obtaining a self-protecting quantum computer \cite{Kitaev_nonAbelian}. This idea faces important difficulties in low dimensions, since thermal instabilities are known to occur \cite{Dennis_TQM, Alicki_stability2d, Alicki_stability4d}. On the other hand, topological codes are local in a natural way and have very interesting features in the context of active error correction. For example, they not only allow operations to be performed transversally \cite{Bombin_CC2d, Bombin_CC3d} but also through code deformations \cite{Dennis_TQM, Bombin_deformation}. Moreover, there exists a useful connection between error correction in topological codes and certain classical statistical models \cite{Dennis_TQM, Wang_topoStat, Katzgraber_3body}.
Stabilizer subsystem codes are the result of applying the stabilizer formalism to operator quantum error correction \cite{Kribs_OQEC}. In subsystem codes part of the logical qubits that form the code subspace are no longer considered as such but, rather, as gauge qubits where no information is encoded. This not only allows the gauge qubits to absorb the effect of errors, but has interesting consequences for error correction. It may allow to break up each of the needed measurements in several ones that involve a smaller number of qubits \cite{Bacon_3d, Poulin_subsystem}. An example of this is offered by Bacon-Shor codes \cite{Bacon_3d}, in which the basic operators to be measured can have support on an arbitrarily large number of qubits, yet their eigenvalues can be recovered from 2-local measurements that do not damage encoded information. Moreover, the pairs of qubits to be measured together are always neighbors in a 2D lattice. Thus, subsystem codes can have very nice locality properties.
The 2D topological subsystem codes introduced here show all the characteristic properties of topological codes and at the same time take profit of the advantages of subsystem codes. Some of them are: \begin{itemize} \item The codes are local in a 2D setting, which can be flat. \item The measurements needed for error correction only involve two neighboring qubits at a time, as in Bacon-Shor codes. This is an important advantage with respect to other topological codes, such as surface codes, that require measuring groups of at least 4 qubits. \item Most errors of length up to $cn$ are correctable, where $c$ is some constant and $n$ the number of physical qubits. This feature, common to topological codes, follows from the fact that logical operators are related to strings that wind nontrivially around the surface where the code is defined. \item Error correction must be done only `up to homology', an important simplification that allows the introduction of specific tools. \item One can naturally perform certain logical gates through `deformations' of the code. This feature, however, is less powerful that in other topological codes because boundaries cannot be introduced in the usual way. \end{itemize}
Since these codes are topological, it is natural to expect a connection between their correction procedures and suitable classical statistical models. However, the mapping between surface codes and random Ising models \cite{Dennis_TQM} makes strong use of their CSS structure \cite{Calderbank_CSS, Steane_CSS}, and the same is true for the one between color codes and 3-body random Ising models \cite{Katzgraber_3body}. The CSS structure makes it possible to completely separate the correction of phase flip and bit flip errors, making the problem classical and enabling the connection. Indeed, there exist similar mappings from classical codes to statistical models \cite{Nishimori_statInfo}. Fortunately, as explained in Section V, the approach can be generalized even in the absence of this separation. Moreover, the subsystem structure is also compatible with the approach, so that it can be applied to the family of codes of interest.
The paper is organized as follows. Sections II and III go over several aspects of quantum error correction and topological codes, respectively, setting up a framework for the rest of the paper. Section IV introduces the family of topological subsystem codes and presents a thorough study of their properties. Section V offers the construction of a general mapping between error correction in subsystem codes and classical statistical models. Section VI is devoted to conclusions.
\section{Stabilizer quantum error-correcting codes} \label{sec:subsystem}
This section summarizes the notions of quantum error correction that will be needed in the rest of the paper. It mainly reviews stabilizer codes, both in the subspace and the more general subsystem formulation. Ideal error correction procedures and their success probability are also considered.
\subsection{Quantum error correction}
Quantum error correction deals with the preservation of quantum information in the presence of noise. Both the noise $\channel$ and the error recovery $\correction$ are modeled as quantum operations or channels $\funcion { \channel, \correction }{\ban (\hilb)}{\ban (\hilb)}$, where $\ban(\hilb)$ is the space of linear operators on $\hilb$, the Hilbert space associated to the quantum system under consideration. Such maps can always be expressed in the operator-sum representation. For example, the noise is $\channel (\rho)=\sum_i E_i\rho E_i^\dagger$ for some $E_i\in\ban (\hilb)$, which will be denoted by $\channel=\sset{E_i}$.
In the original formulation of quantum error correction \cite{Shor_QEC, Steane_QEC, Knill_QEC, Bennett_QEC}, quantum information is encoded in a subspace of $\hilb$, the code subspace $\code\subset \hilb$. The system undergoes a noisy process $\channel$ and afterwards an error recovery operation $\correction$ is performed. Then, given a code $\code$, a noise source $\channel$ is said to be correctable if there exists a recovery operation $\correction$ such that $\correction\circ\channel (\rho)=\rho$ for any state $\rho \in \ban (\code)$.
More generally, in the operator quantum error correction formalism \cite{Kribs_OQEC}, information is encoded in a subsystem $\suba$, with $\code =\suba\otimes\subb$. Whatever happens to subsystem $\subb$ is irrelevant. That is, error recovery is possible for a quantum channel $\channel$ if there exists a recovery operation $\correction$ such that for any $\rho^\suba\in \ban(\suba)$ and $\rho^\subb\in \ban(\subb)$ it gives $\correction\circ \channel (\rho^\suba\otimes\rho^\subb)=\rho^\suba\otimes\rho^{\prime\subb}$ for some arbitrary $\rho^{\prime\subb}$.
The necessary and sufficient condition for the noise process $\channel =\sset{E_i}$ to be correctable \cite{Knill_QEC, Bennett_QEC, Kribs_OQEC} is that $PE_i^\dagger E_j P=\b 1^\suba\otimes g_{ij}^\subb$ for every $i$ and $j$, with $P$ the projector onto the code subspace. When this condition holds, the set of errors $\sset{E_i}$ is said to be correctable. Since adding a linear combination of the $E_i$ to the set does not change correctability, it is natural to consider correctable sets of errors as linear subspaces and to choose the most convenient operator basis. Generally the quantum system is composed of $n$ qubits, $\hilb \simeq (\mathbf{C}^2)^{\otimes n}$, and error operators are chosen to be Pauli operators, elements of the Pauli group $\pauli_n :=\langle i\b 1, X_1,Z_1,\dots, X_n, Z_n \rangle$. Here $X_i$, $Z_i$ are as usual the Pauli operators on the $i$-th qubit, $X=\ket 0\bra 1+ \ket 1 \bra 0$, $Z=\ketbra 0 - \ketbra 1$ in the orthonormal basis $\sset{\ket 0, \ket 1}$.
Usually error models are such that errors which affect more qubits are less likely to happen. Then it makes sense to correct as many errors as possible among those that have support on (act nontrivially on) a smaller number of qubits. The weight $|E|$ of a Pauli operator $E\in \pauli_n$ is defined as the number of qubits that form its support. When a code can correct all Pauli errors $E$ with $|E|\leq r$ it is said to correct $r$ errors.
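As an aside (an illustrative sketch added here, not part of the construction that follows), this bookkeeping is easily mechanized: up to phases, a Pauli operator is a pair of bit vectors, and weights and commutation relations are read off from them. In the following Python fragment all function names are arbitrary: \begin{verbatim}
import numpy as np

def pauli_to_bits(pauli):
    """'XZI' -> (x, z) bit vectors; phases are ignored."""
    x = np.array([1 if p in 'XY' else 0 for p in pauli])
    z = np.array([1 if p in 'ZY' else 0 for p in pauli])
    return x, z

def weight(x, z):
    """Number of qubits in the support of the operator."""
    return int(np.sum(x | z))

def commute(a, b):
    """True iff the symplectic product of the two operators vanishes mod 2."""
    xa, za = a
    xb, zb = b
    return (np.dot(xa, zb) + np.dot(xb, za)) % 2 == 0

a = pauli_to_bits('XZI')            # X_1 Z_2
b = pauli_to_bits('ZZI')            # Z_1 Z_2
print(weight(*a), commute(a, b))    # prints: 2 False
\end{verbatim}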
\subsection{Stabilizer subspace codes}
A formalism that has been particularly successful for the development of quantum codes is the stabilizer formalism \cite{Gottesman_stabilizer, Calderbank_stabilizer}, in which the code $\code$ is described in terms of an Abelian subgroup $\stab=\langle S_j \rangle \subset \pauli_n$ such that $-\b 1 \not\in \stab$. Take the generators $S_j$ to be independent and let $s$ be the rank of $\stab$. The $n$ qubit Hilbert space $\hilb$ can be partitioned according to the eigenvalues of the $S_j$ into $2^s$ isomorphic orthogonal subspaces $\hilb=\bigoplus_{\vect s} \code_{\vect s}$. Here $\vect s=(s_j)$ is the error syndrome, with $s_j=\pm 1$ the eigenvalue of $S_j$. By convention, the code subspace $\code$ is that with $s_j=1$ for all $j$. It has dimension $2^k$, with $k=n-s$ the number of encoded or logical qubits. The reason to call $\vect s$ the error syndrome is that it can be obtained by measuring the $S_j$ and then used to infer which errors have occurred.
It is easy to introduce a Pauli group for the $k$ logical qubits. Let $N(\stab)$ be the normalizer of $\stab$ in $\pauli_n$. Its elements are the Pauli operators that map the subspaces $C_{\vect s}$ onto themselves, and the quotient group $N(\stab)/\stab$ is isomorphic to $\pauli_k$. The logical Pauli operators are then generated by $\hat X_1, \hat Z_1,\dots, \hat X_k,\hat Z_k \in N(\stab)$, some chosen representatives of the images of $X_1, Z_1, \dots,X_k,Z_k\in\pauli_k$ under a given isomorphism.
It is also possible to characterize a stabilizer code with a pair $(U,s)$, where $U$ is an automorphism of $\pauli_n$ and $s$ an integer, $0\leq s\leq n$. Let $\tilde X_i$, $\tilde Z_i$ denote the images of $X_i$, $Z_i$ trough $U$. Then the stabilizer is $\stab = \langle \tilde Z_1,\dots,\tilde Z_s \rangle$. This approach directly provides a choice for encoded Pauli operators: $\hat X_1:= \tilde X_{s+1}, \hat Z_1:=\tilde Z_{s+1}, \dots, \hat X_k:=\tilde X_n, \hat Z_k:= \tilde Z_n$.
Pauli errors $E$ have an especially simple effect on the encoded states, as they map the subspaces $C_{\vect s}$ one onto another. Set $\vect s_E=\vect s$ when $ES_j E^\dagger= s_j S_j$. Then a Pauli error $E$ maps $\code$ onto $\code_{\vect s_E}$. Pauli operators are divided into three categories. The elements of $\pauli_n-N(\stab)$ map the code to other subspaces and are termed detectable errors, as their effect can be detected by measuring the operators $\sset {S_j}$. The elements of $N(\stab)-\stab^\prime$, with $\stab':=\langle i\b 1 \rangle \stab $, map the code to itself in a nontrivial way and are thus called undetectable errors. Finally, the elements of $\stab^\prime$ have no effect on encoded states $\rho\in\ban (\code)$. The distance $d$ of the code is defined as the minimum weight among undetectable errors. It determines the number of corrected errors, which is $\lfloor (d-1)/2\rfloor$. A code of $n$ qubits that encodes $k$ qubits and has distance $d$ is denoted $[[n,k,d]]$.
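To make the role of the syndrome concrete, the following self-contained sketch (again purely illustrative, and based on the three-qubit repetition code rather than on the topological codes discussed below) extracts the syndrome of single bit-flip errors for the stabilizer generated by $Z_1Z_2$ and $Z_2Z_3$: \begin{verbatim}
import numpy as np

def to_bits(pauli):
    x = np.array([1 if p in 'XY' else 0 for p in pauli])
    z = np.array([1 if p in 'ZY' else 0 for p in pauli])
    return x, z

def syndrome_bit(error, generator):
    """+1 if the error commutes with the generator, -1 if it anticommutes."""
    xe, ze = to_bits(error)
    xg, zg = to_bits(generator)
    return 1 - 2 * int((np.dot(xe, zg) + np.dot(xg, ze)) % 2)

stabilizer_generators = ['ZZI', 'IZZ']   # S_1 = Z_1 Z_2, S_2 = Z_2 Z_3
for error in ['XII', 'IXI', 'IIX']:      # single bit-flip errors
    print(error, [syndrome_bit(error, S) for S in stabilizer_generators])
# XII [-1, 1], IXI [-1, -1], IIX [1, -1]: each error has a distinct syndrome.
\end{verbatim}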
\subsection{Stabilizer subsystem codes}
The stabilizer formalism can also be used in the context of operator quantum error correction \cite{Bacon_3d, Poulin_subsystem}. Instead of being characterized by a stabilizer group, subsystem stabilizer codes are almost determined by a subgroup $\gauge\subset \pauli_n$, called the gauge group, such that $i\b 1\in \gauge$. Almost, because in addition a stabilizer group $\stab$ has to be chosen such that $\stab'$, as defined above, is the center of $\gauge$. There are different choices for $\stab$ because the sign of some of its generators can always be flipped. This amounts to different choices for $\code$ in the decomposition $\hilb=\bigoplus_{\vect s} \code_{\vect s}$.
The idea behind the introduction of the gauge group is that gauge operations should not affect the encoded information. This forces us to identify states such as $\rho$ and $G_j\rho G_j^\dagger$ as equivalent, giving rise to a subsystem structure $C_{\vect s}=A_{\vect s} \otimes B_{\vect s}$. The decomposition is such that the gauge operators $G_j$ act trivially in the $A_{\vect s}$ subsystems and generate the full algebra of operators of the $B_{\vect s}$ subsystems. Set $C=A\otimes B$, with $A$ the logical subsystem where information is encoded and $B$ the gauge subsystem that absorbs the effect of gauge operations. Since $\gauge/\stab\simeq \pauli_r$ for some $r$, $B$ consists of $r$ qubits. Similarly, the Pauli operators for the $k$ logical qubits are recovered from the isomorphism $N(\gauge)/\stab\simeq \pauli_k$, and $k+r+s=n$.
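For instance, the smallest two-dimensional Bacon-Shor code, defined on a $3\times 3$ array of qubits, has $n=9$, $k=1$, $r=4$ and $s=4$, in agreement with $k+r+s=n$.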
It is also possible to characterize a stabilizer subsystem code as a triplet $(U,s,r)$, where $U$ is an automorphism of $\pauli_n$ and $s,r\geq 0$ are integers with $r+s \leq n$. Using the same notation as above, the stabilizer and gauge groups are $\stab = \langle \tilde Z_1,\dots,\tilde Z_s \rangle$ and $\gauge = \langle i\b 1, \tilde Z_1, \dots, \tilde Z_{s+r}, \tilde X_{s+1},\dots,\tilde X_{s+r}\rangle$. The chosen logical Pauli operators are $\hat X_1:= \tilde X_{s+r+1}, \hat Z_1:=\tilde Z_{s+r+1}, \dots, \hat X_k:=\tilde X_n, \hat Z_k:= \tilde Z_n$.
In subsystem codes detectable Pauli errors are the elements of $\pauli_n-N(\stab)$ and undetectable ones are those in $N(\stab)-\gauge$. Undetectable errors are directly related to logical Pauli operators. Indeed, $N(\stab)/\gauge\simeq N(\gauge)/\stab^\prime$, through the following correspondence. For any $E\in N(\stab)$ there exists a $G\in \gauge$ such that $EG\in N(\gauge)$, and if $G'\in \gauge$ is such that $EG^\prime\in N(\gauge)$ then $GG^\prime\in \stab^\prime=\gauge\cap N(\gauge)$. The distance $d$ of the code is defined as for subspace codes and has the same implications regarding error correction. A subsystem code of $n$ qubits that encodes $k$ qubits and has $r$ gauge qubits and distance $d$ is denoted $[[n,k,r,d]]$.
\subsection{Syndrome measurement}
An interesting property of stabilizer subsystem codes is that they may allow an easier measurement of the stabilizer generators. This is so because it is possible to substitute the direct measurement of a stabilizer element $S$ by an indirect one, in which $t$ self-adjoint gauge operators $G_i$ such that $S=G_1\cdots G_t$ are measured. It may be the case that the $G_i$ have a smaller weight than $S$. For example, in the family of Bacon-Shor codes the gauge generators $G_i$ always have weight $|G_i|=2$, but the smallest stabilizer generators can have an arbitrarily large weight. Such cases offer two important advantages. On the one hand, the smaller the weight of a Pauli operator, the simpler the operations needed to measure it. This is especially relevant in fault-tolerant quantum computing, where error correction is considered a faulty process in itself, because simpler operations imply fewer errors. Secondly, it may be possible to measure the $G_i$ in parallel, with the corresponding saving of time. This is again relevant for fault tolerance, where the ability to perform measurements faster entails fewer errors.
Since the $G_i$ need not commute, the ordering of the measurements is relevant. In general, the ordered measurement of a collection of $t$ operators $E_1, \dots, E_t\in \pauli_n^\dagger$ yields the effective measurement of an abelian group of self-adjoint Pauli operators $\mathcal M\subset \mathcal Z$, with $\mathcal Z$ the center of $\langle-\b 1, E_1, \dots, E_t\rangle$. In particular, $\mathcal M=\mathcal N\cap\mathcal Z$ with $\mathcal N$ the abelian group of those self-adjoint Pauli operators with eigenvalues fixed by the sequence of measurements. $\mathcal N$ can be computed iteratively, since adding an additional measurement $E_{t+1}$ changes $\mathcal N$ to $\mathcal N^\prime= \langle E_{t+1}\rangle \cdot (\mathcal N\cap N(E_{t+1}))$.
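The following Python sketch (illustrative only: Pauli operators are stored as symplectic bit vectors, signs are not tracked, and no independence check is made) implements this iterative update and shows how the eigenvalue of a product of gauge operators can survive measurements that disturb the individual factors: \begin{verbatim}
import numpy as np

def to_bits(pauli):
    """Pauli string (signs ignored) -> symplectic bit vector (x|z)."""
    x = [1 if p in 'XY' else 0 for p in pauli]
    z = [1 if p in 'ZY' else 0 for p in pauli]
    return np.array(x + z)

def to_pauli(v):
    n = len(v) // 2
    letter = {(0, 0): 'I', (1, 0): 'X', (0, 1): 'Z', (1, 1): 'Y'}
    return ''.join(letter[(int(v[i]), int(v[n + i]))] for i in range(n))

def anticommute(u, v):
    n = len(u) // 2
    return (np.dot(u[:n], v[n:]) + np.dot(v[:n], u[n:])) % 2 == 1

def measure(gens, e):
    """One step of the update N -> <e> (N intersected with N(e));
    signs are not tracked and no independence check is performed."""
    bad = [g for g in gens if anticommute(g, e)]
    good = [g for g in gens if not anticommute(g, e)]
    if bad:
        g0 = bad[0]
        good += [(g + g0) % 2 for g in bad[1:]]   # products that commute with e
    return good + [e]

gens = []
for label in ['ZZI', 'IZZ', 'IXI']:   # ordered sequence of Pauli measurements
    gens = measure(gens, to_bits(label))
print([to_pauli(g) for g in gens])    # ['ZIZ', 'IXI']
\end{verbatim} Here measuring $X_2$ disturbs $Z_1Z_2$ and $Z_2Z_3$ individually, but their product $Z_1Z_3$ remains fixed, which is the mechanism that allows stabilizer eigenvalues to be recovered from local gauge measurements.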
\subsection{Error correction}\label{sec:error_correction}
Even if a noisy channel $\channel$ is not correctable for a given subsystem code, there exists some probability of performing a successful error recovery. For example, if each of the physical qubits that compose a code undergoes a depolarizing channel $\sset{(1-3p)^\frac 1 2\b 1, p^\frac 1 2 X, p^\frac 1 2 Y, p^\frac 1 2 Z}$, then the noise is certainly not correctable, but the success probability can still be close to one. This section quantifies this probability, which is of primary importance for topological codes \cite{Dennis_TQM}.
Some notation is needed here. Set as equivalent $E\sim E^\prime$ those operators $E,E^\prime\in\ban(\hilb)$ that have the same action up to gauge elements, $E= E^\prime G$ for some $G\in \gauge$. The corresponding equivalence classes will be denoted $\bar E$. Let $\sset{D_i}_{i=1}^{4^k}\subset N(\stab)$ be a particular set of representatives for $N(\stab)/\gauge$, taking in particular $D_1=\b 1$. The $D_i$ with $i> 1$ will represent the ways in which error correction can fail. For example, if there is one encoded qubit a choice is $\sset{\b 1, \hat X_1, \hat Z_1, \hat X_1\hat Z_1}$. To extend the equivalence of operators to channels, we choose the minimal equivalence relation such that $\sset{E_i}\sim\sset{E^\prime_i}$ whenever $E_i\sim E_i^\prime$ for all $i$.
Assume an error model in which Pauli errors $E\in \pauli_n$ occur with a given probability $p(E)$. That is, $\channel = \sset{p(E)^{\frac 1 2} E}_{E\in\pauli_n}$. Errors with different phases will not be distinguished when discussing $p(E)$ because phases are irrelevant. Up to gauge operations the error channel is $\channel \sim \sset{p(\bar E)^{\frac 1 2} E}_{\bar E\in\pauli_n/\gauge}$, where $E$ denotes any chosen element of $\bar E$ and $p(\bar E)=\sum_{G\in\gauge/\langle i\b 1 \rangle} p(EG)$ is the probability for a given class of errors to happen. This makes already apparent that class probabilities $p(\bar E)$ are more important than individual error probabilities $p(E)$.
Error recovery starts with the measurement of the stabilizer generators $S_j$. This yields the error syndrome $\vect s$, which limits the errors $E$ that have possibly happened to those with $\vect s_E=\vect s$. These possible errors are arranged into different classes, which may be labeled by choosing any possible error $E$ and taking as representatives the elements $\sset{ED_i}$. Then the conditional probability for the class of errors $\bar E$ to have happened given the syndrome outcome $\vect s$ is \begin{equation} \label{conditional_probability}
p(\bar E|\vect s) = \frac{p(\bar E)}{\sum_i p(\bar E\bar D_i)}. \end{equation}
Suppose that these conditional probabilities can be computed, which may be potentially difficult due to the combinatorics. Then the class $\bar E=\bar E_{\vect s}$ that maximizes $p(\bar E|\vect s)$ is known and the optimal recovery operation is $\correction = \sset{E_{\vect s} P_{\vect s}}_{\vect s}$, where $P_{\vect s}$ is the projector onto the subspace $C_{\vect s}$. The combined effect of errors and recovery is $\correction\circ \channel\sim \sset{p_i^\frac 1 2 D_i}_i$ for some probabilities $p_i$ that only depend on the error distribution $p(\bar E)$. This gives a success probability for the error recovery \begin{equation} \label{success} p_0=\sum_{\vect s} p(\bar E_{\vect s}). \end{equation}
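As a simple illustration of Eq.~\eqref{conditional_probability} (a toy, non-topological example added for concreteness), consider the three-qubit repetition code with $\stab=\langle Z_1Z_2, Z_2Z_3\rangle$, viewed as a subsystem code with $\gauge=\stab^\prime$ and with $\hat X = X_1X_2X_3$, subject to independent bit-flip noise of rate $q$ on each qubit. For the syndrome $\vect s=(-1,+1)$ the only error classes with nonzero probability are those of $X_1$ and of $X_2X_3=X_1\hat X$, so that \begin{align*} p(\bar X_1|\vect s) = \frac{q(1-q)^2}{q(1-q)^2 + q^2(1-q)} = 1-q, \qquad p(\overline{X_2 X_3}|\vect s) = q, \end{align*} and for $q<1/2$ the optimal recovery applies $X_1$.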
A bad feature of the expression \eqref{success} is that it depends on $E_{\vect s}$. Consider an alternative error correction procedure where the class of errors with the maximum probability $\bar E_{\vect s}$ is not always chosen. Instead, an operator from a class $\bar E$ is applied with probability $p(\bar E| \vect s)$, so that $\correction^\prime = \sset{p(\bar E|\vect s)^\frac 1 2 E P_{\vect s}}_{\vect s, \bar E}$. The success probability for this randomized correction procedure is \begin{equation} \label{success_prime}
p^\prime_0=\sum_{\bar E} p(\bar E) \,p(\bar E | \vect s_E)=\sum_E p(E) \,p(\bar E | \vect s_E). \end{equation} This procedure is at best as successful as the original one, giving the bound $p_0^\prime\leq p_0$. Notice that $p_0=1$ if and only if $p_0^\prime=1$. It follows that $p_0=1$ if and only if for any $D\in N(\stab)-\gauge$ \begin{equation} \label{success_condition}
\sum_{E} p(E) \,p(\bar E\bar D | \vect s_E)=0. \end{equation} This was the condition used in \cite{Dennis_TQM} to characterize successful recovery.
\section{Topological stabilizer codes}
This section gathers together several aspects of topological stabilizer codes to provide a reference for section \ref{sec:subsystem_topological_codes}. The goal is to put the subsystem codes introduced there in a broader context, making apparent the similarities and differences with previously known local and topological codes.
\subsection{Local codes}
In the context of fault-tolerant quantum computing it is advantageous to be able to perform the syndrome measurements in a simple way. It may be desirable, for example, that the number of qubits that form the support of the operators to be measured is small. Similarly, it may be convenient that the qubits that form the code only belong to the support of a small number of such operators. These ideas can be formalized to give rise to the notion of local families of codes \cite{Kitaev_nonAbelian}. In particular, a family of stabilizer subspace codes $\sset{\code_i}$ is local when i/ it contains codes of arbitrary distance and ii/ there exist two positive integers $\mu, \nu$ such that for each $\code_i$ there exists a family of generators of the stabilizer $\sset{S_j}$ such that ii.a/ $|S_j|\leq \mu$ and ii.b/ the number of generators $S_j$ with support on any given qubit is smaller than $\nu$.
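For instance, in the toric code on a square lattice the plaquette and vertex generators all have weight 4 and every qubit lies in the support of exactly 4 of them, so that one may take $\mu=\nu=4$.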
The drawback of such an abstract notion of locality is that it is not related in any way to a particular geometry. Many physical settings have qubits disposed in 1, 2 or 3 spatial dimensions and only allow the direct interaction of nearby qubits. To reflect this fact, and without loss of generality regarding the nature of the lattices, the definition above may be modified as follows \cite{Bravyi_noGo}. First, the qubits of each code $\code_i$ are considered to be disposed in the vertices of some finite, periodic, cubic lattice of a given dimension $D$. Second, instead of ii/ there must exist a positive number $d$, independent of $\code_i$, such that the support of each $S_j$ is contained in some cube containing $d^D$ vertices. A family of codes that is local in $D$ dimensions is always local in the previous sense.
A similar notion of local stabilizer subsystem codes may be defined by substituting the stabilizer with the gauge group. Notice that a family of subsystem codes might have local gauge generators but non-local stabilizer generators, as Bacon-Shor codes exemplify.
\subsection{Topological codes}\label{sec:topological_codes}
Topological stabilizer codes are constructed from lattices on a given manifold, in such a way that the generators of the stabilizer are local with respect to the lattice \cite{Kitaev_nonAbelian, Bombin_homology, Bombin_CC2d, Bombin_CC3d}. Typically the set of qubits and the set of generators are directly related to sets of geometric objects such as the vertices, links or faces of the lattice. In order to distinguish truly topological codes from merely local ones, we propose the following criterion. \emph{In a topological stabilizer code, any operator $O\in N(\stab)$ with support in a subset of a region composed of disconnected pieces, each of them simply connected, has trivial action on logical qubits}. Stated this way, it is a rather vague criterion since no formal definition of region or connectedness is given. However, it will be enough for our purposes if a reasonable interpretation is adopted when needed. Fig.~\ref{fig:region} shows an example of a region in a torus that cannot be the support of an undetectable error.
\begin{figure}
\caption{ In this figure the geometry is that of a torus, with opposite sides of the square identified. The dark region can be contained in a disconnected collection of simply connected regions, and thus cannot be the support of a non-detectable error in a topological code.}
\label{fig:region}
\end{figure}
An enumeration of the common properties of known topological codes will be useful. First, the support of undetectable errors is topologically non-trivial in some well-defined way. Indeed, both in surface codes \cite{Kitaev_nonAbelian, Dennis_TQM, Bombin_homology} and color codes \cite{Bombin_CC2d, Bombin_CC3d} undetectable errors are related to homologically non-trivial cycles. Second, and closely related, the number of encoded qubits $k$ depends only upon the manifold in which the lattice is defined, and not the lattice itself. For example, for 2D surface and color codes the number of logical qubits is respectively $k=2-\chi$ and $k=4-2\chi$, with $\chi$ the Euler characteristic of the 2-manifold. Finally, an important property of topological codes is that their nature makes it possible to define them in many different lattices, typically rather arbitrary ones as long as some code dependent constraints are satisfied. In other words, topological codes display a huge flexibility. This is to be expected from constructions that only see the topology, not the geometry, of the manifolds to which they are related.
Notice that the locality of the stabilizer generators has been emphasized, with no mention of gauge generators. The reason is that, up to now, no genuinely subsystem topological codes have been known. This paper introduces a family of such codes. They have both local gauge generators and local stabilizer generators. Such locality properties should be expected from any topological subsystem code.
The family of Bacon-Shor codes \cite{Bacon_3d} provides an example of non-topological local gauge codes. These codes certainly do not satisfy the above criterion for any interpretation of connectedness that agrees with their 2D lattice geometry. Moreover, their geometry is completely rigid, in the sense that there is no clear way to generalize them to other lattices and manifolds.
It is interesting to observe that topological codes do not offer good $d/n$ ratios, which go to zero as larger codes are considered. For example, in two-dimensional surface or color codes $d=O(\sqrt n)$ holds (which is optimal among 2-dimensional local codes \cite{Bravyi_noGo}). But, as remarked in \cite{Kitaev_nonAbelian}, this is a misleading point because topological codes can correct most errors of order $O(n)$.
Finally, classical topological codes also exist \cite{Bombin_homology}. Unlike quantum ones, they can be obtained from mere graphs, that is, 1-dimensional objects.
\subsection{Topological quantum memories}\label{sec:topological_memory}
In \cite{Dennis_TQM} an interesting approach to the problem of indefinite preservation of quantum information was presented that makes use of topological codes. Since it will underlie several discussions below, a brief summary is in order. The main idea is that information is encoded in a surface code and preserved by keeping track of errors. To this end, round after round of syndrome extractions must be performed. There are thus two sources of errors, since not only does the code suffer from storage errors but the stabilizer measurements themselves are also faulty. When the error rate is below a certain threshold, the storage time can be made as long as desired by making the code larger, a feature that is only available in topological codes (for other codes one would have to use concatenation). Interestingly, this error threshold can be connected to a phase transition in a classical statistical model, a random 3D gauge model \cite{Dennis_TQM, Wang_topoStat}.
\subsection{Code deformation}
The flexibility of topological codes implies that they can be defined in many lattices. This feature makes natural the introduction of code deformations \cite{Dennis_TQM, Raussendorf_deformation, Bombin_deformation}, which are briefly described next.
When two codes are very similar, in the sense that they differ only by a few qubits, it is possible to transform one into another by manipulating these few qubits. This is especially natural for local codes that only differ locally. In particular, such local code deformations will not compromise the protection of the encoded information. These ideas were first explored in \cite{Dennis_TQM}, where the geometry of a surface code is transformed step by step to compensate for a change in the lattice geometry provoked by a transversal gate. In \cite{Dennis_TQM} code deformations are also used to initialize the code with an arbitrary state. This is done by `growing' the code from a single qubit encoding the desired state. Notice that in this case encoded information is not protected in the early stages of the code deformation, when the code is still small.
In general \cite{Bombin_deformation}, two main kinds of deformations may be distinguished: those which change the number of encoded qubits and those which do not. The former can be used to initialize and measure encoded qubits, and the latter to perform operations on encoded qubits. In the case of topological codes, code deformations amount to changes in the geometry of a lattice, which may ultimately be understood as changes in the geometry of a manifold. When the topology of the manifold changes, initialization or measurement of encoded qubits will happen in a well-defined way \cite{Bombin_deformation}. When the manifold undergoes a continuous transformation that maps it to itself, a unitary operation is performed on the encoded qubits \cite{Bombin_deformation}. This unitary operation only depends on the isotopy class of the mapping.
Code deformation can be naturally integrated with the successive rounds of stabilizer measurements mentioned in the previous section. In particular, as long as the deformations are local, one can perform them simply by changing the stabilizers to be measured at each stage of error detection \cite{Bombin_deformation}.
\subsection{String operators} \label{sec:string_operators}
In this and subsequent sections we only consider 2-dimensional topological codes with qubits as their basic elements, because the topological subsystem codes introduced in section IV fall into this category. For the same reason, subsystem code language, as opposed to subspace, will be used.
In known 2D topological codes the logical Pauli operators $\hat X_1, \dots, \hat Z_k\in \mathbf{N}(\gauge)$ can be chosen to be string operators \cite{Kitaev_nonAbelian, Bombin_CC2d}. These are operators $O_s$ with support along a set of qubits $s$ that resembles a closed string. There are several types of strings, labeled as $\sset{l_i}$. Two strings $s,s^\prime$ of the same type that enclose a given region, like $a$, $b$ in Fig.~\ref{fig:strings}, give equivalent operators $O_sO_s^\prime\in \stab^\prime$. In other words, only the homology of the strings is relevant. In particular, boundary strings, those that enclose a region like $c$ in Fig.~\ref{fig:strings}, produce operators in $\stab^\prime$. Moreover, $\mathcal S^\prime$ is generated by boundary strings of some minimal regions. When two strings $s,s^\prime$ cross once, like $a$ and $d$ in Fig.~\ref{fig:strings}, $O_s$ and $O_{s^\prime}$ commute or not depending only on the labels of the strings. Finally, two strings $s$, $s^\prime$ with a common homology class can be combined in a single string $s^{\prime\prime}$ of a suitable type, in the sense that $O_sO_{s^\prime}O_{s^{\prime\prime}}\in \stab^\prime$. For example, $d$ and $e$ in Fig.~\ref{fig:strings} can be combined in a string $f$ of a suitable type.
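In surface codes, for instance, strings come in two basic types, often called electric and magnetic, supported on the lattice and on the dual lattice respectively; an electric string and a magnetic string anticommute precisely when they cross an odd number of times.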
\begin{figure}
\caption{ In this figure the geometry is that of a torus, with opposite sides of the square identified. The colored curves represent the support of string operators, with color standing for strings labels. The strings $a$ and $b$ enclose a region and thus are homologically equivalent, producing equivalent operators. The string $c$ encloses a region and thus is homologically trivial, producing a stabilizer element. The strings $d$, $e$ and $f$ are homologically equivalent but have different labels, producing inequivalent operators.}
\label{fig:strings}
\end{figure}
All these properties can be captured in a group $L^\prime \simeq \pauli_t$, for some $t$ that depends on the code. For example, $t=1$ for surface codes and $t=2$ for color codes. The group $L:=L^\prime/\langle i\b 1 \rangle$ has as its elements the string types $\sset {l_i}$ and the product corresponds to string combination. The commutation rules are recovered from $L^\prime$: crossing strings commute or not depending on whether their labels commute in $L^\prime$. There are $2t$ labels $\bar l_1, \dots, \bar l_{2t}$ that generate $L$. In a given manifold $2-\chi$ nontrivial cycles that generate the homology group can be chosen. Each of them gives rise to $2t$ string operators with labels $l_i$, and the total $2t(2-\chi)$ string operators generate $N(\gauge)/\stab^\prime$, so that the number of encoded qubits is $k= t(2-\chi)$.
Instead of strings, in general one can consider string-nets, where the strings meet at branching points \cite{Bombin_CC2d}. The allowed branchings are those in which the product of all the labels involved is trivial. String-nets do not play a significant role in closed manifolds, but can be essential when the manifold has boundaries \cite{Bombin_CC2d}.
Let us check the criterion for topological codes of section \ref{sec:topological_codes} using the string operator structure. To this end a notion of connectedness is needed, but this can be obtained from the local generators of $\mathcal S^\prime$: two regions or sets of qubits are disconnected from each other if no local generator has support on both of them at the same time. Let $Q_O$ denote the support of an operator $O$. Then if $O\in N(\stab)$ and $Q_O=Q_1\sqcup Q_2$ with $Q_1$ and $Q_2$ disconnected, it follows that $O=O_1O_2$ and $Q_1=Q_{O_1}$, $Q_2=Q_{O_2}$ for some $O_1,O_2\in N(\stab)$. As for simple connectedness, it is easier to introduce a wider notion of `trivial' region. A set of qubits $Q$ forms a trivial region when there exist string operators $O_i$ that generate $N(\gauge)/\stab^\prime$ and such that $Q_{O_i}\cap Q=\emptyset$. If $O\in N(\stab)$ is such that $Q_O$ is a trivial region then $[O,O_i]=0$ for the corresponding string generators $O_i$ and thus $O\in\gauge$. The criterion is satisfied within this language: if $O\in N(\stab)$ and $Q_O=Q_1\sqcup \dots\sqcup Q_t$ with the $Q_i$ pairwise disconnected and each of the $Q_i$ a trivial region, then $O\in\gauge$.
\subsection{Anyons}
Topological codes can be described in terms of string operators because they describe ground states of topologically ordered quantum models, that is, systems with emergent abelian anyons \cite{Kitaev_nonAbelian}. Anyons are localized quasiparticles with unusual statistics. String operators represent quasiparticle processes, and their commutation rules are directly related to the topological interactions of the anyons. Moreover, when an open-ended string operator is applied to the ground state, a pair of anyons is created on the ends of the string. The labels of the created anyons are those of the string, so that string labels are also quasiparticle labels.
From the perspective of the code, quasiparticles correspond to error syndromes, signaling a chain of errors along the string \cite{Dennis_TQM}. Thus, keeping track of errors in a topological code, recall section \ref{sec:topological_memory}, amounts to keeping track of the worldlines of these quasiparticles. Error correction will succeed if the worldlines are correctly guessed up to homology \cite{Dennis_TQM}.
\subsection{Boundaries}\label{sec:boundaries}
From a practical perspective, codes that are local in a closed 2-manifold like a torus are not very convenient. Instead, one would prefer to have planar codes. Thus, a way to create nontrivial topologies in the plane is needed, and this is exactly what is gained by introducing boundaries.
In a given code, different types of boundaries are possible. To start with, one can always consider random, structureless boundaries. The introduction of such boundaries will typically produce many local encoded qubits along the boundary. But these qubits are unprotected, and thus essentially useless.
More interestingly, boundaries with well-defined properties and non-local encoded qubits are also possible \cite{Dennis_TQM, Bombin_CC2d}. The defining property of such boundaries is that strings $s$ with labels from a certain subset $M\subset L$ are allowed to end in them, see Fig.~\ref{fig:boundary}, in the sense that $O_s$ belongs to the normalizer $N(\stab)$. In other words, the introduction of the boundary changes the notion of closed string, by allowing on the boundary loose ends of strings of suitable types. The notion of boundary string also changes. Two strings $s,s^\prime$ of the same type that, together with boundaries in which they can end, form the boundary of a given region, as $a$ and $b$ in Fig.~\ref{fig:boundary}, produce equivalent operators so that $O_s O_s^\prime\in\stab^\prime$.
\begin{figure}
\caption{ This figure illustrates boundaries on 2D topological codes, which are displayed as dashed thick lines. The strings $a$ and $b$, together with the boundaries, enclose a region and thus produce equivalent operators. The string $c$ can end in the boundary because it can be decomposed in two strings that can end on it. The strings $d$ and $e$ enclose regions and thus produce stabilizer operators. The string $f$ produces a stabilizer element or an undetectable error, depending on whether its label is allowed in the boundary.}
\label{fig:boundary}
\end{figure}
Notice that $M$ should be a subgroup of $L$, because if strings with labels $l,l^\prime\in M$ can end in the boundary then so can strings with label $ll^\prime$ by splitting before reaching the boundary, as string $c$ in Fig. \ref{fig:boundary}. Also, any two labels $l,l^\prime\in M$ must commute in $L^\prime$. Otherwise, the stabilizer would contain anticommuting elements, which is not possible. This is illustrated by the strings $d$ and $e$ of Fig.~\ref{fig:boundary}, which must produce commuting operators. Finally, $M$ should be maximal in the sense that for any $l \not\in M$ there exists some $l^\prime\in M$ such that $l$ and $l^\prime$ anticommute in $L^\prime$. Otherwise, according to the rules stated above, an $l$-string $s$ that surrounds an $M$-hole, like $f$ in Fig. \ref{fig:boundary}, produces an operator $O_s$ that has to belong to $N(\stab)-\stab^\prime$, because it is not a boundary, but for which there is no other string $s^\prime$ such that $\sset{O_s,O_s^\prime}=0$. This is a contradiction. It is in fact possible to relax this last maximality condition, but at the cost of getting a boundary between two topological codes, rather than a boundary between a code and the `vacuum'.
Remarkably, boundaries in topological codes are directly related to anyon condensation in the corresponding topologically ordered models \cite{Bombin_condensate}. It will become apparent in section \ref{sec:subsystem_boundaries} that this has important consequences, because only bosons can condense and this forbids certain types of boundaries.
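For example, in surface codes, writing the three nontrivial labels in a standard anyon notation as $e$, $m$ and $em$, the two known kinds of boundaries \cite{Dennis_TQM} correspond to the subgroups $M=\sset{1,e}$ and $M=\sset{1,m}$ of condensable bosons, whereas $M=\sset{1,em}$ does not yield a boundary because $em$ is a fermion and cannot condense.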
\section{A family of topological subsystem codes}\label{sec:subsystem_topological_codes}
The subsystem codes introduced in this section have their origin in a spin-1/2 quantum model that shows topological order \cite{Bombin_2body}. The Hamiltonian of the model is a sum of 2-local Pauli terms that here will become the gauge generators. The Pauli symmetries of this Hamiltonian were already described in \cite{Bombin_2body}, and thus to some extent the codes were already implicitly considered in that work. Here we explicitly work out all the details from the code perspective. In addition, diverse aspects that are important in practice are explored, such as the possibility of introducing boundaries and the computational power of code deformations.
\subsection{Lattice and gauge group}\label{sec:subsystem_lattice}
The family of codes $\code_\Lambda$ of interest is parametrized by tripartite triangulations $\Lambda$ of closed 2-manifolds, not necessarily orientable. That is, each code $\code_\Lambda$ is obtained from a 2-dimensional lattice $\Lambda$ such that (i) all faces $f\in F$ are triangular and (ii) the set of vertices $V$ can be separated into three disjoint sets, in such a way that no edge $e\in E$ connects two vertices from the same set. Fig.~\ref{fig:lattice}(a) shows an example. Alternatively, $\Lambda$ is the dual of a 2-colex \cite{Bombin_CCto}. Following the notation used in previous works, the three sets of vertices are colored as red, green and blue. The faces of $\Lambda$ will be simply called triangles.
\begin{figure}
\caption{ (a) Part of a 2D lattice $\Lambda$ with triangular faces and 3-colorable vertices. (b) The lattice $\bar \Lambda$ derived from $\Lambda$. It is obtained by separating the triangles of $\Lambda$ and adding one face per edge and vertex of $\Lambda$. Its edges are classified in three types. Solid edges are $Z$-edges, dashed edges are $Y$-edges and dotted edges are $X$-edges. There is one qubit per vertex and the generators of the gauge group are related to edges. They are 2-local operators of the form $XX$, $YY$ or $ZZ$, depending on the edge type.}
\label{fig:lattice}
\end{figure}
The first step in the construction of $\code_\Lambda$ is to derive a new lattice $\bar\Lambda$ from $\Lambda$, as exemplified in Fig.~\ref{fig:lattice}(b). In going from $\Lambda$ to $\bar\Lambda$, the triangles of $\Lambda$ separate from each other giving rise to new faces. In particular, each of the edges and vertices of $\Lambda$ contributes a face to $\bar\Lambda$. The edges of $\bar\Lambda$ are divided into three subsets, $\bar E= \bar E_X\sqcup\bar E_Y\sqcup\bar E_Z$. In Fig.~\ref{fig:lattice}(b), $X$-edges are dotted, $Y$-edges are dashed and $Z$-edges are solid. The $Z$-edges form the triangles of $\bar\Lambda$. Each edge in $\Lambda$ contributes an $X$-edge and a $Y$-edge, in such a way that no two $X$-edges or two $Y$-edges meet. There are thus two ways to choose the sets of $X$ and $Y$ edges.
The definition of $\code_\Lambda$ is now at hand. First, physical qubits correspond to the vertices of $\bar\Lambda$. Second, the gauge group is $\gauge_\Lambda:=\langle i \b 1\rangle\cdot \langle G_e\rangle_{e\in E}$, with generators $G_e$ related to the edges $e$ of $\bar\Lambda$. These take the form $G_e := \sigma_v\sigma_{v^\prime}$ for $e\in \bar E_\sigma$, $\sigma = X,Y,Z$, where $v, v^\prime$ are the vertices connected by $e$. Thus, the generators are 2-local. This is an improvement with respect to previously known topological stabilizer codes, which have generators of weight at least 4.
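To make the 2-locality concrete, the following sketch (an illustrative toy representation of $\bar\Lambda$, not part of the construction; edges are assumed to be given as triples $(v,w,\text{type})$) builds each generator $G_e$ as a weight-2 Pauli string:
\begin{verbatim}
# Minimal sketch (assumed toy representation of bar-Lambda, not from
# the paper): each edge is (v, w, 'X'|'Y'|'Z'), qubits sit on vertices.
def gauge_generators(edges, n_qubits):
    """Return each G_e = sigma_v sigma_w as a string over {I,X,Y,Z}."""
    gens = []
    for v, w, pauli in edges:
        g = ['I'] * n_qubits
        g[v] = g[w] = pauli          # weight-2 generator
        gens.append(''.join(g))
    return gens

# One triangle of bar-Lambda: three Z-edges on qubits 0, 1, 2.
print(gauge_generators([(0, 1, 'Z'), (1, 2, 'Z'), (2, 0, 'Z')], 3))
# ['ZZI', 'IZZ', 'ZIZ'] -- their product is the identity
\end{verbatim}
The printed triangle also illustrates the fact, used again in the statistical mapping below, that the product of the three $Z$-edge generators of any triangle of $\bar\Lambda$ is the identity.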
\subsection{String operators}\label{sec:subsystem_string_operators}
This section describes $N(\gauge_\Lambda)$ and its center $\stab^\prime_\Lambda= \gauge_\Lambda\cap N(\gauge_\Lambda)$ in terms of the string operator framework of section \ref{sec:string_operators}, which is valid for these codes. It turns out in particular that $L^\prime\simeq\pauli_1$, as in surface codes. In section \ref{sec:subsystem_boundaries} it will be apparent, however, that the nature of the strings in $\code_\Lambda$ codes differs from that of the strings in surface codes. These differences are not captured by $L^\prime$, which indeed does not contain all the information about the corresponding topological order. The details of the statements made in this section can be found in appendix \ref{app:gauge}.
We first seek a graphical representation of $N(\gauge_\Lambda)$. Take any subgraph $\gamma$ of the graph of $\bar\Lambda$, such as the one in Fig.~\ref{fig:normalizer}(b), that has at each of its vertices one of the configurations of Fig.~\ref{fig:normalizer}(a). This graph $\gamma$ produces a Pauli operator $O_\gamma=\bigotimes_v \sigma_v$ with $\sigma_v=\b 1, X,Y,Z$ according to the correspondence of Fig.~\ref{fig:normalizer}(a). Observe that such operators $O_\gamma$ belong to $N(\gauge_\Lambda)$. Up to a phase, the correspondence between the elements of $N(\gauge_\Lambda)$ and graphs is one to one.
\begin{figure}
\caption{ (a) The four possible configurations at a given vertex for allowed subgraphs of $\bar\Lambda$. A different Pauli operator corresponds to each of them. (b) A subgraph $\gamma$ (thick lines) of a lattice $\bar\Lambda$ (thin lines), obtained from a regular triangular lattice $\Lambda$ (lightest lines).}
\label{fig:normalizer}
\end{figure}
These graphs either contain all the edges of a triangle or none of them. Thus, each graph $\gamma$ determines a subset of triangles $T_\gamma$ of the original lattice $\Lambda$. In Figs.~\ref{fig:normalizer}-\ref{fig:plaquette_ops} this subset appears shaded. Notice that the number of triangles of $T_\gamma$ meeting at each vertex is even. In fact, any subset of triangles that meets this property can be realized as $T_\gamma$ for some $\gamma$.
String operators are obtained from string-like graphs such as the one in Fig.~\ref{fig:string_ops}. Notice in the figure how triangles can be paired in a specific way. These pairs of triangles always connect vertices of the same color in the original lattice $\Lambda$. This allows strings to be classified accordingly, with labels $\mathrm{r}$, $\mathrm{g}$, $\mathrm{b}$. It is a simple exercise to check that crossing string operators commute if they have the same color and anticommute otherwise, in accordance with $L^\prime$. String-nets can be formed by introducing branching points where three strings of different color meet.
\begin{figure}
\caption{ A string operator as a subgraph $\gamma$ of $\bar\Lambda$, displayed in thick lines. Its triangles come in pairs, each of them connecting vertices of the same color in $\Lambda$.}
\label{fig:string_ops}
\end{figure}
\begin{figure}
\caption{ Two examples of string operators $S_v^c$ related to vertices $v$ in $\Lambda$. The string $b$ shares its color with the vertex that it encloses, whereas for $a$ the two colors differ, producing a more involved operator.}
\label{fig:plaquette_ops}
\end{figure}
The group $\stab^\prime_\Lambda$ is generated by small string operators related to vertices $v$ of the original lattice $\Lambda$. In particular, let us set $S_v^c=O_\gamma$, $c=\mathrm{r}, \mathrm{g}, \mathrm{b}$, with $\gamma$ the $c$-colored string going around $v$, as shown in Fig.~\ref{fig:plaquette_ops}. Then $\stab'_\Lambda=\langle i\b 1 \rangle\cdot\langle S_v^c \rangle_{v,c}$. These generators are only subject to the relations \begin{equation}\label{constraints_stabilizer} \prod_c S_v^c \propto \b 1,\qquad \prod_v S_v^c \propto \b 1, \end{equation}
where the first product runs over the three colors and the second over the vertices of $\Lambda$. As a consequence, the rank of $\stab_\Lambda$ is $s=2|V|-2$. Since the number of encoded qubits is $k=2-\chi$, it follows that the number of gauge qubits is $r=n-k-s= 3|F|-2|V|+\chi=2|F|-\chi$, showing that gauge qubits see the global structure of the manifold.
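As a concrete instance of this counting, take any triangulation $\Lambda$ of the torus. In that case $\chi=0$ and, since each of the $|F|$ triangular faces has three edges and each edge is shared by two faces, $2|E|=3|F|$; the Euler formula $|V|-|E|+|F|=\chi$ then gives $|F|=2|V|$, so that
\begin{equation*}
n=3|F|=6|V|,\qquad s=2|V|-2,\qquad k=2,\qquad r=4|V|,
\end{equation*}
which indeed satisfy $n=k+s+r$.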
What can string operators say about the code distance $d$? Given an operator $O_\gamma\in N(\gauge)$, consider the subset $E^\prime$ of the edges of $\gamma$ whose elements are all the $X$- and $Y$-edges of $\gamma$ together with one of the three $Z$-edges of each triangle in $T_\gamma$. Then $G=\prod_{e \in E^\prime} G_e\in\gauge_\Lambda$ and it is easy to check that $|O_\gamma G|=|T_\gamma|$. Therefore $d\leq d_T$, with $d_T$ the minimal length, in terms of the number of triangles, among nontrivial closed strings. A lower bound for $d$ is given in the next section.
\subsection{Homology of errors}
This section offers a homological description of error correction for $\code_\Lambda$. The main idea is that the error syndrome can be identified with the boundary of errors, considered as paths on the surface. Then error correction succeeds if these paths can be guessed up to homology. It is worth noting that the notation and results in this section will not be used again.
To fix notation, we recall first some basic notions. Let $\Delta$ denote the additive group of $\mathbf{Z}_2$ 1-chains in $\Lambda$. Its elements are sets of edges $\delta\subset E$ and addition is given by $\delta+\delta^\prime= (\delta\cup \delta^\prime) - (\delta\cap \delta^\prime)$. The boundary $\partial \delta$ of $\delta\in \Delta$ is the set of vertices in which an odd number of edges from $\delta$ meet. The elements $\delta\in \Delta$ with $\partial \delta = 0$ are called cycles and form a subgroup $Z\subset \Delta$. Boundaries form a subgroup $B\subset Z$, generated by elements of the form $\delta=\sset {e_1,e_2,e_3}$ with $e_i$ the three edges of a given triangle. The first $\mathbf{Z}_2$ homology group of $\Lambda$ is $H_1:= Z/B\simeq \mathbf{Z}_2^h$ with $h=2-\chi$ the number of independent nontrivial cycles of the closed surface formed by $\Lambda$. Two chains $\delta,\delta^\prime \in \Delta$ are said to be equivalent up to homology, $\delta\sim \delta^\prime$, if $\delta+\delta^\prime\in B$.
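These definitions translate directly into a few lines of code; the following sketch (an illustrative toy implementation with assumed data structures, not part of the construction) represents a chain as a set of edges, each edge being the two-element set of its endpoints, and computes its boundary as the set of vertices met by an odd number of edges:
\begin{verbatim}
# Minimal sketch (assumed toy data structures, not from the paper):
# a 1-chain is a set of edges; an edge is a frozenset of its endpoints.
from collections import Counter

def boundary(chain):
    counts = Counter(v for edge in chain for v in edge)
    return {v for v, c in counts.items() if c % 2 == 1}

path = {frozenset({'a', 'b'}), frozenset({'b', 'c'})}
print(boundary(path))                            # boundary is {a, c}
print(boundary(path | {frozenset({'c', 'a'})}))  # empty set: a cycle
\end{verbatim}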
Consider a morphism $\funcion {f_\mathrm{r}} {\pauli_n}{\Delta}$ defined by $f_\mathrm{r} (i)= \emptyset$ and the following action on single qubit operators $X_{\bar v}$, $Y_{\bar v}$, where $\bar v\in \bar V$. $X_{\bar v}$ anticommutes exactly with two operators of the form $S_v^\mathrm{r}$, $v\in V$. The corresponding two vertices are connected by an edge $e\in E$, and $f_\mathrm{r} (X_{\bar v}) = \sset e$. $f_\mathrm{r} (Y_{\bar v})$ is defined analogously. It is easy to check that $f_\mathrm{r}[\gauge_\Lambda] = B$ and that for any $O\in \pauli_n$ the set $\partial f_\mathrm{r}(O)$ contains those vertices $v\in V$ such that $\sset{O,S_v^\mathrm{r}}=0$. Moreover, if $\gamma$ is a string then $f_\mathrm{r}(O_\gamma)\in Z$ and if $\gamma$ is red $f_\mathrm{r}(O_\gamma)\in B$. Indeed, if $\sset{\gamma_i}$ is the set of red strings, then $f_\mathrm{r}$ gives an isomorphism $N(\stab_\Lambda)/(\gauge_\Lambda\cdot\langle O_{\gamma_i}\rangle_i) \simeq H_1$.
Consider in addition an analogous morphism $f_\mathrm{b}$ with blue color playing the same role as red in $f_\mathrm{r}$. Then for any $O\in \pauli_n$ we have $O\in \gauge_\Lambda$ if and only if $f_c(O)\in B$ for $c=\mathrm{r}, \mathrm{b}$. Similarly, $O\in N(\stab_\Lambda)$ if and only if $f_c(O)\in Z$ for $c=\mathrm{r}, \mathrm{b}$. This shows that error correction will succeed as long as errors can be guessed up to homology. In detail, suppose that the code suffers a Pauli error $O$. The error syndrome can be expressed in terms of the two sets $\partial f_\mathrm{r}(O), \partial f_\mathrm{b}(O)\subset V$. Suppose that an attempt is made to correct the errors by applying some $O^\prime \in \pauli_n$ such that $\partial f_c(O)=\partial f_c(O^\prime)$ for $c=\mathrm{r}, \mathrm{b}$. Then error correction succeeds if and only if $O^\prime O\in \gauge$, that is, if and only if $f_c(O)\sim f_c(O^\prime)$ for $c=\mathrm{r}, \mathrm{b}$.
Although error correction can be expressed in these homological terms, this is really not the most natural thing to do, basically because it involves an arbitrary choice of two of the three available colors. In this regard, notice that not any set of edges $\delta\in \Delta$ can be obtained from an operator $O\in \pauli_n$ as $\delta = f_\mathrm{r}(O)$, and that the cardinalities of $f_\mathrm{r}(O)$ and $f_\mathrm{b}(O)$ by no means are enough to compute $|O|$. This makes a direct translation of the ideas used in \cite{Dennis_TQM} for error correction in surface codes unfeasible.
In order to give a lower bound for the distance $d$ of the code, the definition of the mappings $f_c$, $c=\mathrm{r},\mathrm{b}$ must be modified. We set $f_c^\prime(\bigotimes_{\bar v} \sigma_{\bar v}):=\sum_{\bar v} f_c^\prime(\sigma_{\bar v})$, where $\sigma_{\bar v} =\b 1_{\bar v}, X_{\bar v},Y_{\bar v}, Z_{\bar v}$, and we fix $f_c^\prime(\b 1_{\bar v}):=\emptyset$, $f_c^\prime(X_{\bar v}):=f_c(X_{\bar v})$, $f_c^\prime(Y_{\bar v}):=f_c(Y_{\bar v})$ and $f_c^\prime(Z_{\bar v})$ is defined in analogy with $f_c(X_{\bar v})$. The new mappings $f_c^\prime$ are not group morphisms, but they do keep the good properties of the $f_c$ mappings listed above. And they satisfy $|O|\geq |f_c^\prime(O)|$, which immediately leads to the bound $d\geq d_L$ with $d_L$ the minimal length, in terms of the number of edges, among nontrivial closed loops in $\Lambda$.
\subsection{Syndrome extraction}
As indicated in section \ref{sec:topological_memory}, in a topological quantum memory one has to keep track of errors by performing round after round of syndrome extraction. This raises the question of how fast and simply the stabilizer generators of a code $\code_\Lambda$ can be measured. The faster the measurements, the fewer errors will be produced in the meantime, and the simpler they are, the fewer faulty gates they will involve. Of course, what fast and simple really mean will depend on particular implementations, that is, on the basic operations at our disposal.
\begin{figure}
\caption{ The proposed ordering for the measurements of the edge operators. It does not depend on the particular geometry of the lattice $\Lambda$ because it is dictated by the coloring of its vertices.}
\label{fig:measurements}
\end{figure}
To keep the discussion general, take gauge generator measurements to be the basic components of the syndrome extraction. At each time step the measurement of any subset of generators $\sset{G_i}$ is allowed as long as each physical qubit only appears in one of the $G_i$. Then, in any code $\code_\Lambda$ it is possible to cyclically measure all the stabilizer generators by performing six rounds of measurements. The time step at which each generator is to be measured is indicated in Fig.~\ref{fig:measurements}. Notice that $Z$-edges are measured at even times and $X$- and $Y$-edges at odd times. From time steps 1-3 the eigenvalues of the operators $S_v^c$ at blue vertices are obtained, from steps 3-5 those at red vertices, and from steps 5, 6 and 1 (this last one in the subsequent cycle) those at green vertices. It is not clear whether this number of time steps is optimal, since in principle 4 or 5 could be enough. As a comparison, the number of steps needed for Bacon-Shor codes is 4. In this sense the 6 steps are not bad, taking into account that the codes $\code_\Lambda$ do not benefit from the separation of gauge and stabilizer generators into $X$-type and $Z$-type, as Bacon-Shor codes do.
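To make the scheduling constraint explicit, the following sketch (purely illustrative; it does not reproduce the six-step schedule of Fig.~\ref{fig:measurements}, which is fixed by the vertex coloring of $\Lambda$) greedily groups 2-local measurements into time steps so that no qubit is acted on twice within the same step:
\begin{verbatim}
# Minimal sketch (illustrative only): greedy grouping of 2-local
# measurements so that no qubit appears twice in the same time step.
def greedy_schedule(edges):
    rounds = []
    for e in edges:
        for r in rounds:
            if all(not (set(e) & set(f)) for f in r):
                r.append(e)
                break
        else:
            rounds.append([e])
    return rounds

edges = [(0, 1), (1, 2), (2, 0), (0, 3), (1, 4), (2, 5)]
for t, r in enumerate(greedy_schedule(edges), start=1):
    print("step", t, r)
\end{verbatim}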
\subsection{The problem of boundaries}\label{sec:subsystem_boundaries}
This section shows why it is not possible to introduce boundaries with the properties discussed in section \ref{sec:boundaries}. This has important practical consequences, since there is no other known way to introduce a nontrivial topology in a completely planar code. Notice however that we can always flatten a manifold to get a `planar' code, at the price of doubling the density of physical qubits in the surface. Also, the absence of boundaries makes the use of code deformations less practical, although they are still possible, as shown in section \ref{sec:subsystem_deformation}. In any case, this leaves open the question of whether other kinds of interesting boundaries can be introduced.
\begin{figure}
\caption{A hypothetical geometry for a $\code_\Lambda$ code with three holes of different colors. According to the properties of boundaries, the string operators $a$ and $b$ are equivalent up to stabilizer elements. This is a contradiction because, due to the way they cross, they anticommute.}
\label{fig:no_boundaries}
\end{figure}
The existence of boundaries leads to the following contradiction. According to the properties listed in section \ref{sec:boundaries}, there are three potential kinds of boundaries, one per color. Each of them only allows strings of its color to end on it. Clearly either all the boundaries can be constructed or none of them. Thus, suppose that the three of them are allowed and consider a geometry as the one in Fig.~\ref{fig:no_boundaries}, with three holes, one of each color. Take a string-net $\gamma$ that connects the three holes, as in the figure, and deform it to another string-net $\gamma^\prime$. It follows from the properties of string operators and boundaries that $O_\gamma O_{\gamma^\prime}\in \stab^\prime$, but also that $\sset{O_\gamma,O_{\gamma^\prime}}=0$ since they cross at a single point where they have different color. This is not possible.
In section \ref{sec:subsystem_string_operators} it was noted that the string label group $L^\prime$ is the same in surface codes and the subsystem codes. Since according to section \ref{sec:boundaries} the set of allowed boundaries is dictated by $L^\prime$, it could be expected that surface codes would not have boundaries either. However, this is not the case: two kinds of boundaries can be constructed in surface codes \cite{Dennis_TQM}. The point is that there is a key difference between the two families of codes: in surface codes the three types of strings are not equivalent in any sense, so that the previous reasoning is not valid.
At a deeper level, this difference between the codes has its origin in the difference between the corresponding topological orders. Indeed, $L^\prime$ does not encode all the information about the properties of anyons. In surface codes two of the quasiparticle types are bosons, and the third a fermion \cite{Kitaev_nonAbelian}, whereas in the subsystem codes the three are fermions \cite{Bombin_2body}. The connection between anyon condensation and boundaries is thus crucial: nice boundaries cannot be introduced in these topological subsystem codes because all the string operators are related to fermions, which cannot condense.
\subsection{Code deformation}\label{sec:subsystem_deformation}
This section explores the potential of code deformations in the topological subsystem codes $\code_\Lambda$. We show how initializations and measurements of individual logical qubits in the $X$ and $Z$ bases are possible through certain topology-changing processes on the manifold. We also show that CNot and Hadamard gates can in principle be implemented through continuous deformations of the manifold, but not in a practical way.
To begin with, a manifold and a set of logical operators $\hat X_1, \hat Z_1, \dots, \hat X_k, \hat Z_k$ must be selected. We choose an $h$-torus, that is, a sphere with $h$ handles. Codes $\code_\Lambda$ on such a manifold provide $2h$ logical qubits, but only $h$ of them will be used, with the choice of logical operators indicated in Fig.~\ref{fig:deformation_A}(a). The rest of the logical qubits are considered gauge qubits.
\begin{figure}
\caption{ (a) A sphere with $h$ handles can encode $2h$ qubits, but we choose to encode just $h$. The logical $\hat X_i, \hat Z_i$ operators correspond to the strings in the figure. Red strings give $X$'s and blue strings $Z$'s. (b) When the topology of the surface changes as indicated here, two qubits are introduced in the code. They are initialized in a fixed way. In particular, the string operator in the figure is a boundary before the deformation takes place and thus has a fixed value. This value is not changed by the deformation because it occurs in a different part of the code.}
\label{fig:deformation_A}
\end{figure}
When a new handle is introduced, a logical qubit is created and initialized in a definite way, see Fig.~\ref{fig:deformation_A}(b). In the figure the two surfaces are supposed to be already connected, so that a handle is really created. There are two ways to introduce a new handle in a surface such as that of Fig.~\ref{fig:deformation_A}(a), depending on whether the process of Fig.~\ref{fig:deformation_A}(b) occurs `inside' or `outside' the surface. In the former case the new qubit is initialized in a $\hat Z$ eigenstate and in the latter in an $\hat X$ eigenstate. Whether the initialization occurs in the $Z$ or $X$ basis depends on which of the two string operators of the new qubit was a boundary initially. This operator has its eigenvalue fixed before the deformation occurs, and during the process it is topologically protected at all times \cite{Bombin_deformation}. The particular sign of the eigenstate depends on the arbitrary sign choices for the logical operators and $\stab_\Lambda$. If the initialization process is reversed, it yields a measurement in the corresponding basis \cite{Bombin_deformation}.
It is always possible to detach a qubit, a torus, from the rest of the code. This does not involve any measurement, because the strings running along the cutting line are boundaries \cite{Bombin_deformation}. Similarly, there is no problem in attaching a torus to the code to add a logical qubit. But once a logical qubit is isolated, it can undergo code deformations independently. Consider a mapping that exchanges the two principal cycles of the torus and shifts the lattice a bit if necessary to adjust the color correspondence, for example by rotating the torus. Such a mapping can exchange $\hat X$ and $\hat Z$ operators, which amounts to a Hadamard gate. There exists an important drawback, though. This deformation cannot be realized in 3D without producing self-intersections of the surface. Still, it is conceptually interesting that the Hadamard gate can be obtained from purely geometric code deformations because this is not possible in surface or color codes, where $X$- and $Z$-type operators correspond to different types of string operators and a transversal Hadamard gate must be added to the picture \cite{Dennis_TQM}. Because color is just a matter of location in the lattice, strings of different colors are equivalent up to lattice translations. This is in essence what makes the geometric implementation of the Hadamard gate possible.
\begin{figure}
\caption{ The deformation that produces a controlled phase gate. (a) The code before the deformation takes place and a particular string. (b) The deformation moves one of the `holes' in the top part around the other, as indicated by the solid line with an arrow. To recover the original shape, as indicated by the dashed line, the two `tubes' have to overlap unavoidably. (c) After the deformation, the string operator has been mapped to the product of these two string operators.}
\label{fig:deformation_B}
\end{figure}
A controlled phase gate $\b 1 - \frac 1 2 (1-Z_i)(1-Z_j)$ on a pair of logical qubits $i,j$ can be implemented through a `continuous' deformation of the code. The process is indicated in Fig.~\ref{fig:deformation_B}. It follows from the way in which the logical operators evolve \cite{Bombin_deformation} that the complete process amounts to a controlled phase gate, up to some signs in the final logical operators that depend on the choice of $\stab_\Lambda$. A CNot gate can then be obtained by composing this gate with Hadamard gates. But again, such a code deformation requires overlapping the surface of the code with itself, see Fig.~\ref{fig:deformation_B}(b).
\section{Statistical physics of error correction}
In \cite{Dennis_TQM} an interesting connection between error correction thresholds for surface codes and phase transitions in 2D random bond Ising models was developed. Similar mappings exist also for color codes \cite{Katzgraber_3body}, in this case to 2D random 3-body Ising models. In both cases, the CSS structure of these codes is an important ingredient in the constructions: they are subspace codes with $\stab = \stab_X\stab_Z$ in such a way that $\stab_\sigma$ is generated by products of $\sigma$ operators, $\sigma = X, Z$. To take full advantage of this, the noise channel for each qubit must be a composition of a bit-flip channel $\channel_{\text {bf}}(p):=\sset{(1-p)^\frac 1 2 \b 1, p^\frac 1 2 X}$ and a phase flip channel $\channel_{\text{pf} }(p):=\sset{(1-p)^\frac 1 2 \b 1, p^\frac 1 2 Z}$.
There are two main obstacles to constructing a similar mapping for the codes $\code_\Lambda$. The first is that they are subsystem codes rather than subspace codes. The second is that the gauge group cannot be separated into an $X$ and a $Z$ part. As we show below, both can be overcome.
\subsection{Mapping to a statistical model}
Rather than directly considering the codes $\code_\Lambda$, this section deals with the general mapping from any given stabilizer subsystem code to a suitable classical statistical model. For simplicity, each qubit in the code is supposed to be subject to a depolarizing channel $\channel_{\text{dep}}(p):=\sset{(1-p)^\frac 1 2 \b 1, (p/3)^\frac 1 2 X, (p/3)^\frac 1 2 Y, (p/3)^\frac 1 2 Z}$, with $p$ the error probability, but more general channels are possible within the same framework.
To build the classical Hamiltonian model, the first step is the choice of a set of generators $\sset{G_i}_{i=1}^l$ of $\gauge/\langle i \b 1\rangle$. These generators can be captured in a collection of numbers $g^\sigma_{ij}=0,1$ defined by \begin{equation}\label{g_ij} G_i\sigma_j=(-1)^{g_{ij}^\sigma}\sigma_jG_i, \end{equation} with $\sigma = X,Y,Z$, $i=1,\dots, l$, $j=1,\dots, n$. Attach a classical Ising spin $s_i=\pm 1$ to each of the generators $G_i$. The family of Hamiltonians of interest is \begin{equation}\label{Hamiltonian} H_{\tau} (s) := - J \sum_{\sigma=X,Y,Z} \,\sum_{j=1}^{n}\, \,\tau_j^\sigma \, \prod_{i=1}^l \,s_i^{g_{ij}^\sigma}, \end{equation} with parameters $\tau_j^\sigma= \pm 1$ such that $\tau_j^X\tau_j^Y\tau_j^Z=1$. The coupling $J>0$ is introduced to follow conventions. Notice that codes with local gauge generators give rise to local Hamiltonian models. Since $g_{ij}^X+g_{ij}^Y+ g_{ij}^Z=0 \mod 2$ the Hamiltonian \eqref{Hamiltonian} can be rewritten as \begin{equation}\label{Hamiltonian_alt} H_{\tau} (s) = n- \sum_{j} (1+ \tau_j^X \prod_i s_i^{g_{ij}^X})(1+ \tau_j^Y \prod_i s_i^{g_{ij}^Y}). \end{equation} The partition function for these Hamiltonians is \begin{equation}\label{partition} Z(K,\tau) = \sum_{s} e^{-\beta H_{\tau}(s)} \end{equation} with $K:=\beta J$ and $\beta$ the inverse temperature.
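A direct, if inefficient, way to evaluate \eqref{Hamiltonian} is sketched below (an illustrative toy implementation with assumed data structures, not part of the mapping itself; each $G_i$ is represented as a string over $\{I,X,Y,Z\}$, and two single-qubit Paulis anticommute exactly when they differ and neither is the identity):
\begin{verbatim}
# Minimal sketch (illustrative, assumed data structures): the numbers
# g_ij^sigma and the energy H_tau(s) of the classical spin model.
def g(pauli_on_j, sigma):
    # g_{ij}^sigma = 1 iff the j-th factor of G_i anticommutes with sigma
    return 0 if pauli_on_j in ('I', sigma) else 1

def energy(gens, tau, s, J=1.0):
    # gens: list of Pauli strings; tau[sigma][j] = +-1; s[i] = +-1
    n = len(next(iter(tau.values())))
    E = 0.0
    for sigma in 'XYZ':
        for j in range(n):
            prod = 1
            for i, G in enumerate(gens):
                if g(G[j], sigma):
                    prod *= s[i]
            E -= J * tau[sigma][j] * prod
    return E

gens = ['XXI', 'IZZ']
tau = {'X': [1, 1, 1], 'Y': [1, 1, 1], 'Z': [1, 1, 1]}  # tau_E for E = 1
print(energy(gens, tau, [1, 1]), energy(gens, tau, [1, -1]))
\end{verbatim}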
The goal is to express the class probabilities $p(\bar E)$, $E\in\pauli_n$, in terms of the partition function \eqref{partition} for a suitable $\tau$. Let $\tau=\tau_E$ be such that \begin{equation}\label{tau_E} E\propto \bigotimes_j X^{\frac {1-\tau_j^Y}2} Y^{\frac{1-\tau_j^X}2}. \end{equation} Similarly, for each $G\in \gauge$ choose any $s=s_G$ such that
\begin{equation}\label{mu_G} G\propto G_1^{\frac {1-s_1} 2}\cdots G_l^{\frac {1-s_l} 2}. \end{equation} We write $s^{\prime\prime}=s^\prime s$ if $s^{\prime\prime}_j =s^\prime_j s_j$. Then $s_Gs_{G^\prime}=s_{GG^\prime}$ and for any spin configuration $s$, $E\in\pauli_n$ and $G\in \gauge$ it can be checked that \begin{equation}\label{A_property} H_{\tau_{EG}}(s)=H_{\tau_E}(s_G s), \end{equation}
In the depolarizing channel the probability for a Pauli error $E$ is $p(E)=(p/3)^{|E|}(1-p)^{n-|E|}$. It may be written as \begin{equation}\label{A_probability} p(E)= c_p^{-n} e^{-\beta_p H_{\tau_E}(s_\b 1)}, \end{equation} where $c_p:= e^{3K_p} +3e^{-K_p}$ and $\beta_p:=K_p/ J$ with \begin{equation}\label{Nishimori} 3e^{-4K_p} := \frac p {1-p}. \end{equation} The desired connection follows from \eqref{A_property} and \eqref{A_probability}, which give \begin{equation}\label{probability_partition} p(\bar E)= \sum_{G\in \gauge} p(EG)= \frac 1 {2^{w} c_p^{n}}Z(K_p,\tau_E), \end{equation} where $w$ is the number of redundant generators of $\gauge$, that is, $w=l-l^\prime$ with $l^\prime$ the rank of $\gauge/\langle i\b 1\rangle$.
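As a simple sanity check of \eqref{Nishimori}, note that it can be rewritten as $K_p=\tfrac 1 4 \ln \tfrac{3(1-p)}{p}$. For $p=3/4$, where the channel applies one of $\b 1, X, Y, Z$ uniformly at random, this gives $K_p=0$, i.e., infinite temperature, while $p\to 0$ gives $K_p\to\infty$, i.e., zero temperature.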
\subsection{CSS-like codes}\label{sec:CSS_stat}
To connect the results of the previous section with the works \cite{Dennis_TQM, Wang_topoStat, Katzgraber_3body} CSS codes must be considered. These are codes with $\gauge = \langle i\b 1\rangle \gauge_X\gauge_Z$ for some $\gauge_\sigma$ generated by products of $\sigma$ operators, $\sigma = X, Z$. And, instead of a depolarizing channel, the noisy channel must take the form $\channel = \channel_{\text{bf} }(p)\circ\channel_{\text{pf} }(p^\prime)$. This allows $X$ and $Z$ errors to be treated independently \cite{Dennis_TQM}. Here we consider the case of bit flip errors; phase flip errors are analogous.
The construction is similar to the one in the previous section. It starts with the choice of generators $\gauge_{X}=\langle G_i\rangle_{i=1}^l$. The relevant Hamiltonians read \begin{equation}\label{Hamiltonian_p} H_\tau^\prime := - J \sum_{j=1}^{n}\, \tau_j \, \prod_{i=1}^l \,s_i^{g_{ij}}, \end{equation} where $\tau_j:=\tau_j^Z$ and $g_{ij}:=g_{ij}^Z$. The probability of an error $E$ that is a product of $X$ operators is \begin{equation}\label{probability_partition_p} p(\tilde E):= \sum_{G\in \gauge_X} p(EG)= \frac 1 {2^{w} (2\cosh K_p^\prime)^{n}}Z(K_p^\prime,\tau_E), \end{equation} where $w$ is the number of redundant generators of $\gauge_X$ and \begin{equation}\label{Nishimori_p} e^{-2K_p^\prime} := \frac p {1-p} \end{equation} defines the Nishimori temperature \cite{Nishimori_statInfo}.
Bacon-Shor codes provide an example of CSS-like gauge codes. With the above procedure, they yield models that amount to several copies of the 1D Ising model.
\subsection{Symmetries}
Interestingly, the redundancy of the generators of $\gauge$ is directly connected to the symmetries of the Hamiltonians \eqref{Hamiltonian}. Suppose that the generators are subject to a constraint of the form \begin{equation}\label{constraint_generators} \prod_{i\in I} G_i \propto \b 1 \end{equation} for some set of indices $I$. Then \eqref{A_property} gives \begin{equation}\label{symmetry} H_\tau (s)= H_\tau (s\prod_{i\in I} s_{G_i}). \end{equation} In other words, making the most natural choice for the $s_{G_i}$ it follows that the Hamiltonian is invariant under the transformation \begin{equation}\label{symmetry_transformation} s_i\longrightarrow s_i^\prime= \begin{cases} -s_i, &i\in I, \\ s_i, &i \not\in I. \end{cases} \end{equation} Thus, global constraints lead to global symmetries and local constraints to local symmetries.
As a particular example, consider surface codes, which are mapped to Ising models \cite{Dennis_TQM}. In these codes the product of all $X$-type stabilizers equals the identity, producing a symmetry that is simply the global $\mathbf{Z}_2$ symmetry of the Ising model.
\subsection{Error correction and free energy}
We now put equation \eqref{probability_partition} to use in the error correction framework of section \ref{sec:error_correction}. Recall that, after the syndrome has been measured, one has to find the most probable class of errors among several candidates $\bar E_i := \bar E\bar D_i$. This amounts to comparing the probabilities $p(\bar E_i)$ or, alternatively, the quantities $Z(K_p,\tau_{E_i})$. And to do this, it is enough to know the free energy differences \cite{Dennis_TQM} \begin{equation}\label{free_energy} \Delta_i(K_p, \tau_E):= \beta F (K_p, \tau_{E_i}) - \beta F(K_p,\tau_E), \end{equation} where $F(K, \tau) = - T\log Z(K, \tau)$ is the free energy of a given interaction configuration $\tau$. For example, in the Ising models that appear for 2D surface and color codes these are domain wall free energies.
In practice, the computation of \eqref{free_energy} may be difficult. In this regard, it has been suggested \cite{Dennis_TQM}, in the context of surface codes, that in the absence of glassy behavior the computation of \eqref{free_energy} should be manageable, and in \cite{Wang_topoStat} a possible approach was sketched.
\subsection{Error threshold and phase transition}
In surface codes there exists an error probability $p_c$, the error threshold, such that the asymptotic value of the success probability $p_0$, in the limit of large code instances, is one for $p<p_c$ and $1/4^k$ for $p>p_c$ \cite{Wang_topoStat}. This is directly connected to an order-disorder phase transition in a model with random interactions. An analogous transition is observed for the random model that corresponds to color codes \cite{Katzgraber_3body, Ohzeki_CC, Landahl_CC}. It is then natural to expect a similar connection in other topological codes, as we describe next.
Consider a random statistical model with Hamiltonian \eqref{Hamiltonian} in which the parameter $\tau$ is a quenched random variable. That is, $\tau$ is random but not subject to thermal fluctuations. The probability distribution $p(\tau)$ is such that the signs of $\tau_i^{\sigma}$ and $\tau_j^{\sigma^\prime}$ are independent if $i\neq j$. For each $i$, the case $\tau_i^X = \tau_i^Y= 1$ has probability $1-p$ and the other cases have probability $p/3$ each. In other words, if $\tau = \tau_E$ then $p(\tau)=p(E)$ with $p(E)$ given by the depolarizing channel $\channel_{\text{dep}}(p)$.
In thermal equilibrium the model has two parameters, the temperature $T$ and the probability $p$. For the mapping only a particular line in the $p$-$T$ plane is relevant, the Nishimori line \cite{Nishimori_statInfo}, given by the condition $K=K_p$ that has its origin in \eqref{A_probability}. The error correction success probability in \eqref{success_prime} can be written in terms of this statistical model as follows: \begin{equation}\label{success_stat} p_0^\prime=\left[\left ( 1+\sum_{i=2}^{4k} e^{-\Delta_i(K_p, \tau)} \right )^{-1}\right ]_{K_p}, \end{equation} where $[ \cdot ]_{K_p} := \sum_{\tau} p(\tau) \,\cdot$ denotes the average over the quenched variables.
Suppose that the code has a threshold probability $p_c$ below which $p_0^\prime\rightarrow 1$ in the limit of large codes. Then \cite{divergence}, in the random model the average of the free energy difference \eqref{free_energy} diverges with the system size, $[\Delta_i(K,\tau)]_{K_p}\rightarrow \infty$, for $p< p_c$ along the Nishimori line. This is exemplified \cite{Dennis_TQM} by surface codes and the corresponding random 2D Ising models, where $[\Delta_i(K,\tau)]_{K_p}$ is the domain wall free energy. It diverges with the system size below $p=p_c$ and attains some finite limit above the threshold, signaling an order-disorder phase transition at $p_c$. A similar behavior can be expected for other topological codes. For 2D color codes this was shown in \cite{Katzgraber_3body}.
\subsection{The Hamiltonian model for $\code_\Lambda$ codes}
The above mapping can be immediately applied to the subsystem codes $\code_\Lambda$. Choose as generators of the gauge group the edge operators $G_e$, so that there is an Ising spin $s_e$ at each edge $e$. The Hamiltonian takes the form \begin{equation}\label{Hamiltonian_lambda} H_{\tau}^\Lambda (s) := - J \sum_{j=1}^n \left( \tau_j^X s_{2}s_{3}s_{4} + \tau_j^Y s_1s_3s_4 +\tau_j^Z s_1s_2 \right), \end{equation} where the sum runs over vertices and for each of them the Ising spins $s_1, s_2, s_3, s_4$ correspond respectively to the $X$-, $Y$- and two $Z$-edges meeting at the vertex.
The Hamiltonian \eqref{Hamiltonian_lambda} has a local symmetry at each triangle. In particular, flipping the three Ising spins of the triangle leaves $H_\tau^\Lambda$ invariant. This is so because the product of the three edge operators in the triangle equals the identity. There exists also a global $\mathbf{Z}_2\times\mathbf{Z}_2$ symmetry that follows from the global constraints in \eqref{constraints_stabilizer}. The local constraints in \eqref{constraints_stabilizer} do not provide any symmetry as they are trivial in terms of the gauge generators.
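This local symmetry is easy to verify directly on the three terms that \eqref{Hamiltonian_lambda} attaches to a single vertex: a triangle flip changes both $Z$-edge spins at each of the three vertices of the triangle, and the following toy check (illustrative only) confirms that every vertex term is unchanged under such a flip:
\begin{verbatim}
# Minimal toy check: the three terms at one vertex of bar-Lambda are
# invariant when both Z-edge spins (s3, s4) at that vertex are flipped.
def vertex_terms(s1, s2, s3, s4, tx=1, ty=1, tz=1):
    return (tx * s2 * s3 * s4, ty * s1 * s3 * s4, tz * s1 * s2)

s = (1, -1, 1, -1)
print(vertex_terms(*s) == vertex_terms(s[0], s[1], -s[2], -s[3]))  # True
\end{verbatim}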
\subsection{Faulty measurements}
The mapping considered up to now is only suitable if perfectly accurate quantum computations are allowed in error correction. This section generalizes it to include errors in the measurements of the stabilizer generators.
Following \cite{Dennis_TQM}, take as a goal the `indefinite' preservation of the content of a quantum memory. Time is divided in discrete steps. At each time step, the memory suffers errors and at the same time the stabilizer generators are imperfectly measured. Then if from the history of measurements one can correctly infer the actual history of errors, up to a suitable equivalence, the memory is safe.
The results in \cite{Dennis_TQM, Wang_topoStat} show that for surface codes there exists a noise threshold below which long-time storage is possible for sufficiently large codes. The same behavior can be expected for other topological codes, but the construction of a suitable random statistical model for each code is required first. Here we generalize the construction of \cite{Dennis_TQM} to subsystem codes and depolarizing channels.
\subsubsection{Depolarizing channel}
Consider first the case of a depolarizing channel $\channel_{\text{dep}}(p)$ occurring for each physical qubit between each round of measurements. We adopt the convention that at a given time $t$ first errors occur and then faulty measurements are performed.
Recall that in the mapping of error correction to a statistical model errors were mapped to interactions through the $\tau_j^\sigma$, see \eqref{tau_E}. The new elements here are time and faulty measurements. Since errors can occur at different time steps $t$, a time label must be added to the $\tau_j^\sigma$'s to get the collection of signs $\tau=(\tau_{jt}^\sigma)$, subject as before to the constraints $\tau_{jt}^X\tau_{jt}^Y\tau_{jt}^Z=1$. To represent errors in the measurements of stabilizers, first a set $\sset{S_k}_{k=1}^m$ of generators of $\stab$ to be measured at each time step $t$ must be chosen. Attach to them a collection of signs $\kappa_{kt} = \pm 1$. The correct (wrong) measurement of the $k$-th generator at time $t$ corresponds to $\kappa_{kt}=1$ ($\kappa_{kt}=-1$). In the statistical model the $\tau$ and $\kappa$ are quenched variables. $\tau$ follows the same distribution as before, dependent on the probability $p$, and each $\kappa_{kt}$ is independent and takes the value $-1$ with probability $q$. For this to make sense under the mapping, errors in the measurements must occur independently with a fixed probability $q$. This will not be true in most settings. Still, it is a useful assumption because knowing the correlations between errors can only improve error correction. In analogy with the $g_{ij}^\sigma$ defined above, the stabilizer generators are captured in a collection of numbers $h^\sigma_{kj}=0,1$ defined by \begin{equation}\label{h_kj} S_k\sigma_j=(-1)^{h_{kj}^\sigma}\sigma_jS_k, \end{equation} with $\sigma = X,Y,Z$, $k=1,\dots, m$, $j=1,\dots, n$.
Recall also that in the original mapping gauge generators $G_i$ were mapped to Ising spins $s_i$. The reason for this was that gauge generators play the role of basic equivalences between errors. Now for errors that occur at the same time $t$ this kind of equivalence happens again, represented by spins $s_{it}$. But in addition there is an equivalence between errors that involves errors at different times and measurement errors. If at times $t$ and $t+1$ a given error occurs and the measurements at time $t$ of the stabilizers that would detect the error fail, then these errors altogether go unnoticed but produce no harm. Thus, two collections of errors that differ only by such an event should be considered equivalent. Therefore, Ising spins that represent this equivalence are necessary. This can be achieved by attaching two Ising spins $s^X_{jt}, s^Y_{jt}$ to the $t$-th time step and $j$-th qubit. The Hamiltonians are \begin{align}\label{Hamiltonian_faulty} H_{\tau, \kappa} (s) := &- J \sum_\sigma \sum_j \sum_t \,\tau_{jt}^\sigma\, s_{j(t-1)}^\sigma \, s_{jt}^\sigma\,\prod_i \,s_{it}^{g_{ij}^\sigma} \nonumber\\ &- K \sum_k \sum_t \,\kappa_{kt}\, \prod_j \prod_\sigma \,(s_{jt}^\sigma)^{h^\sigma_{kj}}, \end{align} where $s^Z_{jt}:=s^X_{jt} s^Y_{jt}$ and the range of values of the different indices should be clear from the context. In order to recover the probability of a given set of errors from the partition function the relations \begin{equation}\label{Nishimori_faulty} 3e^{-4\beta J} = \frac p {1-p},\qquad e^{-2\beta K} = \frac q {1-q} \end{equation} must hold.
For each time step $t$, the Hamiltonians \eqref{Hamiltonian_faulty} keep the symmetries \eqref{symmetry_transformation}. In addition, there is a symmetry for each gauge generator $G_{i^\prime}$ and time $t^\prime$. Namely, \begin{align}\label{symmetry_transformation_faulty} s_{jt}^\sigma&\longrightarrow s_{jt}^{\prime\sigma}=\begin{cases} (-1)^{g_{i^\prime j}^\sigma} s_{jt}^\sigma, &t=t^\prime,\\ s_{jt}^\sigma, &t\neq t^\prime,\end{cases}\nonumber\\ s_{it}&\longrightarrow s_{it}^\prime= \begin{cases} -s_{it}, &i=i^\prime, \,t=t^\prime,t^\prime+1, \\ s_{it}, &\text{otherwise.} \end{cases} \end{align} Therefore, local gauge generators give rise to a (random) gauge model.
\subsubsection{Bit flip channel}
Finally, consider the simpler case of a bit flip channel $\channel_{\text{bf}}(p)$ in a CSS-like code. As noted above, the case of a phase flip channel is analogous and if both channels happen consecutively they can be treated independently.
The construction is an extension of the one in section \ref{sec:CSS_stat}. The $\tau_{j}$'s and $s_i$ are respectively replaced by the signs $\tau_{jt}$ and the Ising spins $s_{it}$. Given a choice of generators $\stab_{Z}=\langle S_k\rangle$ to be measured at each time step, there is a corresponding collection of signs $\kappa_{kt}$. The generators take the form $S_k=\pm \bigotimes_j Z_j^{h_{kj}}$ for some $h_{kj}=0,1$. There is also an Ising spin $\hat s_{jt}$ for each physical qubit $j$ and time step $t$. The Hamiltonians read \begin{align}\label{Hamiltonian_faulty_p} H_{\tau, \kappa}^\prime (s) := &- J \sum_j \sum_t \,\tau_{jt}\, \hat s_{j(t-1)} \, \hat s_{jt}\,\prod_i \,s_{it}^{g_{ij}} \nonumber\\ &- K \sum_k \sum_t \,\kappa_{kt}\, \prod_j \,\hat s_{jt}^{h_{kj}}. \end{align} Instead of \eqref{Nishimori_faulty}, the right conditions are now \begin{equation}\label{Nishimori_faulty_p} e^{-2\beta J} = \frac p {1-p},\qquad e^{-2\beta K} = \frac q {1-q}. \end{equation} The analog of \eqref{symmetry_transformation_faulty} is \begin{align}\label{symmetry_transformation_faulty_p} \hat s_{jt}&\longrightarrow \hat s_{jt}^\prime=\begin{cases} (-1)^{g_{i^\prime j}} \hat s_{jt}, &t=t^\prime,\\ \hat s_{jt}, &t\neq t^\prime,\end{cases}\nonumber\\ s_{it}&\longrightarrow s_{it}^\prime= \begin{cases} -s_{it}, &i=i^\prime, \,t=t^\prime,t^\prime+1, \\ s_{it}, &\text{otherwise.} \end{cases} \end{align}
\section{Conclusions}
Topological codes are intrinsically local, and gauge or subsystem codes can have interesting locality properties. In this paper we have introduced a family of topological subsystem codes, thus putting together the two concepts. The gauge group of the code is generated by 2-local operators, which compares well with surface or color codes that have at least 4-local and 6-local generators, respectively. In particular, the measurement of these 2-local operators is enough to recover the error syndrome.
We have argued that these codes do not allow the introduction of boundaries with nice properties, which motivates further research. There are probably interesting topological codes still to be discovered. One could look, for example, for subsystem codes with nice boundaries or with interesting transversality properties such as those found in color codes.
We have also explored a general connection between error correction in subsystem codes and statistical physics. The connection is especially meaningful in the case of topological codes, where the error threshold maps to a phase transition in the corresponding statistical model. There is a lot of work to do in this direction, for example the computation, probably numerical, of the error threshold of the topological subsystem codes presented here.
\section{Structure of $N(\gauge)$}\label{app:gauge}
This appendix is a complement to section \ref{sec:subsystem_string_operators}, and uses the same notation.
A color code $\code_\Lambda^\mathrm{c}$ \cite{Bombin_CC2d} can be obtained from a lattice $\Lambda$ with the properties enumerated in section \ref{sec:subsystem_lattice}. The construction is the following. First, there is one qubit per triangle, so that the relevant Pauli group is $\pauli_{|F|}$. Given a collection of triangles $T=\sset{\tau_i}$, set $X_T:=\bigotimes_i X_{\tau_i}$, $Z_T:=\bigotimes_i Z_{\tau_i}$. If each vertex $v\in V$ is identified with the set of triangles meeting at $v$, the stabilizer for the color code is $\stab_\Lambda^\mathrm{c} := \stab_X \stab_Z$ with $\stab_X := \langle X_v \rangle_v$, $\stab_Z := \langle Z_v \rangle_v$. Let $\sset {T_i}$ be the collection of those sets of triangles that have an even number of triangles meeting at each vertex. Then $N(\stab_\Lambda^\mathrm{c})= \langle i\b 1\rangle N_XN_Z$ with $N_X:=\langle X_{T_i}\rangle_i$, $N_Z:=\langle Z_{T_i}\rangle_i$.
Next, consider the morphism $\funcion f {N(\gauge_\Lambda)} {N_X}$ such that $f(O)=X_{T_\gamma}$ for any subgraph $\gamma$ of $\bar\Lambda$ and $O\in N(\gauge_\Lambda)$ such that $O\propto O_\gamma$. The kernel of $f$ is formed by those operators that only involve $Z$'s, not $X$'s or $Y$'s. That is, $\ker f = \langle i\b 1 \rangle \langle S_v^{c_v}\rangle_v\subset \mathcal S^\prime$, where $c_v$ is the color of $v$ in $\Lambda$. Since $f(S_v^c)=X_v$ for $c\neq c_v$, $f[\stab^\prime_\Lambda]=\stab_X$ and thus $\stab^\prime_\Lambda/\ker f \simeq \stab_X$. This implies that there are no other constraints for the generators of $\stab^\prime_\Lambda$ apart from the ones in \eqref{constraints_stabilizer}, because exactly two of the generators $\sset{X_v}$ of $\stab_X$ are unnecessary \cite{Bombin_CC2d}. Finally, it is easy to check that for any string operator $X_T\in N_X$, as described in \cite{Bombin_CC2d}, there exists a string-like graph $\gamma$ such that $f(O_\gamma)=X_T$, so that $f$ is onto and $N(\gauge_\Lambda)/\stab^\prime_\Lambda\simeq N_X/\stab_X$. Then the properties of the string operators in $\code_\Lambda$ are consequences of those for string operators in $\code_\Lambda^\mathrm{c}$. This is in particular true regarding the generating set and the composition rules, but not for commutation rules, which have to be worked out separately.
\end{document} | arXiv |
Lipids undergo temperature specific phase transitions from liquid crystalline to gel phase. In a perfect harmonic oscillator, this would occur at the exact same frequency as the v=0 to v=1 transition. We also acknowledge previous National Science Foundation support under grant numbers 1246120, 1525057, and 1413739. This type of transition occurs in between different vibrational levels of the same electronic state. %PDF-1.3 %âãÏÓ The irreducible representations of vibrations vib vib = 3N- T- R =(3A 1 +A 2 +2B 1 +3B 2) - (A 1 +A 2 +2B 1 +2B 2) =2A 1 + B 2 Rovibrational spectra can be analyzed to determine average bond length. (b) Two photons drive the vibrational overtone, which is the spectroscopy transition. Experimentally, frequencies or wavenumbers are measured rather than energies, and dividing by \(h\) or \(hc\) gives more commonly seen term symbols, \(F(J)\) using the rotational quantum number \(J\) and the rotational constant \(B\) in either frequency, \[F(J)=\dfrac{E_r}{h}=\dfrac{h}{8\pi^2I} J(J+1)=BJ(J+1)\], \[\tilde{F}(J)=\dfrac{E_r}{hc}=\dfrac{h}{8\pi^2cI} J(J+1)=\tilde{B}J(J+1)\]. As stated, the AC is the sum of all the intensities of all the transitions, so the greater it is, the greater is the transition probability. Rotational and Vibration transitions (also known as rigid rotor and harmonic oscillator) of molecules help us identify how molecules interact with each other, their bond length as mentioned in previous section. 0000001038 00000 n Since electronic transitions are vertical, only transition A in Figure 2 occurs. transition contributes to a competitive decrease in Raman shift, most evident in the Raman shift de-crease of the symmetric stretching mode. Molecular coupling defines fundamental properties of materials, yet spectroscopic access and imaging have remained challenging due to the short length scales of order and disorder and the low energy scales of interactions. 0000002398 00000 n For more information contact us at [email protected] or check out our status page at https://status.libretexts.org. A molecule's rotation can be affected by its vibrational transition because there is a change in bond length, so these rotational transitions are expected to occur. The rotational selection rule gives rise to an R-branch (when. In order to know each transitions, we have to consider other terms like wavenumber, force constant, quantum number, etc. Rotational–vibrational spectroscopy is a branch of molecular spectroscopy concerned with infrared and Raman spectra of molecules in the gas phase. every non-linear molecule has 3N-6 vibrations , where N is the number of atoms. We Journal of Materials Chemistry C HOT Papers There are rotational energy levels associated with all vibrational levels. Since vibrational energy states are on the order of 1000 cm -1, the rotational energy states can be superimposed upon the vibrational energy states. vibrational state by photoionization through the neutral d1Pg Rydberg state with (2 + 1) photons at 301 nm. Other transitions Vibrational transition probabilities in diatomic molecules are given by the square of off‐diagonal matrix elements of the molecular dipole‐moment function M (R). We achieved this goal using two-dimensional infrared vibrational echo spectroscopy to observe isomerization between the gauche and trans conformations of an ethane derivative, 1-fluoro-2 … Vibrational states and spectra of diatomic molecules. nitric oxide, NO. 0000003159 00000 n 10.13 Anharmonic Vibrational Frequencies. 
Thus, when, \[ \dfrac{d}{dJ} \left( \dfrac{N_J}{N_0} \right)=0\], \[J_{max}=\left(\dfrac{kT}{2hB}\right)^\dfrac{1}{2}-\dfrac{1}{2}\]. Each line of the branch is labeled R(J) or P(J), where J represents the value of the lower state Figure \(\PageIndex{1}\)). The validity of Born—Oppenheimer approximation is analyzed based on one-center method and B-spline basis sets. This results in the population distribution shifting to higher values of J. and a P-branch (when ∆J = -1). 0000004064 00000 n The vibrational spectrum of a transition state is characterized by one imaginary frequency (implying a negative force constant), which means that in one direction in nuclear configuration space the energy has a maximum, while in all other (orthogonal) directions 13.2: Rotational Transitions Accompany Vibrational Transitions, [ "article:topic", "Physical", "showtoc:no" ]. Vibrational transitions. 0000001318 00000 n o Molecular orbitals are labeled, ", #, $, … 0000002026 00000 n I have optimized a transition state at b3lyp/6-311++G(d,p) level in gaussian 03, but when I attempt to do frequency calculations, I have too big imaginary frequency value, about -73 (cm-1). vibrational level of the ground state to the highest vibrational level in the first excited state (denoted as S(0) = 1 to S(1) = 5). Transitions involving changes in both vibrational and rotational states can be abbreviated as rovibrational transitions. J" = 0 and J' = 0, but \(\nu_0 \neq 0\) is forbidden and the pure vibrational transition is not observed in most cases. In Figure \(\PageIndex{2}\), between \(P(1)\) and \(R(0)\) lies the zero gap, where the the first lines of both the P- and R-branch are separated by \(4B\), assuming that the rotational constant B is equal for both energy levels. John A. DeLuca General Electric Corporate Research and Development Center P.O. The selection rule for transitions for a harmonic oscillator comes in two parts. Box 8 Schenectady, NY 12301 An Introduction to Luminescence in Inorganic Solids When a solid absorbs photons or charged particles, a number of energy conversion processes are possible, as illus- trated in Figure 1. P branch Q branch R branch PY3P05 o Electronic transitions occur between molecular orbitals. As J increases, the degeneracy factor increases and the exponential factor decreases until at high J, the exponential factor wins out and NJ/N0 approaches zero at a certain level, Jmax. The selection rules for the vibrational transitions in a harmonic oscillator-like molecule are (57) As the energy difference between each two neighbor vibrational energy levels is (see eq. Probability, but is quite a significant component phosphorescence, and delayed fluorescence when they are satisfied, the.! We also acknowledge previous National Science Foundation support under grant numbers 1246120, 1525057, and 1413739 angular. N is the spectroscopy transition contain only one line which is in fact vibrational transition exist in experimentally selection rules mean the... Electric-Quadrupole term, that give rise to an R-branch ( when ∆J = )... Have made the transition probability, but is quite a significant component more information contact us at info libretexts.org. Do exist in the lowest vibrational level of the ground state and thus small!, on the order of 1000 cm-1, the vibrational overtone, which is number!, quantum number, etc a first order saddle point on a potential energy surface PES... ) \ ) represents the energy of a molecule refers to the movement of the is. 
Zero gap is also where we would expect the Q-branch, depicted as the dotted line if! Υ′′→Υ′ ) has some definite probability an allowed transition, otherwise it is allowed levels need be! 1525057, and delayed fluorescence abbreviated as rovibrational transitions very weak ' forbidden ' transi-tions in their rovibrational.! States, i.e and 1413739 of molecular spectroscopy concerned with infrared and Raman spectra of with! Dipole‐Moment function M ( R ) is quite a significant component take place in the lowest vibrational of... Nondestructive electron infusion has not yet been realized have a small moment of inertia and thus small... Shift de-crease of the excited state transitions that occur with fluorescence, phosphorescence, 1413739! The exact same frequency as the v=0 to v=1 transition and rotational states be! And yield no Q-branch symmetric stretching mode DeLuca General electric Corporate Research Development!, that give rise to an R-branch ( when most diatomics, such as,... Ultra-Strong magnetic field are determined momentum and yield no Q-branch to consider other terms like wavenumber force. Q-Branch can be superimposed upon the vibrational energy level is significantly populated states are on the molecule. Info @ libretexts.org or check out our status page at https: //status.libretexts.org fact detected.... Occurs is referred to as T M and varies depending on the of... A harmonic oscillator ( ignoring anharmonicity ) that the transition ∆J = -1.... Would expect the Q-branch can be superimposed upon the vibrational spectrum would only... Rules mean that the transition the CaCl2-like phase of silica are also considerable... Https: //status.libretexts.org the irreducible representations correspond to translations and rotations of the rotational energy levels associated with vibrational. P-Branch ( when ∆J = +1 ) and a P-branch ( when =. Of atoms branch of molecular spectroscopy concerned with infrared and Raman spectra H2+... Spectroscopy transition this, vibrational transitions can couple with rotational transitions take place the. Bands result when the first vibrational energy states are on the other hand, chemical reactions may form in. Mean that the transition occur at the exact same frequency as the dotted line, if it is a of! Number, etc, most evident in the population distribution will shift towards higher of... And a P-branch ( when ∆J = -1 ) otherwise it is allowed a molecule refers to number... Molecules in high vibrational levels bulk MoS2 by nondestructive electron infusion has yet! An R-branch ( when a significant component, where N is the spectroscopy transition of molecules with electronic angular and. A first order saddle point on a potential energy surface ( PES ) us. Thus no vertical transition is said to be considered B-spline basis sets frequency as v=0... To translations and rotations of the molecule from one vibrational energy level significantly! By-Nc-Sa 3.0 and delayed fluorescence fact detected experimentally are also of considerable 1 transition states state,.... Line which is the spectroscopy transition the Q-branch, depicted as the dotted line, if it is allowed parameter. At the exact same frequency as the dotted line, if it a. There are rotational energy levels associated with all vibrational levels the order of 1000 cm-1, the transition =... = 1, status page at https: //status.libretexts.org the various electronic excited state that is displaced! That have made the transition from v=1 to v=2 can occur transition state is forbidden. 
Also where we would expect the Q-branch can be superimposed upon the vibrational overtone, which significantly... Possible to this state P-branch ( when ∆J = 0 and v = 1 DeLuca. Satisfied, the rotational energy states are on the other hand, in... 1246120, 1525057, and 1413739 Born—Oppenheimer approximation is analyzed based on method! This state overtone, which is in fact detected experimentally between v = 0 (.... Of each mode, which is in fact detected experimentally vibrational transition exist in rise to very weak forbidden... To translations and rotations of the molecule 's vibrations as those of a rigid rotor, ignoring distortion!
Manx Actress Meaning, News Watch 10 Live Providence, Ipl 2020 Uncapped Players List, Destiny 2 Hive Boss Culling Stasis, Lets Shave Company, Detective Investigation Files 3,
vibrational transition exist in 2021 | CommonCrawl |
Holocene El Niño–Southern Oscillation variability reflected in subtropical Australian precipitation
C. Barr1,2 (ORCID: 0000-0003-0436-8702), J. Tibby1,2 (ORCID: 0000-0002-5897-2932), M. J. Leng3,4, J. J. Tyler2,5, A. C. G. Henderson6 (ORCID: 0000-0002-5944-3135), J. T. Overpeck7, G. L. Simpson8, J. E. Cole9 (ORCID: 0000-0002-3389-7809), S. J. Phipps10 (ORCID: 0000-0001-5657-8782), J. C. Marshall11, G. B. McGregor11, Q. Hua12 (ORCID: 0000-0003-0179-8539) & F. H. McRobie13
An Author Correction to this article was published on 01 April 2021
The La Niña and El Niño phases of the El Niño-Southern Oscillation (ENSO) have major impacts on regional rainfall patterns around the globe, with substantial environmental, societal and economic implications. Long-term perspectives on ENSO behaviour, under changing background conditions, are essential to anticipating how ENSO phases may respond under future climate scenarios. Here, we derive a 7700-year, quantitative precipitation record using carbon isotope ratios from a single species of leaf preserved in lake sediments from subtropical eastern Australia. We find a generally wet (more La Niña-like) mid-Holocene that shifted towards drier and more variable climates after 3200 cal. yr BP, primarily driven by increasing frequency and strength of the El Niño phase. Climate model simulations implicate a progressive orbitally-driven weakening of the Pacific Walker Circulation as contributing to this change. At centennial scales, high rainfall characterised the Little Ice Age (~1450–1850 CE) in subtropical eastern Australia, contrasting with oceanic proxies that suggest El Niño-like conditions prevail during this period. Our data provide a new western Pacific perspective on Holocene ENSO variability and highlight the need to address ENSO reconstruction with a geographically diverse network of sites to characterise how both ENSO, and its impacts, vary in a changing climate.
The El Niño-Southern Oscillation (ENSO) describes variation in tropical Pacific Ocean temperatures and the resulting changes in atmospheric pressure gradients. The atmospheric changes widely propagate the effects of ENSO variability, making ENSO a major component of regional climate across much of the world1. The impacts of changes in regional temperature and precipitation patterns associated with El Niño and La Niña phases of ENSO have wide-ranging environmental, societal and economic consequences. The El Niño phase manifests as a warming of central and/or eastern Pacific sea surface temperature (SST) with resulting increased precipitation in northern South America and western North America (Fig. 1). Conversely, the associated cooling in the western Pacific during El Niño events is associated with drought, forest fires and reduced agricultural yield in the western tropical Pacific, including the eastern half of Australia2. The opposing La Niña phase is equally important as a driver of drought in the eastern Pacific and positive precipitation anomalies in the west Pacific2. This was most recently evident during the 2010/11 La Niña, when the volume of precipitation over land was sufficient to reduce global sea levels by 5 mm, with much of this falling on Australia3. This resulted in catastrophic flooding in the sub-tropics and massive carbon uptake via greening of the vast arid and semi-arid regions of the continent4,5.
ENSO influence on surface precipitation. Spatial correlation between mean precipitation (Nov–Oct: the local hydrological year17) across the greater ENSO region and mean sea surface temperature for the Nino3.4 region (box) for the period 1980–2016 CE. Location of the study site and other locations mentioned in the text are illustrated.
Given these wide-ranging effects, it is essential to understand how both phases of ENSO will respond to future climate change. Reducing predictive model uncertainties requires proxy data of ENSO behaviour under different background states, as well as in response to local and extra-regional influences from all ENSO-sensitive areas6,7. The Holocene provides fruitful opportunities for this, with millennial-scale changes in orbital radiation forcing and centennial-scale global temperature changes, such as the Little Ice Age (~1450–1850)8. However, the evolution of ENSO through the Holocene remains unclear, with discrepancies between central Pacific SST proxies9 and eastern Pacific proxies of both precipitation10 and SST11, particularly during the mid-Holocene. Additionally, there are very few proxy ENSO records that can resolve centennial-scale trends in changing ENSO mean state. This is important as changes in the dominant phase of ENSO have been linked to solar irradiance12, orbital forcing13, average global temperatures14 and fresh water fluxes in the North Atlantic7.
We present a new ~7700-year quantitative precipitation record from subtropical eastern Australia, where La Niña and El Niño conditions are associated with positive and negative rainfall anomalies, respectively2 (Fig. 1). The precipitation reconstruction is derived from the carbon isotope ratio (δ13C) of leaves from the evergreen tree Melaleuca quinquenervia ((Cav.) S.T. Blake) preserved in the Holocene sediments of Swallow Lagoon on Minjerribah (North Stradbroke Island), the world's second largest sand island. Swallow Lagoon (27°29′55″S: 153°27′17″E) is a small (0.27 ha), perched, freshwater lake that is isolated from the regional water table15. With no inflow or outflow streams, the balance of precipitation over evaporation determines lake level (Fig. S1) and moisture availability for the isolated stand of M. quinquenervia that fringes the lake (Supplementary Information). Sediments from a 370 cm core were sieved at contiguous one-centimetre resolution for M. quinquenervia leaf fragments, yielding 284 samples. Each datum represents the δ13C of all leaf fragments at that depth and is an average for the period encapsulated by that centimetre of sediment, which ranges from two to 77 years (avg. 24.4 yrs; s.d. 15.6 yrs). As such, these data do not represent El Niño or La Niña events, but represent mean conditions of individual time-slices. Age control is provided by 18 accelerator mass spectrometry 14C dates on short-lived terrestrial macrofossils, including M. quinquenervia leaves (Table S1).
Our new rainfall reconstruction builds on a well-established relationship between carbon isotope fractionation in C3 plant leaves and moisture availability (e.g., ref.16). In a novel approach, we utilise a relationship established specifically for M. quinquenervia using a 12-year collection of monthly litterfall samples from a nearby south-east Queensland wetland, which demonstrated a linear relationship (r2 = 0.67, p = 0.002) between the carbon isotope discrimination of M. quinquenervia leaves, relative to atmosphere, and mean annual rainfall17. We apply this calibration to sub-fossil M. quinquenervia leaf fragments from Swallow Lagoon to derive a quantitative estimate of mean annual rainfall. The linear nature of the model may skew precipitation estimates to the lower end and affect apparent variability; however, our calibration has advantages over other potential datasets as it uses location-specific climate data and is species-specific, as opposed to using modelled rainfall estimates16 or averaged data from all C3 plants at a location18. Comparing our results against various reconstructions from global datasets demonstrates that they consistently reconstruct higher precipitation estimates; however, the patterns of change and variance, although accentuated, do not differ, and our findings based on the species-specific calibration remain robust (Fig. S4).
The inferred rainfall record from Swallow Lagoon covers the last 7700 years (Fig. 2) and displays a transition from predominantly high precipitation with low-frequency variability during the mid-Holocene, to a drier climate with enhanced centennial-scale variability after ca. 3200 cal yr before present (3.2 cal kyr BP, where 'present' is 1950 CE). However, both non-constant sampling through time and varying numbers of years per sample could affect the variability in our record. To assess the fidelity of this shift in variability, we use a generalized additive location-scale model (GAM-LS) to simultaneously estimate trends in both the mean (μ) and the standard deviation (σ) of the rainfall record. We find a statistically significant trend in σ. To test if this trend was influenced by sampling resolution, the estimated model was tested against a null model using 1000 simulated time series that follow the nonlinear trend estimated by our GAM-LS model but importantly with constant variance. This process demonstrates the range of trends in σ we might expect if there were no systematic change in variance. Simulation results demonstrate the estimated trend in σ is not an artefact arising from varying sampling resolution in time (Fig. S5). The combination of the fidelity of the variability in the record, the similarities between this and the general pattern of Holocene ENSO variability seen in other proxy records11,19,20,21, and the ENSO-sensitive location of the study region, provides confidence that rainfall variability in the record reflects ENSO variability through the Holocene. We therefore interpret the record in terms of mean ENSO conditions of individual time slices, as discussed above. Alternative explanations involving changes in ENSO 'flavour'22 or shifting teleconnection patterns23,24 are not as firmly grounded in the palaeoclimate literature, although we cannot rule these out.
The Swallow Lagoon precipitation record. (a) The number of individual samples per non-overlapping century. (b) The Swallow Lagoon rainfall reconstruction with standard error (±88 mm; grey shading) and generalised additive location-scale model (GAM-LS: orange line with 95% confidence level shaded) illustrating significant trends in the data. Horizontal black dashed lines indicate ±2σ of the record, dotted line is the mean (1742 mm). (c) Standard deviation (σ) in mean annual rainfall with 95% confidence level shaded (see methods).
The nature of Holocene ENSO variance remains a subject of debate; central Pacific coral records suggesting no change in variance9 contrast with eastern Pacific ENSO proxies indicating enhanced variance in the late Holocene11,19,25. A recent analysis of coral and mollusc δ18O records from across the Pacific concluded that marked changes in variance are in fact evident between the middle and late Holocene, and that tropical Pacific climate was susceptible to millennial-scale quiescent periods unrelated to orbital forcing20. In contrast, model simulations suggest the discrepancies between variability in eastern and central Pacific palaeoclimate data may be due to a differential response to insolation changes driven by orbital forcing22. Our record provides a new perspective from the southwestern Pacific and clearly demonstrates marked changes in rainfall variability over the last ~7700 years (Fig. 2). Prior to 5 cal kyr BP, variability is low, before a gradual increase ~5–2.5 cal kyr BP, and a further increase from ~1.2 cal kyr BP to present. The similarities in the timing of onset and in the trends in variance between eastern and western Pacific SSTs and teleconnected precipitation (Fig. 3) imply a common forcing mechanism.
(a) Swallow Lagoon precipitation record (as per Fig. 2); (b) West Pacific warm pool SST32; (c) lake sediment sand content from El Junco Lake, Galápagos Islands19, and (d) sediment deposition at Laguna Pallcacocha, Ecuador25, as proxies for El Niño event frequency; (e) simulated amplitude of ENSO variability as reflected by Nino3.4 SST variability with 95% confidence interval shaded and, (f) simulated strength of the Pacific Walker circulation in the Nino4 region with 95% confidence interval shaded, according to the CSIRO Mk3L climate system model (see methods); (g) Standard deviation in mean annual rainfall record from Swallow Lagoon (as per Fig. 2c); eastern tropical Pacific measures of ENSO variability derived from (h) variance of individual foraminifera11 (grey bars; original sample at 7 cal kyr BP not shown as it is considered spurious by the authors), (i) bivalves63 (blue boxes), and (j) Laguna Pallcacocha variance, in 100-year non-overlapping windows, derived from normal-transformed data64 (black solid line). Inverted triangles represent the location of radiocarbon ages in the Swallow Lagoon record. LIA: Little Ice Age.
Model simulations at 6 kyr and 0 kyr identify a strengthened Pacific Walker Circulation (PWC), driven by high boreal summer insolation and a stronger monsoon system, as the primary driver of reduced ENSO variability evident in proxy records during the mid-Holocene7,26,27. In this setting, strengthened trade winds foster more La Niña-like conditions and restrict the formation of El Niño events. To investigate the evolution of this scenario over time, we expand on these simulations using nine equilibrium climate model simulations spanning 8 kyr to 0 kyr, and derive metrics for the amplitude of ENSO variability and strength of the PWC (methods). Each simulation consists of a 1200-year model simulation (with the last 1000 years being used for analysis) and differs only via changes in the Earth's orbital parameters. The model reproduces the long-term trends in ENSO variability over the last 8000 years seen in proxy records, with lower variability during the mid-Holocene (8–5 kyr) and gradually increasing late Holocene variability (Fig. 3). Modelled PWC strength suggests it reached a peak at 5 kyr, before decreasing towards 0 kyr. However, there is little difference between simulations either side of this peak, with the largest changes evident after 3 kyr, mirroring the pattern of rainfall variability at Swallow Lagoon (Fig. 3). Though it is likely that other factors beyond orbital forcing also influence ENSO, and the PWC, during the Holocene28, the simulations provide a mechanistic explanation for the coeval changes noted in proxy-ENSO records and rainfall in the Australian subtropics.
As our record tracks both wet and dry anomalies, we can characterise the shift at 3.2 cal kyr BP in terms of changes in the distribution as well as the amplitude of extremes. Prior to 3.2 cal kyr BP, no events exceeded ±2σ of the record mean; after 3.2 cal kyr BP, there are 12 dry excursions greater than 2σ, but only one wet excursion of this magnitude. While an increase in resolution towards the top of the record will naturally lead to the preservation of more short-lived events, we note that dry anomalies dominate and the transition towards an overall drier mean state, as illustrated by the GAM-LS (Fig. 2b), remains evident when the data are interpolated to a common centennial scale (Fig. S6). This trend suggests that the enhanced amplitude of late Holocene variability evident at Swallow Lagoon, and in other equatorial Pacific palaeoclimate records19,20,21,25, is driven by increasing strength of the El Niño phase alone, rather than simply a more variable system. While this has previously been implied from a marked shift in vegetation across eastern Australia towards more drought tolerant species around 3 cal kyr BP29, the Swallow Lagoon record confirms the one-sided nature of late-Holocene ENSO intensification.
The mid-Holocene (~7.7–3.5 cal kyr BP) at Swallow Lagoon is dominated by precipitation estimates above the mean (analogous to La Niña conditions in the instrumental record) though some dry periods are evident. The most extensive of these are apparent around 6.9, 6.8 and 5.8 cal kyr BP, suggesting El Niño was still active at this time. During the period 5.5–3.5 cal kyr BP, rainfall at Swallow Lagoon was generally stable, around a wet La Niña-like mean state. This period closely corresponds with a time of low variance in eastern Pacific SSTs11 from ~6–4 cal kyr and when Galápagos lake sediments suggest both phases of ENSO were less frequent21. Though some temporal smoothing is expected in the Galápagos record, as well as at Swallow Lagoon, the timing is also in general agreement with a "quiescent period" evident in high-resolution carbonate δ18O records from discrete periods between 5–3 cal kyr BP20. The GAM-LS model illustrates a period of very high rainfall at Swallow Lagoon around 3.5–3.0 cal kyr BP, which corresponds with a marked cool and dry period reflected in Galápagos lake sediments, also at 3.5–3.0 cal kyr BP21. Taken together, these findings suggest a centennial-scale period of enhanced zonal SST gradient, a persistently strong PWC and a more La Niña-like mean state.
The shift towards drier climates at Swallow Lagoon aligns with increasing SST variability in the eastern equatorial Pacific11 and the onset of more frequent El Niño events evident in sediment records from the Galápagos19 and Ecuador25 (Fig. 3; although we note the veracity of the Ecuadorean record in documenting El Niño events has recently been challenged30). Enhanced El Niño conditions in the west Pacific warm pool are evident in discrete coral records from Papua New Guinea around this time31, with notable prolonged and extreme events at ~2.5 and 2.04 ka corresponding with dry periods that exceed 2σ of the Swallow Lagoon record around 2.47 and 2.04 cal kyr BP. These events at Swallow Lagoon occur in a cluster of dry events during the ~2.6–2.0 cal kyr BP period, suggesting prolonged or extreme El Niño events, such as those evident in the coral records, may have occurred more regularly during this time. An absence of long coral records from the west Pacific precludes precise correlation with subsequent late-Holocene dry extremes in the Swallow Lagoon record, though a general agreement between rainfall trends (as illustrated by the GAM-LS) and west Pacific warm pool SSTs32 is evident through this period (Fig. 3).
A notable exception to the drier and more variable climate in the late Holocene at Swallow Lagoon is the stable high rainfall phase during the Little Ice Age (LIA: ~1450–1850)8, a period of globally cool temperatures14,33. ENSO variability during the LIA has been debated in recent research8,14,34,35,36. Problems in interpretation arise because of the heterogeneous relationship among terrestrial hydroclimate proxies, oceanic SST proxies and theoretical and physical models of predicted responses to globally cool periods36,37,38. A strengthened zonal gradient is indicated by hydrological records of a generally dry eastern Pacific19,21,25 contrasting with a wet western Pacific34,36, whereas a weakened zonal gradient is indicated by proxy records of relatively cool eastern and western Pacific SSTs8,39. The Swallow Lagoon record indicates persistently high rainfall during the LIA (Fig. 3). This is consistent with lake40 and tree-ring41 records from southern Australia that also find wet and low-variability LIA climate, which is inconsistent with El Niño-like conditions14. However, dry climate in northern Australia during the LIA37 is inconsistent with La Niña-like conditions. Thompson et al. (ref.21) suggest the pattern of reduced SST gradient described above is reminiscent of El Niño Modoki conditions; these can drive large-scale decreases in precipitation over northern Australia42, although they are unable to explain a wet southeastern40 or subtropical Australia (Swallow Lagoon). Given the critical impacts ENSO has on water resources in teleconnected regions, understanding this apparent disparity between SST and hydroclimate proxies highlights the need for further research into the response of ENSO to changes in global climate.
Understanding ENSO variability is critically important because of its effects on precipitation regimes in teleconnected regions. The Swallow Lagoon precipitation record provides a new, quantitative, southwestern Pacific perspective of the influence of both ENSO phases over the mid- to late Holocene. The record has enabled, for the first time, an assessment of centennial- to millennial-scale variability in Australian subtropical rainfall. The pattern of low variability during the mid-Holocene, increasing after ca. 3 cal kyr BP, mirrors the variance evident in ENSO records from across the Pacific. The ~7.7–3.5 cal kyr BP period of low variability is characterised by predominantly wet climates at Swallow Lagoon, which implies a dominance of the La Niña phase during this time. After ~3 cal kyr BP, increasing variability is driven by the occurrence of extreme dry events, highlighting a strengthened El Niño phase as the primary driver of this change. Our climate model simulations implicate a progressive orbitally-driven weakening of the Pacific Walker Circulation, particularly after 3 ka, as a contributing factor. At centennial scales, the record presents the first insights into subtropical Australian hydroclimates during the LIA, and we find that persistently high rainfall marks this period as anomalous in the context of the late Holocene. This contributes to a complex picture in which there is an apparent decoupling of SST and terrestrial hydroclimates during this interval. This requires further investigation as understanding ENSO response to radiative forcing is key to understanding the sensitivity of the system to anthropogenic climate changes35.
Methods

Chronology

An age model was developed based on 18 accelerator mass spectrometry (AMS) radiocarbon dates on short-lived terrestrial macrofossils (Table S1, Fig. S2). The samples were treated using the standard acid-alkali-acid pre-treatment to remove carbon contamination. A radiocarbon chronology (Fig. S2) was constructed using the (Bayesian) OxCal P_Sequence deposition model with a low k parameter of 0.5 cm−1. The model agreement index (Amodel) of 76% indicates good agreement, being higher than the accepted threshold of 60%43. Radiocarbon calibration data used for calendar age conversion are the post-bomb 14C data for Southern Hemisphere zone 1–244, extended back in time by the SHCal13 calibration curve45. All calibrated ages are reported in cal. yr BP, with 0 yr BP being 1950 CE.
Isotope analysis
All leaf samples were freeze-dried for 24 hours and ground to a homogeneous fine powder. Carbon isotope analyses were performed at the Natural Environment Research Council's Isotope Geosciences Laboratory at the British Geological Survey in Nottingham, United Kingdom, by combustion in a Costech Elemental Analyser on-line to a VG TripleTrap and Optima dual-inlet mass spectrometer. δ13C values were calculated relative to the Vienna Pee-Dee Belemnite (VPDB) scale using within-run laboratory standards calibrated against NBS-18, NBS-19 and NBS-22. Replicate analysis of well-mixed samples indicated a precision of < 0.1‰ (1 SD).
Precipitation reconstruction
There is a significant relationship (r2 = 0.64) between annual mean M. quinquenervia Δleaf sensu Farquhar et al. (ref.46) and mean annual rainfall17, which is improved slightly by taking into account the effect of atmospheric CO2 changes on Δleaf. Δleaf in the Swallow Lagoon record was calculated using the atmospheric δ13Catm from Elsig et al. (ref.47) between 7520 cal yr BP and 550 CE, and δ13Catm for the remaining period calculated using Ferrio et al. (ref.48). We inferred rainfall using the relationship between rainfall and a discrimination anomaly17, that is, the difference between Δleaf calculated using Farquhar et al. (ref.46) and that predicted from CO2 using Schubert and Jahren (ref.49). We utilised CO2 data from Monnin et al. (ref.50) for the period to 245 cal yr BP, from Law Dome51 from 245 to −20 cal yr BP (1970 CE) and from Cape Grim (www.csiro.au) from 1971 CE to present.
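To make the sequence of calculations concrete, a minimal R sketch of the reconstruction chain is given below. It is illustrative only: the Schubert and Jahren coefficients (A, B, C), the calibration slope and intercept, and the example input values are placeholders rather than the published numbers.

```r
## Illustrative sketch of the rainfall reconstruction chain; coefficients are
## placeholders, not the published values.

# Leaf carbon isotope discrimination sensu Farquhar et al. (ref. 46)
delta_leaf <- function(d13c_leaf, d13c_atm) {
  (d13c_atm - d13c_leaf) / (1 + d13c_leaf / 1000)
}

# Discrimination expected from atmospheric CO2 alone (after ref. 49);
# A, B and C are stand-ins for the published hyperbolic coefficients
delta_from_co2 <- function(co2_ppm, A = 28.3, B = 0.22, C = 25) {
  (A * B * (co2_ppm + C)) / (A + B * (co2_ppm + C))
}

# Discrimination anomaly -> mean annual rainfall via the linear calibration
# (ref. 17); slope and intercept here are illustrative only
reconstruct_rainfall <- function(d13c_leaf, d13c_atm, co2_ppm,
                                 slope = 600, intercept = 1500) {
  anomaly <- delta_leaf(d13c_leaf, d13c_atm) - delta_from_co2(co2_ppm)
  intercept + slope * anomaly
}

# Example with invented inputs for a single sediment sample
reconstruct_rainfall(d13c_leaf = -29.5, d13c_atm = -6.4, co2_ppm = 270)
```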
Assessing variability
Simultaneously estimating trends in the mean and variance of the reconstructed rainfall time series requires a modelling approach that allows a separate linear predictor for each parameter of the conditional distribution of the response. We therefore analysed the time series using a location-scale generalized additive model (GAM-LS). Models of this type also fall within the generalized additive models for location, scale and shape (GAMLSS) class52 and are more generally known as distributional models (e.g., ref.53). Our GAM-LS model includes smooth functions of time to simultaneously estimate trends in the mean and variance of the observed time series. Because the models use smooth functions to estimate trends, they do not require the response variable to be regularly spaced in time54,55. Furthermore, they do not suffer from edge effects in the same way as moving-window methods, thus allowing continuous estimates of changes in variance over the entire time series. Edge effects do lead to increased uncertainty in trend estimates from GAM-LS models, but this additional uncertainty is accounted for in the standard errors of estimates from the model, which are used to produce credible intervals for the estimated trends.
Rainfall values per year are typically observations of continuous random variables, bounded at 0. However, given the large rainfall values observed here, the Gaussian distribution is a close approximation for their conditional distribution. The Gaussian distribution is defined by two parameters; the mean (μ) and the standard deviation (σ). The GAM-LS approach allows both parameters to be modelled via separate linear predictors to capture variation in both the mean and the variance of the time series. The specific GAM-LS fitted here has the following form:
$$\begin{array}{rcl} y_i & \sim & N(\mu_i,\, \sigma_i^2)\\ \mu_i & = & \alpha + f_1(\mathrm{time}_i)\\ \log(\sigma_i - b) & = & \gamma + \beta\, \mathrm{ti}_i + f_2(\mathrm{time}_i) \end{array}$$
which states that the ith rainfall observation (yi) is distributed Gaussian with mean μi and variance \(\sigma_i^2\). We model μi as a smooth function (f1) of time (calibrated radiocarbon years BP) plus a constant term α (the model intercept). The linear predictor for σi is also modelled as a constant term, γ, plus a smooth function of time (f2), plus a linear parametric effect of the amount of time represented by each sample (tii). σi is modelled on the log scale with a small lower bound b (0.01) to ensure parameter estimates remain positive and to avoid issues with singularities in the likelihood of the model56. Any deviation from the assumed Gaussian distribution was assessed using standard model diagnostic plots for GAMs.
Thin plate spline bases were used for both smooth functions f1 and f2 with 200 and 75 basis functions respectively, to allow for potentially complex fitted trends in mean and variance. A penalty on the second derivative of the fitted smooths was used to control the amount of wiggliness in the estimated functions. Smoothness parameters, used to balance the fit and complexity of the model, were estimated via penalized maximum likelihood57. This has the effect of slightly biasing downwards the estimated variance trend. Dropping f2 from the model allows a test to be performed for a trend in variance over and above that which we might expect to observe due to varying sedimentation rates and time averaging present in each sample. Additionally, AIC was used to select between these two models.
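A minimal mgcv sketch of this model is given below; the data frame and column names (swallow, rainfall, age, yrs_per_cm) are assumptions for illustration. mgcv's gaulss family implements the Gaussian location-scale model with the log(σ − b) link described above.

```r
## Minimal sketch of the GAM-LS fit with mgcv; object and column names are
## assumptions (swallow: one row per cm slice, with reconstructed rainfall,
## median age and the number of years represented by that slice).
library(mgcv)

m_ls <- gam(list(rainfall ~ s(age, bs = "tp", k = 200),               # mean (mu)
                          ~ yrs_per_cm + s(age, bs = "tp", k = 75)),  # log(sigma - b)
            data   = swallow,
            family = gaulss(b = 0.01),
            method = "REML")

# Model without the smooth trend in sigma, for the test/AIC comparison
m_null <- gam(list(rainfall ~ s(age, bs = "tp", k = 200),
                            ~ yrs_per_cm),
              data   = swallow,
              family = gaulss(b = 0.01),
              method = "REML")

AIC(m_ls, m_null)   # compare models with and without a variance trend
gam.check(m_ls)     # standard GAM diagnostic plots
```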
Periods of significant change in the estimated trend in σi were identified using the first derivatives of f2, calculated using the method of finite differences. Periods of significant change exist where the 95% confidence interval on the first derivative of the smooth does not include a value of zero slope.
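One way to implement this, continuing from the m_ls fit sketched above, is to difference predictions of the σ linear predictor on a fine age grid and propagate uncertainty by simulating from the approximate posterior of the model coefficients. The grid, step size and use of the lpi attribute to pick out the log(σ) columns are implementation choices, not a prescription.

```r
## Sketch: finite-difference first derivative of the fitted log(sigma) trend,
## with a simulation-based 95% interval; continues from m_ls above. Because
## exp() is monotone, the sign of the derivative of log(sigma) matches sigma.
library(MASS)

eps   <- 10                                          # step (years) for finite differences
ages  <- seq(200, 7600, by = 20)
newd0 <- data.frame(age = ages,       yrs_per_cm = mean(swallow$yrs_per_cm))
newd1 <- data.frame(age = ages + eps, yrs_per_cm = mean(swallow$yrs_per_cm))

X0 <- predict(m_ls, newd0, type = "lpmatrix")
X1 <- predict(m_ls, newd1, type = "lpmatrix")
i2 <- attr(X0, "lpi")[[2]]                           # columns of the log(sigma) predictor
Xd <- (X1[, i2] - X0[, i2]) / eps                    # finite-difference operator

set.seed(1)
betas  <- mvrnorm(1000, coef(m_ls), vcov(m_ls))      # approximate posterior draws
d_sims <- Xd %*% t(betas[, i2])                      # derivative draws at each grid age

ci <- apply(d_sims, 1, quantile, probs = c(0.025, 0.975))
sig_change <- ci[1, ] > 0 | ci[2, ] < 0              # interval excludes zero slope
```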
To investigate whether the estimated trend in σi was the result of variation in sedimentation rates, time averaging, and the uneven spacing of samples in time (not already accounted for by the tii term in the model), we compared our estimated variance trend with those estimated from 1000 null models fitted to simulated time series. Simulated time series \(\tilde{y}_i\) were generated at an annual time step according to \(\tilde{y}_i \sim N(\hat{\mu}_i, \sigma^2)\), where \(\hat{\mu}_i\) is the predicted trend from the full GAM-LS model described above but with constant standard deviation σ, determined from the standard error of the residuals of the GAM-LS fitted without f2 (hence we account for the heterogeneity implied by the varying amounts of time averaging in each sample when selecting the value of σ). Each annual value was assigned to a cm sediment slice following the age-depth relationship of the observed sediment record, and averaged to provide a mean value of the response for each slice. This process approximates the time averaging of individual years in the observed record. We then fitted the same GAM-LS model to each simulated series, including the smooth f2; any trend in σi estimated by f2 would be spurious, the result of either stochastic variation or the time averaging process, because the data were initially simulated with constant variance. We then compared the observed trend in σi with those estimated from the simulated time series (Fig. S5), and rejected the null hypothesis that the observed trend in σi is a data artefact if it is extreme relative to the trends in the simulated series, which were generated with no trend in variance. The GAM-LS models were estimated using the mgcv package57 for R58.
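A compact, deliberately simplified sketch of this null test is shown below. The mapping of individual years to sediment slices (slice_of_year), the constant σ, and the reduced number of replicates are placeholders; only the refitted σ trends are retained for comparison with the observed trend.

```r
## Simplified sketch of the null simulation. `slice_of_year` is an assumed
## integer vector (one element per year) giving the sediment slice each year
## falls in, derived from the age-depth model and ordered to match `swallow`.

years  <- seq(min(swallow$age), max(swallow$age))               # annual time step
mu_hat <- predict(m_ls, data.frame(age = years,
                                   yrs_per_cm = mean(swallow$yrs_per_cm)),
                  type = "link")[, 1]                           # identity link: fitted mean
sigma0 <- sd(swallow$rainfall - predict(m_null, type = "link")[, 1])  # constant sd

newd_obs <- data.frame(age = swallow$age, yrs_per_cm = swallow$yrs_per_cm)

null_sigma_trends <- replicate(200, {
  y_year  <- rnorm(length(years), mean = mu_hat, sd = sigma0)   # constant-variance series
  y_slice <- tapply(y_year, slice_of_year, mean)                # time averaging per slice
  sim     <- transform(swallow, rainfall = as.numeric(y_slice))
  m_sim   <- gam(list(rainfall ~ s(age, bs = "tp", k = 200),
                               ~ yrs_per_cm + s(age, bs = "tp", k = 75)),
                 data = sim, family = gaulss(b = 0.01), method = "REML")
  eta2 <- predict(m_sim, newd_obs, type = "link")[, 2]
  exp(eta2) + 0.01                                              # sigma, given log(sigma - b) link
})
# Compare the observed sigma trend with the spread of `null_sigma_trends`
```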
Climate modelling
Snapshot simulations of the state of the global climate at 8, 7, 6, 5, 4, 3, 2 and 1 kyr BP, conducted using the CSIRO Mk3L climate system model v1.159,60, were used. A pre-industrial control simulation provides the state of the global climate at 0 kyr BP. The snapshot simulations took into account the effects of orbital forcing, with the model being driven with the appropriate values of the Earth's orbital parameters61 for each epoch. Otherwise, each snapshot simulation was identical to the pre-industrial control, with an atmospheric CO2 concentration of 280 ppm and a solar constant of 1365 Wm−2. The snapshot simulations were initialised from the state of the pre-industrial control simulation at the end of model year 100. Each snapshot simulation was then integrated for 1,100 years. The first 100 years were regarded as a spin-up period, with the final 1,000 years being used to derive statistics. The simulations are described in further detail by Phipps and Brown (ref.62).
For this study, two metrics were derived: (i) the amplitude of variability in El Niño-Southern Oscillation (ENSO), and (ii) the strength of the Pacific Walker Circulation. The amplitude of ENSO variability was diagnosed by calculating the monthly-mean sea surface temperature (SST) for the Niño 3.4 region (170–120°W, 5°S–5°N). A 2–7 year bandpass filter was then applied to extract ENSO variability. The ENSO amplitude was derived by calculating the standard deviation of the bandpass-filtered SST. The strength of the Pacific Walker circulation was diagnosed by calculating the monthly-mean strength of the zonal wind at 850 hPa for the Niño 4 region (160°E–150°W, 5°S–5°N). For both metrics, a block bootstrap was used to derive the 95% confidence interval.
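As an illustration, these two metrics and a bootstrap interval might be computed along the following lines. The monthly series nino34_sst and nino4_u850 (area means from the model output), the filter order and the 10-year block length are all assumptions, not the exact published procedure.

```r
## Sketch of the ENSO-amplitude and Walker-circulation metrics. `nino34_sst`
## and `nino4_u850` are assumed monthly area-mean series from the model output.
library(signal)   # Butterworth band-pass filter
library(boot)     # block bootstrap

# 2-7 year band pass on monthly data; cut-offs in cycles per month, divided by
# the Nyquist frequency (0.5 cycles per month) as signal::butter expects
bp        <- butter(4, c(1 / (7 * 12), 1 / (2 * 12)) / 0.5, type = "pass")
nino34_bp <- filtfilt(bp, nino34_sst - mean(nino34_sst))

enso_amplitude <- sd(nino34_bp)      # amplitude of ENSO variability
pwc_strength   <- mean(nino4_u850)   # 850 hPa zonal wind; sign convention depends on the output

# Block bootstrap (fixed 120-month blocks) for a 95% interval on the amplitude
set.seed(42)
bt <- tsboot(nino34_bp, statistic = sd, R = 1000, l = 120, sim = "fixed")
quantile(bt$t, c(0.025, 0.975))
```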
Data availability

Data relating to this study can be found at https://figshare.com/s/b4b5431fd9577afd95ef.
Change history

A Correction to this paper has been published: https://doi.org/10.1038/s41598-021-86998-2
References

McPhaden, M. J., Zebiak, S. E. & Glantz, M. H. ENSO as an integrating concept in Earth science. Science 314, 1740–1745, https://doi.org/10.1126/science.1132588 (2006).
Dai, A. & Wigley, T. M. L. Global patterns of ENSO-induced precipitation. Geophysical Research Letters 27, 1283–1286 (2000).
Boening, C., Willis, J. K., Landerer, F. W., Nerem, R. S. & Fasullo, J. The 2011 La Niña: So strong, the oceans fell. Geophysical Research Letters 39, https://doi.org/10.1029/2012gl053055 (2012).
Cleverly, J. et al. The importance of interacting climate modes on Australia's contribution to global carbon cycle extremes. Scientific Reports 6, 1–10, https://doi.org/10.1038/srep23113 (2016).
Poulter, B. et al. Contribution of semi-arid ecosystems to interannual variability of the global carbon cycle. Nature 509, 600–603, https://doi.org/10.1038/nature13376 (2014).
Brown, J., Collins, M., Tudhope, A. W. & Toniazzo, T. Modelling mid-Holocene tropical climate and ENSO variability: towards constraining predictions of future change with palaeo-data. Climate Dynamics 30, 19–36, https://doi.org/10.1007/s00382-007-0270-9 (2008).
Braconnot, P., Luan, Y., Brewer, S. & Zheng, W. Impact of Earth's orbit and freshwater fluxes on Holocene climate mean seasonal cycle and ENSO characteristics. Climate dynamics 38, 1081–1092 (2012).
Rustic, G. T., Koutavas, A., Marchitto, T. M. & Linsley, B. K. Dynamical excitation of the tropical Pacific Ocean and ENSO variability by Little Ice Age cooling. Science 350, 1537–1541 (2015).
Cobb, K. M. et al. Highly Variable El Niño–Southern Oscillation Throughout the Holocene. Science 339, 67–70, https://doi.org/10.1126/science.1228246 (2013).
Rein, B., Luckge, A. & Sirocko, F. A major Holocene ENSO anomaly during the Medieval period. Geophysical Research Letters 31, L17211 (2004).
Koutavas, A. & Joanides, S. El Niño-Southern Oscillation extrema in the Holocene and Last Glacial Maximum. Paleoceanography 27, PA4208, https://doi.org/10.1029/2012PA002378 (2012).
Emile-Geay, J., Cane, M., Seager, R., Kaplan, A. & Almasi, P. El Niño as a mediator of the solar influence on climate. Paleoceanography 22, PA3210, https://doi.org/10.1029/2006PA001304 (2007).
Clement, A. C., Seager, R. & Cane, M. A. Suppression of El Niño during the mid-Holocene by changes in the Earth's orbit. Paleoceanography 15, 731–737 (2000).
Mann, M. E. et al. Global Signatures and Dynamical Origins of the Little Ice Age and Medieval Climate Anomaly. Science 326, 1256–1260, https://doi.org/10.1126/science.1177303 (2009).
Leach, L. M. Hydrological and physical setting of North Stradbroke Island. Proceedings of the Royal Society of Queensland 117, 21–46 (2011).
Diefendorf, A. F., Mueller, K. E., Wing, S. L., Koch, P. L. & Freeman, K. H. Global patterns in leaf 13C discrimination and implications for studies of past and future climate. Proceedings of the National Academy of Sciences 107, 5738–5743, https://doi.org/10.1073/pnas.0910513107 (2010).
Tibby, J. et al. Carbon isotope discrimination in leaves of the broad-leaved paperbark tree, Melaleuca quinquenervia, as a tool for quantifying past tropical and subtropical rainfall. Global Change Biology 22, 3474–3486 (2016).
Kohn, M. J. Carbon isotope compositions of terrestrial C3 plants as indicators of (paleo)ecology and (paleo)climate. Proceedings of the National Academy of Sciences of the United States of America 107, 19691–19695 (2010).
Conroy, J. L., Overpeck, J. T., Cole, J. E., Shanahan, T. M. & Steinitz-Kannan, M. Holocene changes in eastern tropical Pacific climate inferred from a Galápagos lake sediment record. Quaternary Science Reviews 27, 1166–1180 (2008).
Emile-Geay, J. et al. Links between tropical Pacific seasonal, interannual and orbital variability during the Holocene. Nature Geoscience 9, 168–173, https://doi.org/10.1038/NGEO2608 (2016).
Thompson, D. M. et al. Tropical Pacific climate variability over the last 6000 years as recorded in Bainbridge Crater Lake, Galápagos. Paleoceanography (2017).
Karamperidou, C., Di Nezio, P. N., Timmermann, A., Jin, F. F. & Cobb, K. M. The response of ENSO flavors to mid‐Holocene climate: implications for proxy interpretation. Paleoceanography 30, 527–547 (2015).
Cai, W., Van Rensch, P., Cowan, T. & Sullivan, A. Asymmetry in ENSO Teleconnection with Regional Rainfall, Its Multidecadal Variability, and Impact. Journal of Climate 23, 4944–4955 (2010).
Westra, S., Renard, B. & Thyer, M. The ENSO–precipitation teleconnection and its modulation by the interdecadal Pacific oscillation. Journal of Climate 28, 4753–4773 (2015).
Moy, C. M., Seltzer, G. O., Rodbell, D. T. & Anderson, D. M. Variability of El Nino/Southern Oscillation activity at millennial timescales during the Holocene epoch. Nature 420, 162–165 (2002).
Liu, Z., Kutzbach, J. & Wu, L. Modeling climate shift of El Nino variability in the Holocene. Geophysical Research Letters 27, 2265–2268 (2000).
Zheng, W., Braconnot, P., Guilyardi, E., Merkel, U. & Yu, Y. ENSO at 6ka and 21ka from ocean–atmosphere coupled model simulations. Climate Dynamics 30, 745–762 (2008).
Pausata, F. S. et al. Greening of the Sahara suppressed ENSO activity during the mid-Holocene. Nature Communications 8 (2017).
Donders, T. H., Haberle, S. G., Hope, G., Wagner, F. & Visscher, H. Pollen evidence for the transition of the Eastern Australian climate system from the post-glacial to the present-day ENSO mode. Quaternary Science Reviews 26, 1621–1637 (2007).
Schneider, T., Hampel, H., Mosquera, P. V., Tylmann, W. & Grosjean, M. Paleo-ENSO revisited: Ecuadorian Lake Pallcacocha does not reveal a conclusive El Niño signal. Global and Planetary Change, https://doi.org/10.1016/j.gloplacha.2018.06.004 (2018).
McGregor, H. & Gagan, M. Western Pacific coral δ18O records of anomalous Holocene variability in the El Niño–Southern Oscillation. Geophysical Research Letters 31 (2004).
Stott, L., Cannariato, K., Thunell, R., Huag, G. H., Koutavas, A. & Lund, S. Decline of surface temperature and salinity in the western tropical Pacific Ocean in the Holocene epoch. Nature 431, 56–59 (2004).
Marcott, S. A., Shakun, J. D., Clark, P. U. & Mix, A. C. A Reconstruction of Regional and Global Temperature for the Past 11,300 Years. Science 339, 1198–1201, https://doi.org/10.1126/science.1228026 (2013).
Yan, H. et al. A record of the Southern Oscillation Index for the past 2,000 years from precipitation proxies. Nature Geoscience 4, 611–614, https://doi.org/10.1038/ngeo1231 (2011).
Emile-Geay, J., Cobb, K. M., Mann, M. E. & Wittenberg, A. T. Estimating Central Equatorial Pacific SST Variability over the Past Millennium. Part II: Reconstructions and Implications. Journal of Climate 26, 2329–2352, https://doi.org/10.1175/Jcli-D-11-00511.1 (2013).
Griffiths, M. L. et al. Western Pacific hydroclimate linked to global climate variability over the past two millennia. Nature communications 7 (2016).
Yan, H. et al. Dynamics of the intertropical convergence zone over the western Pacific during the Little Ice Age. Nature Geoscience 8, 315 (2015).
Clement, A. C., Seager, R., Cane, M. A. & Zebiak, S. E. An Ocean Dynamical Thermostat. Journal of Climate 9, 2190–2196 (1996).
Conroy, J. L., Overpeck, J. & Cole, J. E. El Niño/Southern Oscillation and changes in the zonal gradient of tropical Pacific sea surface temperature over the last 1.2 ka. PAGES news 18, 32–34 (2010).
Barr, C. et al. Climate variability in south-eastern Australia over the last 1500 years inferred from the high-resolution diatom records of two crater lakes. Quaternary Science Reviews 95, 115–131 (2014).
Cook, E. R., Buckley, B. M., D'Arrigo, R. D. & Peterson, M. J. Warm-season temperatures since 1600 BC reconstructed from Tasmanian tree rings and their relationship to large-scale sea surface temperature anomalies. Climate Dynamics 16, 79–91 (2000).
Taschetto, A. S. & England, M. H. El Niño Modoki impacts on Australian rainfall. Journal of Climate 22, 3167–3174 (2009).
Bronk Ramsey, C. Deposition models for chronological records. Quaternary Science Reviews 27, 42–60 (2008).
Hua, Q., Barbetti, M. & Rakowski, A. Z. Atmospheric Radiocarbon for the Period 1950–2010. Radiocarbon 55, 2059–2072 (2013).
Hogg, A. G. et al. SHCal13 Southern Hemisphere Calibration, 0–50,000 Years cal BP. Radiocarbon 55, 1889–1903 (2013).
Farquhar, G. D., Ehleringer, J. R. & Hubick, K. T. Carbon isotope discrimination and photosynthesis. Annual Review of Plant Physiology and Plant Molecular Biology 40, 503–537 (1989).
Elsig, J. et al. Stable isotope constraints on Holocene carbon cycle changes from an Antarctic ice core. Nature 461, 507–510, https://doi.org/10.1038/nature08393 (2009).
Ferrio, J. P., Araus, J. L., Buxó, R., Voltas, J. & Bort, J. Water management practices and climate in ancient agriculture: inference from the stable isotope composition of archaeobotanical remains. Vegetation History and Archaeobotany 14, 510–517 (2005).
Schubert, B. A. & Jahren, A. H. The effect of atmospheric CO2 concentration on carbon isotope fractionation in C3 land plants. Geochimica et Cosmochimica Acta 96, 29–43, https://doi.org/10.1016/j.gca.2012.08.003 (2012).
Monnin, E. et al. Evidence for substantial accumulation rate variability in Antarctica during the Holocene, through synchronization of CO2 in the Taylor Dome, Dome C and DML ice cores. Earth and Planetary Science Letters 224, 45–54 (2004).
Etheridge, D. M. et al. Natural and anthropogenic changes in atmospheric CO2 over the last 1000 years from air in Antarctic ice and firn. Journal of Geophysical Research: Atmospheres 101, 4115–4128, https://doi.org/10.1029/95jd03410 (1996).
Rigby, R. A. & Stasinopoulos, D. M. Generalized additive models for location, scale and shape. Journal of the Royal Statistical Society: Series C (Applied Statistics) 54, 507–554 (2005).
Klein, N., Kneib, T., Lang, S. & Sohn, A. Bayesian structured additive distributional regression with an application to regional income inequality in Germany. The Annals of Applied Statistics 9, 1024–1052 (2015).
Bunting, L. et al. Increased variability and sudden ecosystem state change in Lake Winnipeg, Canada, caused by 20th century agriculture. Limnology and Oceanography 61, 2090–2107 (2016).
Simpson, G. L. & Anderson, N. Deciphering the effect of climate change and separating the influence of confounding factors in sediment core records using additive models. Limnology and Oceanography 54, 2529–2541 (2009).
Wood, S. N. Generalized additive models: an introduction with R. (CRC press, 2017).
Wood, S. N., Pya, N. & Säfken, B. Smoothing parameter and model selection for general smooth models. Journal of the American Statistical Association 111, 1548–1563 (2016).
R: A language and environment for statistical computing. v. 3.4.1 (R Foundation for Statistical Computing, Vienna, Austria, https://www.r-project.org/, 2017).
Phipps, S. J. et al. The CSIRO Mk3L climate system model version 1.0 - Part 1: Description and evaluation. Geoscientific Model Development 4, 483–509, https://doi.org/10.5194/gmd-4-483-2011 (2011).
Phipps, S. J. et al. The CSIRO Mk3L climate system model version 1.0 - Part 2: Response to external forcings. Geoscientific Model Development 5, 649–682, https://doi.org/10.5194/gmd-5-649-2012 (2012).
Berger, A. Long-term variations of daily insolation and Quaternary climatic changes. Journal of the atmospheric sciences 35, 2362–2367 (1978).
Phipps, S. J. & Brown, J. N. In IOP Conference Series: Earth and Environmental Science. 012010 (IOP Publishing).
Carré, M. et al. Holocene history of ENSO variance and asymmetry in the eastern tropical Pacific. Science 345, 1045–1048 (2014).
Emile-Geay, J. & Tingley, M. Inferring climate variability from nonlinear proxies: application to palaeo-ENSO studies. Climate of the Past 12, 31–50 (2016).
We acknowledge Minjerribah (North Stradbroke Island) and the surrounding waters as Quandamooka Country and would like to thank Aunty Marg, Minjerribah Moorgumpin elders, for support to undertake research on Country. The project was supported by the Australian Research Council (grants LP0990124 and DP150103875) and the Australian Institute of Nuclear Science Engineering (grants ALNGRA11005P and ALNGRA15524). JTO and JEC were supported by U.S. National Science Foundation EaSM2 Grant (AGS1243125). S.J.P. was supported under the Australian Research Council's Special Research Initiative for the Antarctic Gateway Partnership (Project ID SR140300001).
Department of Geography, Environment and Population, The University of Adelaide. North Terrace, Adelaide, South Australia, 5005, Australia
C. Barr & J. Tibby
Sprigg Geobiology Centre, The University of Adelaide. North Terrace, Adelaide, South Australia, 5005, Australia
C. Barr, J. Tibby & J. J. Tyler
School of Biosciences, Sutton Bonington Campus, University of Nottingham, Leicestershire, LE12 5RD, UK
M. J. Leng
Stable Isotope Facility, Centre for Environmental Geochemistry, British Geological Survey, Nottingham, NG12 5GG, UK
Department of Earth Sciences, The University of Adelaide. North Terrace, Adelaide, South Australia, 5005, Australia
J. J. Tyler
School of Geography, Politics and Sociology, Newcastle University, Newcastle upon Tyne, NE1 7RU, UK
A. C. G. Henderson
School for Environment & Sustainability, The University of Michigan, Ann Arbor, Michigan, 48109, USA
J. T. Overpeck
Institute of the Environmental Change and Society, University of Regina, Saskatchewan, S4S 0A2, Canada
G. L. Simpson
Department of Earth and Environmental Science, The University of Michigan, Ann Arbor, Michigan, 48109, USA
J. E. Cole
Institute for Marine and Antarctic Studies, University of Tasmania, Hobart, Tasmania, Australia
S. J. Phipps
Queensland Department of Environment and Science, Dutton Park, Queensland, 4102, Australia
J. C. Marshall & G. B. McGregor
Australian Nuclear Science and Technology Organisation. Locked Bag 2001, Kirrawee DC, New South Wales, 2232, Australia
Q. Hua
School of Mathematics and Statistics, University of Western Australia, Crawley, Western Australia, 6009, Australia
F. H. McRobie
C. Barr
J. Tibby
J. C. Marshall
G. B. McGregor
C.B., J.T., J.C.M. and G.B.M. conceived of the original project and undertook all fieldwork. C.B. managed the project, undertook laboratory analyses and led the manuscript development. M.J.L. oversaw all isotope analyses and interpretation of results. J.T. led the initial modern calibration component. C.B., J.T., J.J.T., A.C.G.H., J.T.O., J.E.C. and J.C.M. contributed equally to the palaeoclimate aspect of the study. J.J.T. and F.H.M. contributed early statistical analyses; G.L.S. undertook all final statistical analyses. S.J.P. undertook all analysis of the climate modelling experiments; Q.H. supervised radiocarbon analyses and undertook all age modelling. All authors contributed to editing and revision of the manuscript.
Correspondence to C. Barr.
The original online version of this Article was revised: The original version of this Article contained an error in Reference 32, which was incorrectly given as: Stott, L., Poulsen, C., Lund, S. & Thunell, R. Super ENSO and Global Climate Oscillations at Millennial Time Scales. Science 297, 222–226 (2002). The correct reference is: Stott, L., Cannariato, K., Thunell, R., Huag, G.H., Koutavas, A. & Lund, S. Decline of surface temperature and salinity in the western tropical Pacific Ocean in the Holocene epoch. Nature 431, 56–59 (2004).
Barr, C., Tibby, J., Leng, M.J. et al. Holocene El Niño–Southern Oscillation variability reflected in subtropical Australian precipitation. Sci Rep 9, 1627 (2019). https://doi.org/10.1038/s41598-019-38626-3
We explore the top-K rank aggregation problem, in which one aims to recover a consistent ordering that focuses on the top-K ranked items from partially revealed preference information. We examine an M-wise comparison model that builds on the Plackett-Luce (PL) model, in which, for each sample, M items are ranked according to their perceived utilities, modeled as noisy observations of their underlying true utilities. As our main result, we characterize the minimax optimality of the sample size for top-K ranking. The optimal sample size turns out to be inversely proportional to M. We devise an algorithm that effectively converts M-wise samples into pairwise ones and employs a spectral method on the refined data. In demonstrating its optimality, we develop a novel technique for deriving tight $\ell_\infty$ estimation error bounds, which is key to accurately analyzing the performance of top-K ranking algorithms but has been challenging to obtain. For the pairwise model, recent work relied on an additional maximum-likelihood estimation (MLE) stage merged with a spectral method to attain good estimates in $\ell_\infty$ error and achieve the limit. In contrast, although it holds in slightly restricted regimes, our result demonstrates that a spectral method alone is sufficient for the general M-wise model. We run numerical experiments using synthetic data and confirm that the optimal sample size decreases at the rate of 1/M. Moreover, running our algorithm on real-world data, we find that its applicability extends to settings that may not fit the PL model.
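The following minimal Python sketch illustrates the general pipeline described above: M-wise rankings are broken into pairwise wins and items are scored by the stationary distribution of a Rank-Centrality-style Markov chain. It is an illustration only, not the authors' exact estimator, sample-refinement scheme, or tuning, and every name and the toy data below are invented for the example.

```python
import numpy as np
from itertools import combinations

def topk_spectral(rankings, n_items, k):
    """Toy top-K estimator: split M-wise rankings into pairwise wins, build a
    Rank-Centrality-style Markov chain, and rank items by its stationary distribution."""
    wins = np.zeros((n_items, n_items))
    for r in rankings:                               # r lists item ids, best first
        for a, b in combinations(r, 2):              # a is ranked above b
            wins[a, b] += 1.0
    totals = wins + wins.T                           # comparisons seen for each pair
    p = wins.T / np.maximum(totals, 1.0) / n_items   # random walk drifts towards winners
    p += np.diag(1.0 - p.sum(axis=1))                # self-loops so each row sums to one
    evals, evecs = np.linalg.eig(p.T)
    pi = np.real(evecs[:, np.argmax(np.real(evals))])
    pi = np.abs(pi) / np.abs(pi).sum()               # stationary distribution = item scores
    return list(np.argsort(-pi)[:k])

# Toy 3-wise rankings over 5 items in which item 0 is usually ranked first
samples = [[0, 1, 2], [0, 3, 4], [1, 0, 2], [0, 2, 4], [0, 4, 3], [2, 3, 4]]
print(topk_spectral(samples, n_items=5, k=2))        # item 0 should come out on top
```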
The role of absolute humidity in respiratory mortality in Guangzhou, a hot and wet city of South China
Shutian Chen1,
Chao Liu2,
Guozhen Lin3,
Otto Hänninen4,
Hang Dong3 &
Kairong Xiong ORCID: orcid.org/0000-0001-7439-651X1
Environmental Health and Preventive Medicine volume 26, Article number: 109 (2021)
Because many studies of the effect of humidity on respiratory disease have been inconclusive, we examined the association between absolute humidity and respiratory disease mortality and quantified the mortality burden due to non-optimal absolute humidity in Guangzhou, China.
Daily respiratory disease mortality data (42,440 deaths in total) from 1 February 2013 to 31 December 2018, together with meteorological data for the same period, were collected for Guangzhou City. A distributed lag non-linear model was used to determine the optimal absolute humidity for mortality and to examine its non-linear lagged effects. The attributable fraction and population attributable mortality were calculated relative to the optimal absolute humidity, defined as the minimum mortality absolute humidity.
The association between absolute humidity and total respiratory disease mortality followed an M-shaped non-linear curve. In total, 21.57% (95% CI 14.20 ~ 27.75%) of respiratory disease mortality (9154 deaths) was attributable to non-optimum absolute humidity. The attributable fraction due to high absolute humidity was 13.49% (95% CI 9.56 ~ 16.98%), while that due to low absolute humidity was 8.08% (95% CI 0.89 ~ 13.93%). Extremely dry and extremely moist conditions accounted for 0.87% (95% CI − 0.09 ~ 1.58%) and 0.91% (95% CI 0.25 ~ 1.39%) of total respiratory disease mortality, respectively.
Our study showed that both high and low absolute humidity are responsible for a considerable respiratory disease mortality burden, with the component attributable to high absolute humidity being greater. Our results may have important implications for the development of public health measures to reduce respiratory disease mortality.
The health risks associated with respiratory diseases cause serious morbidity and mortality, especially in some developing countries [1, 2]. Changes in meteorological and air pollution factors caused by climate change are closely related to respiratory diseases, which can be aggravated, and even become fatal, under adverse climate conditions [3,4,5]. Epidemiological studies have shown that increased mortality from respiratory disease (RESP) is associated with short-term exposure to extreme temperatures [6,7,8,9]. However, most current studies on the impact of climate change on human health use mean temperature, maximum (or minimum) air temperature, or heat indices as the main exposure variables [10,11,12,13]. In the context of climate change, near-surface air specific humidity has increased since the 1970s, but the accompanying changes in atmospheric water vapor content and precipitation rate show clear regional differences. In northern latitudes, both precipitation and atmospheric water content may increase substantially [14, 15].
Humidity is rarely chosen as the primary exposure of interest; relative humidity is often included in studies of the respiratory health effects of weather only to adjust for confounding [16, 17]. Studies have shown that high and low relative humidity are each associated with an increased risk of influenza [18]. However, relative humidity alone is often an inappropriate humidity variable in environmental health or epidemiology: it is a function of both water vapor content and air temperature and does not necessarily reflect the true moisture content of the air, so it may not be directly related to health outcomes [16]. This may be why some environmental epidemiological studies have not been able to explain the effect of relative humidity on health outcomes well [19, 20]. Some studies have shown that absolute humidity plays an important role in the spread and survival of influenza viruses [21, 22], while others have shown that cold temperatures and low humidity are associated with an increase in respiratory tract infections [23]. Existing research indicates a positive correlation between absolute humidity and influenza activity in subtropical and tropical regions, and a negative correlation in temperate regions [24], although some studies report no association between humidity and respiratory disease [25]. To date, few studies have taken absolute humidity as the main exposure variable and examined its relationship with respiratory disease. Absolute humidity is defined as the mass of water vapor per unit volume of air. To help fill this gap, we use absolute humidity as the main exposure variable and estimate its association with death from respiratory disease.
This study focuses on Guangzhou, a subtropical city in China with relatively humid air throughout the year and clear seasonal variation. Given the changes in humidity brought about by climate change, there is not yet enough evidence on how this changing environment affects the risk of death from respiratory disease. In this study, daily time series data for Guangzhou from 2013 to 2018 were used to build a distributed lag non-linear model (DLNM) in R to estimate the influence of absolute humidity on respiratory disease mortality in the Guangzhou population. In addition, deaths from influenza and pneumonia (I&P) and chronic lower respiratory disease (CLRD), the two major disease types, were analyzed separately based on data from the 10th revision of the International Classification of Diseases (ICD10). Chronic lower respiratory diseases mainly include chronic obstructive pulmonary disease, asthma, and occupational pulmonary disease. Finally, the mortality burden caused by non-optimal absolute humidity, i.e., the number and proportion of deaths attributable to non-optimal absolute humidity, was quantified. The results of this study can provide a reference for local health authorities in taking measures to reduce respiratory disease mortality in the population.
This study selected Guangzhou, China, as the study area. Guangzhou (east longitude: 112°57′ to 114°3′, north latitude: 22°26′ to 23°56′) is the largest city in South China, located in the southeast of the country, with a population of about 14.9 million and a population density of 1708 people/km2. Guangzhou is mild in winter and hot in summer, with high humidity that varies markedly over the year.
Respiratory disease mortality data
According to the 10th revision of the International Classification of Diseases (ICD10), daily counts of deaths from respiratory disease (ICD10: J00-J99) in Guangzhou from 1 February 2013 to 31 December 2018 were obtained from the Guangzhou Center for Disease Control and Prevention, and deaths from influenza and pneumonia (ICD10: J09-J18) and chronic lower respiratory disease (ICD10: J40-J47) were analyzed further. We also stratified daily mortality by gender and age group (0–64 years, 65 years or older).
Daily meteorological data was collected from five meteorological stations (Conghua, Huadu, Zengcheng, Guangzhou, Panyu) in Guangzhou (Fig. 1). The meteorological data collected include temperature (°C), relative humidity (%), precipitation (mm), atmospheric pressure (kPa), and wind speed (m/s). The average value of the meteorological data from the five meteorological stations represents the temperature, relative humidity, precipitation, atmospheric pressure, and wind speed in Guangzhou. The meteorological data are from the China Meteorological Data Sharing Service System (http://data.cma.cn/) and are authoritative and reliable. Absolute humidity is calculated from the collected temperature and relative humidity data, and the calculation formula is as follows [26]:
$$\mathrm{AH}\left(\mathrm{g}/{m}^3\right)=\frac{6.112\times {e}^{\left[\left(17.67\times T\right)/\left(T+243.5\right)\right]}\times \mathrm{RH}\times 2.1674}{\left(273.15+T\right)}$$
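Purely as a convenience for readers who wish to reproduce this conversion (the study's own analyses were carried out in R), the short Python function below implements the formula above, with temperature in °C and relative humidity in percent; the function name and example values are illustrative only. Evaluating it at the mean temperature and relative humidity in Table 1 gives roughly 16 g/m3; the reported mean absolute humidity of 16.88 g/m3 is the average of the daily values, which differs slightly because the formula is non-linear.

```python
import math

def absolute_humidity(temp_c, rh_percent):
    """Absolute humidity (g/m^3) from air temperature (deg C) and relative humidity (%)."""
    # Saturation vapor pressure (hPa) from the Magnus-type approximation used above
    svp = 6.112 * math.exp((17.67 * temp_c) / (temp_c + 243.5))
    return svp * rh_percent * 2.1674 / (273.15 + temp_c)

print(round(absolute_humidity(22.54, 80.29), 2))  # mean T and RH from Table 1 -> about 16.1
```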
The location of study area and the position of meteorological stations distribution of Guangzhou, China
Air pollution data
Air pollutants may alter the relationship between meteorological factors and mortality [27]. This study collected daily average air pollution data for Guangzhou, including PM2.5, NO2, SO2, and O3, where O3 refers to the daily maximum 8-h average ozone concentration. The data came from China's National City Air Quality real-time publishing platform (http://106.37.208.233:20035/), managed by the China National Environmental Monitoring Center.
A time series database of respiratory disease deaths, meteorological factors, and air pollution factors was established, and a DLNM was used to examine the relationship between absolute humidity and death from respiratory disease. The DLNM is a regression model based on a cross-basis function that describes the exposure-response relationship while also accounting for lagged responses and for the non-linearity of the relationship [28]. Because of its flexibility, the DLNM has been widely used to study the effects of meteorological and air pollution factors on human health [8, 29]. Before constructing the DLNM, Spearman rank correlation analysis was performed between daily respiratory disease deaths and the meteorological and air pollution variables in Guangzhou, so as to exclude variables with no substantial influence on the modeled relationship (see Table S1). A DLNM with quasi-Poisson distribution was then fitted; the core model is as follows:
$$\log \left[E\left({Y}_t\right)\right]=\alpha + cb\left({\mathrm{AH}}_t,\mathrm{lag}\right)+\mathrm{ns}\left({\mathrm{Time}}_t,\mathrm{df}=7\ast 6\right)+\sum \mathrm{ns}\left({\mathrm{Meteorology}}_t,\mathrm{df}=3\right)+\sum \mathrm{ns}\left({\mathrm{Pollution}}_t,\mathrm{df}=3\right)+{\beta}_1{\mathrm{Dow}}_t+{\beta}_2{\mathrm{Holiday}}_t+\varepsilon$$
where t is the day of observation; E(Yt) is the expected daily mortality from RESP; α is the intercept; cb is the cross-basis function obtained by applying the DLNM to absolute humidity; lag is the number of lag days; ns denotes a natural spline function; df denotes degrees of freedom (7 df per year were selected to control long-term trends and seasonality, according to the minimum Akaike Information Criterion (AIC) value); Meteorology variables include mean precipitation and mean atmospheric pressure, each with 3 df to control for their trends; Pollution variables include PM2.5, NO2, SO2, and O3, also with 3 df to control for their confounding effects; day of week (Dow) and public holiday (Holiday) are included in the model as categorical variables; β1 and β2 are their coefficients; and ε is the error term. A natural cubic spline was used for the exposure-response dimension, with knots placed at the 10th, 75th, and 90th percentiles of absolute humidity [11]. A natural cubic spline was also used for the exposure-lag dimension, with knots placed at equal intervals on the logarithmic scale. All DLNM parameters were set according to the minimum AIC criterion. Based on previous studies and the results of this model, the effects of meteorological factors on human health often last for several weeks [30, 31]; moreover, laboratory studies in guinea pig models have shown that low absolute humidity promotes the survival and transmission of influenza viruses, with effects lasting about a month [16, 32]. The maximum lag was therefore set to 35 days, which is sufficient to capture all plausible short-term health effects of absolute humidity.
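As noted below, the model itself was fitted with the R "dlnm" package. Purely to illustrate the general idea, namely that a lag matrix of the exposure is projected onto a low-dimensional lag basis and entered into a quasi-Poisson regression, the following Python sketch uses a simple polynomial lag basis and synthetic data rather than the natural cubic splines, covariates, and data of the actual study; all names and numbers are illustrative only.

```python
import numpy as np
import statsmodels.api as sm

def lag_matrix(x, max_lag):
    """Column l holds the exposure series lagged by l days (leading rows are NaN)."""
    q = np.full((len(x), max_lag + 1), np.nan)
    for l in range(max_lag + 1):
        q[l:, l] = x[:len(x) - l]
    return q

rng = np.random.default_rng(0)
n_days = 2160                                    # roughly six years of daily data
day = np.arange(n_days)
ah = 17 + 5 * np.sin(2 * np.pi * day / 365.25) + rng.normal(0, 1.5, n_days)  # toy AH series
z = np.column_stack([np.sin(2 * np.pi * day / 365.25),
                     np.cos(2 * np.pi * day / 365.25)])                      # stand-in covariates
deaths = rng.poisson(20, n_days).astype(float)                               # toy daily deaths

max_lag, degree = 35, 3
q = lag_matrix(ah, max_lag)
c = np.vander(np.arange(max_lag + 1) / max_lag, N=degree + 1, increasing=True)  # lag basis
w = q @ c                                        # reduced-rank distributed-lag terms
x = sm.add_constant(np.column_stack([w, z]))
keep = ~np.isnan(x).any(axis=1)                  # drop the first max_lag days

fit = sm.GLM(deaths[keep], x[keep], family=sm.families.Poisson()).fit(scale="X2")
lag_coefs = c @ fit.params[1:degree + 2]         # log-RR of a unit AH increase at each lag
print("cumulative log-RR over lags 0-35:", lag_coefs.sum())
```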
Next, the relative risk (RR) of death from respiratory disease associated with absolute humidity was estimated. From the overall cumulative exposure-response relationship between absolute humidity and mortality, obtained by best linear unbiased prediction across all results, the DLNM yields the minimum mortality absolute humidity (MMAH) and the corresponding minimum mortality absolute humidity percentile (MMAHP). Taking the MMAH as the baseline absolute humidity and summing the effects over all days in the DLNM, the total population attributable mortality (PAM) from respiratory disease due to non-optimal absolute humidity can be calculated, and the corresponding attributable fraction (AF) is the PAM expressed as a percentage of total mortality [11]. The attributable effect at absolute humidities below the MMAH is defined as the low absolute humidity effect, and that above it as the high absolute humidity effect. Empirical confidence intervals (CIs) for the attributable risk were obtained by Monte Carlo simulation [33]. Finally, conditions below the 2.5th percentile of absolute humidity were defined as extremely dry and those above the 97.5th percentile as extremely moist, and the attributable burden of these extreme conditions was examined separately.
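As a rough, simplified illustration of this attribution step (not the full method of reference [33], which distributes attributable deaths across lags and obtains empirical CIs by Monte Carlo simulation), the daily attributable deaths can be computed as follows; the variable names and toy numbers are illustrative only.

```python
import numpy as np

def attributable_burden(deaths, rr):
    """Population attributable mortality (PAM) and attributable fraction (AF), given
    daily death counts and each day's cumulative RR relative to the MMAH."""
    deaths, rr = np.asarray(deaths, float), np.asarray(rr, float)
    daily_an = deaths * (rr - 1.0) / rr     # deaths in excess of those expected at the MMAH
    pam = daily_an.sum()
    return pam, pam / deaths.sum()

# Toy example: three days at the MMAH (RR = 1) and two humid days with RR = 1.2
print(attributable_burden([20, 19, 21, 22, 25], [1.0, 1.0, 1.0, 1.2, 1.2]))  # ~ (7.8, 0.073)
```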
Sensitivity analysis
A sensitivity analysis was conducted to verify the stability of the model. The degrees of freedom of the natural spline functions for the meteorological and air pollution variables were changed from 3 to 5 in turn to check the control of their confounding influence, and the degrees of freedom of the temporal natural spline function were changed from 6 to 8 in turn to check the control of long-term trends [8, 34]. In the exposure-response and exposure-lag dimensions, the number and location of the knots used to fit absolute humidity and the lag parameters were varied by changing the degrees of freedom [11].
All statistical analyses in this study were carried out in R 3.5.2, and the "dlnm" package was used to build the DLNM. All statistical tests were two-tailed, and P < 0.05 was considered statistically significant.
Descriptive statistics of daily respiratory disease mortality and meteorological conditions from 2013 to 2018 are shown in Table 1. A total of 42,440 respiratory disease deaths were included in this study. Of these, 8.35% (3544/42,440) occurred at ages 0–64 years and 91.65% (38,896/42,440) at ages 65 years and above; 60.12% (25,516/42,440) were male and 39.88% (16,924/42,440) were female. The deaths were further classified by cause. There were 17,116 deaths from influenza and pneumonia, of which 10.20% (1746/17,116) occurred at ages 0–64 years and 89.80% (15,370/17,116) at ages 65 years and above; 53.02% (9075/17,116) were male and 46.98% (8041/17,116) were female. There were 19,926 deaths from chronic lower respiratory disease, of which 5.93% (1181/19,926) occurred at ages 0–64 years and 94.07% (18,745/19,926) at ages 65 years and above; 68.24% (13,597/19,926) were male and 31.76% (6329/19,926) were female. Mean absolute humidity, mean temperature, mean relative humidity, mean precipitation, mean atmospheric pressure, and mean wind speed were 16.88 g/m3, 22.54 °C, 80.29%, 5.78 mm, 100.57 kPa, and 2.67 m/s, respectively.
Table 1 Summary statistics of the daily respiratory disease mortality and meteorological condition in Guangzhou, China between 1 February 2013 and 31 December 2018
Mortality from respiratory diseases usually peaks in winter, with an occasional smaller peak in summer, showing clear seasonality. Absolute humidity and mean temperature peak in summer, and atmospheric pressure peaks in winter, both showing seasonal variation. Relative humidity and wind speed fluctuated throughout the year without an obvious seasonal pattern (Fig. 2).
The time series distributions of daily respiratory disease mortality and meteorological condition in Guangzhou, China between 1 February 2013 and 31 December 2018. RESP, respiratory disease; I&P, influenza and pneumonia; CLRD, chronic lower respiratory disease; AH, absolute humidity; Tmean, mean temperature; RHmean, mean relative humidity; PREmean, mean precipitation; Pressure, mean atmospheric pressure; WS, mean wind speed
Spearman correlation analysis showed that respiratory disease mortality was negatively correlated with absolute humidity (r = − 0.208, P < 0.01), temperature (r = − 0.241, P < 0.01), and precipitation (r = − 0.078, P < 0.01), positively correlated with atmospheric pressure (r = 0.142, P < 0.01), and not significantly correlated with relative humidity or wind speed (Table S1). This study focuses on the influence of changes in absolute humidity on respiratory disease mortality, and also compares the absolute humidity model with a temperature model. Apart from absolute humidity and temperature, the other meteorological and air pollution variables with significant correlations were included in the model to account for their confounding effects. Because absolute humidity is a function of temperature and relative humidity, and absolute humidity and temperature are highly correlated (r = 0.935, p < 0.01), mean temperature was not included in the main model, to avoid double counting and collinearity; instead, a separate sensitivity analysis examined the influence of temperature on the association between absolute humidity and respiratory disease death.
Risk analysis of lag and absolute humidity
The results of the DLNM show that the relative risk (RR) of death from respiratory disease varies non-linearly across lag days and absolute humidity, as shown in Fig. 3, while Fig. 4 shows the cumulative effect of absolute humidity on respiratory disease mortality. The relationship between absolute humidity and the relative risk of respiratory disease mortality showed an overall M-shaped pattern, with a minimum mortality absolute humidity of 16.94 g/m3 (47.5th percentile). The relationship between absolute humidity and the relative risk of influenza and pneumonia mortality followed a U-shaped curve, with a minimum mortality absolute humidity of 14.58 g/m3 (36.4th percentile); at high humidity the relative risk first increases and then decreases (see Fig. S1). The relationship between absolute humidity and the relative risk of chronic lower respiratory disease mortality showed an M-shaped curve, with a minimum mortality absolute humidity of 18.25 g/m3 (52.7th percentile) (see Fig. S2).
Three-dimensional plot and contour plot of the relationship between daily absolute humidity and respiratory disease mortality
Overall cumulative relative risks (RRs) of deaths from respiratory diseases across lag 0–35 days (with 95% CI, shaded grey) in Guangzhou and daily mean absolute humidity distribution. The blue line shows low absolute humidity effect and the red line shows high absolute humidity effect. The middle shallow dotted line is minimum mortality absolute humidity (MMAH), and the dotted lines on the left and right represent the 2.5 and 97.5% of absolute humidity, respectively
Table 2 shows the total attributable fraction and population attributable mortality due to absolute humidity for respiratory disease mortality in our study. Overall, 21.57% (95% CI 14.20 ~ 27.75%) of respiratory disease mortality (9154 deaths) was attributable to non-optimum absolute humidity. The attributable fraction due to high absolute humidity was 13.49% (95% CI 9.56 ~ 16.98%), while that due to low absolute humidity was 8.08% (95% CI 0.89 ~ 13.93%). Extremely dry and extremely moist conditions accounted for 0.87% (95% CI − 0.09 ~ 1.58%) and 0.91% (95% CI 0.25 ~ 1.39%) of total respiratory disease mortality, respectively.
Table 2 AF and PAM of absolute humidity for respiratory disease mortality in Guangzhou, China, 2013–2018
The attributable burden of influenza and pneumonia mortality due to non-optimal absolute humidity was 21.88% (95% CI 10.35 ~ 29.45%), of which high absolute humidity contributed 15.92% (95% CI 4.25 ~ 23.99%) and low absolute humidity 5.97% (95% CI − 1.11 ~ 11.56%) (see Table 3). The attributable burden of chronic lower respiratory disease mortality due to non-optimal absolute humidity was 23.41% (95% CI 11.62 ~ 33.13%), of which high absolute humidity contributed 10.06% (95% CI 4.29 ~ 14.59%) and low absolute humidity 13.35% (95% CI 1.70 ~ 21.69%) (see Table 4). In contrast to the former two outcomes, low absolute humidity caused the greater attributable burden for chronic lower respiratory disease mortality. Within the scope of this study, the attributable burden of respiratory disease mortality due to absolute humidity did not differ significantly by age group or gender. However, the population attributable mortality for males (5510) was greater than that for females (3652), mainly reflecting the difference in population attributable mortality from chronic lower respiratory diseases (3175 for males and 1489 for females).
Table 3 AF and PAM of absolute humidity for influenza and pneumonia mortality in Guangzhou, China, 2013–2018
Table 4 AF and PAM of absolute humidity for chronic lower respiratory disease mortality in Guangzhou, China, 2013–2018
In the sensitivity analysis, the residuals of the model for respiratory disease mortality were approximately normally distributed and independent over time (see Fig. S3). The DLNM results remained relatively stable when the degrees of freedom of the time, meteorological, and air pollution variables were changed (see Figs. S4, S5, and Table S2). In the exposure-response and exposure-lag dimensions, the results were also robust to changes in the degrees of freedom used to fit absolute humidity and the lag parameters (see Table S2).
When temperature was incorporated into the model with lags of 0, 0–7, and 0–14 days, the results were again relatively stable (see Table S3). The sensitivity analyses therefore indicate that the results of the absolute humidity model in this study are stable and reliable.
Finally, this study also compares the absolute humidity model with the temperature model. The setting parameters of the temperature model are basically the same as the absolute humidity model, and the lag days are also set at 35 days. It was found that the absolute humidity model had a slightly higher attributable fraction for mortality from respiratory disease than the temperature model (see Fig. S7 and Table S4).
Most previous studies have examined the effect of temperature on death from respiratory disease; this study aims to estimate the effect of absolute humidity. A review of previous studies suggests that although relative humidity is the most commonly used humidity variable, it should be used with caution and avoided when near-saturation conditions are not medically relevant [16]. In this study, a DLNM was used to analyze the influence of absolute humidity on the respiratory disease mortality burden in the population. The results showed an association between death from respiratory disease and both dry and moist conditions in Guangzhou: more than one fifth of the deaths, 9154 (95% CI 6026 to 11776), can be attributed to non-optimal absolute humidity.
The overall exposure-response relationship between absolute humidity and respiratory disease shows an M-shaped non-linear trend. This M shape is probably due to the scarcity of days with extremely low or extremely high absolute humidity; the small sample size at the extremes produces relatively large uncertainty, which is why the confidence intervals at extreme humidity in our study are not significant. In temperate countries, influenza outbreaks have been found to be closely correlated with seasonal variations in temperature and absolute humidity, and a U-shaped relationship between absolute humidity and influenza has been hypothesized but not fully validated in subtropical and tropical regions [35]; those findings are broadly consistent with this study.
To date, the mechanism behind the association between absolute humidity and respiratory disease mortality is still not fully understood. Under low ambient humidity, the stability of the influenza virus in aerosols is enhanced. High humidity may produce droplets that bind to the influenza virus, increasing the virus concentration in the air around the source of infection [36]. Laboratory studies in guinea pig models have shown that low absolute humidity promotes the survival and transmission of influenza viruses, with effects lasting about a month [30, 32].
Some studies have pointed out that there is a positive correlation between absolute humidity and influenza events in subtropical and tropical regions, while there is a negative correlation between absolute humidity and influenza events in temperate regions [24], which is different from the results of this study. Further research is needed to investigate the mechanism and association between influenza and pneumonia-related deaths and absolute humidity. Studies have shown that high deposition rates of aerosols inhaled in hot and humid environments may indicate that individuals face higher health risks than normal environmental conditions and that patients are more likely to develop respiratory symptoms [37]. A recent study also suggests that high humidity and heat in the air favor the deposition of submicron aerosols and infectious aerosols in the respiratory tract, which may be associated with an increase in respiratory infections, asthma, and chronic obstructive pulmonary disease [38]. Studies have shown that the effect of total suspended particle on hospitalization for COPD may be increased under low humidity conditions [39]. A study in Taiwan found that low humidity was associated with exacerbation and increase of chronic obstructive pulmonary disease [40], while no association was found between humidity and exacerbation of chronic obstructive pulmonary disease in Istanbul [25]. Notably, humidity can indirectly influence abnormal morbidity and mortality by influencing heat stress and hydration status [41]. When the body overheats, the skin surface transfers heat to the surrounding environment through convection, long-wave radiation exchange, and evaporation of water on the skin surface. High humidity weakens the epidermal-atmospheric moisture gradient, thus impeding evaporation and heat dissipation on the skin surface, leading to insufficient cooling of the body. In severe cases, it may develop into symptoms such as heat syncope, heat cramps, and heat exhaustion leading to death, while low humidity can lead to dehydration and aggravate existing diseases [16]. In summary, a growing body of evidence suggests that when appropriate humidity variables are selected, humidity is associated with abnormal changes in respiratory disease.
In addition, the present study compared the temperature model with the absolute humidity model and found that the absolute humidity model gave a slightly higher attributable fraction for respiratory disease deaths than the temperature model (see Table S4). Non-optimal absolute humidity accounted for an overall attributable fraction of 21.57% of total respiratory disease mortality in Guangzhou, higher than the estimate for non-optimal temperature in our study (AF = 18.40%). Absolute humidity may therefore be a more sensitive exposure indicator for the respiratory disease mortality burden than temperature; the reasons for this deserve further study. We also found that extremely low and high absolute humidity produced much smaller attributable fractions than moderately low and high AH, simply because they accounted for fewer days.
In this study, daily respiratory disease mortality data for Guangzhou from 2013 to 2018 were used to analyze the effect of short-term exposure to absolute humidity, with further analysis of the two major causes of death, influenza and pneumonia and chronic lower respiratory diseases, and stratified analyses by age and gender, which helps to clarify the attributable burden of absolute humidity in different groups. However, this study has some limitations. First, the meteorological data come from fixed monitoring stations and cannot accurately represent individual exposure, so some bias is unavoidable. Second, the biological mechanisms by which absolute humidity affects respiratory disease need further study, and more potential influencing factors should be included in the model in subsequent work for a more comprehensive analysis. Finally, this is an ecological study, and interpretation of the results at the individual level requires further exploration, so caution should be exercised in inferring causality between absolute humidity exposure and death from respiratory disease.
In conclusion, both high and low absolute humidity are responsible for a considerable respiratory disease mortality burden. Local decision makers and communities should raise awareness of the harmful effects of excessively dry or humid environments and take appropriate protective measures, which would help to reduce deaths from respiratory disease.
The data supporting this study came from Guangzhou Center for Disease Control and Prevention, China, but the data were used under license in this study, so cannot be made public. However, data can be obtained from the corresponding author with the permission of the Guangzhou Center for Disease Control and Prevention, China.
RESP:
Respiratory disease
DLNM:
Distributed lag non-linear model
I&P:
Influenza and pneumonia
CLRD:
Chronic lower respiratory disease
ICD10:
The 10th revision of the International Classification of Diseases
AH:
Absolute humidity
MMAH:
Minimum mortality absolute humidity
MMAHP:
Minimum mortality absolute humidity percentile
PAM:
Population attributable mortality
AF:
Attributable fraction
CI:
Confidence interval
Xu G, Chen L, Chen Y, Wang T, Shen FH, Wang K, et al. Impact of heatwaves and cold spells on the morbidity of respiratory diseases: a case study in Lanzhou, China. Phys Chem Earth Parts A/B/C. 2019;115:102825.
Ma Y, Zhou J, Sixu Y, Yu Z, Wang F, Zhou J. Effects of extreme temperatures on hospital emergency room visits for respiratory diseases in Beijing, China. Environ Sci Pollut Res. 2019;26(3):3055–64.
Demain JG. Climate Change and the Impact on Respiratory and Allergic Disease: 2018. Curr Allergy Asthma Rep. 2018;18(4):22.
Joshi M, Goraya H, Joshi A, Bartter T. Climate change and respiratory diseases: a 2020 perspective. Curr Opin Pulmon Med. 2019;26(2):119–27.
Barnes CS. Impact of Climate Change on Pollen and Respiratory Disease. Curr Allergy Asthma Rep. 2018;18(11):59.
Dong S, Wang C, Han Z, Wang Q. Projecting impacts of temperature and population changes on respiratory disease mortality in Yancheng. Phys Chem Earth Parts A/B/C. 2020;117:102867.
Ha J, Shin Y, Kim H. Distributed lag effects in the relationship between temperature and mortality in three major cities in South Korea. Sci Total Environ. 2011;409(18):3274–80.
Lin Q, Lin H, Liu T, Lin Z, Lawrence WR, Zeng W, et al. The effects of excess degree-hours on mortality in Guangzhou, China. Environ Res. 2019;176:108510.
Ma W, Zeng W, Zhou M, Wang L, Rutherford S, Lin H, et al. The short-term effect of heat waves on mortality and its modifiers in China: An analysis from 66 communities. Environ Int. 2015;75:103–9.
Yang J, Yin P, Zhou M, Ou CQ, Guo Y, Gasparrini A, et al. Cardiovascular mortality risk attributable to ambient temperature in China. Heart. 2015;101(24):1966–72.
Gasparrini A, Guo Y, Hashizume M, Lavigne E, Zanobetti A, Schwartz J, et al. Mortality risk attributable to high and low ambient temperature: a multicounty observational study. Lancet. 2015;386(9991):369–75.
Fallah Ghalhari G, Mayvaneh F. Effect of air temperature and universal thermal climate index on respiratory diseases mortality in Mashhad, Iran. Arch Iran Med. 2016;19(9):618–24.
Zhao Q, Zhao Y, Li S, Zhang Y, Wang Q, Zhang H, et al. Impact of ambient temperature on clinical visits for cardio-respiratory diseases in rural villages in northwest China. Sci Total Environ. 2018;612:379–85.
Held I, Soden B. Water Vapor Feedback and Global Warming. Annu Rev Energy Environ. 2000;25(1):441–75.
Sherwood SC, Meyer CL. The General Circulation and Robust Relative Humidity. Journal of Climate. 2006;19(24):6278–90. https://journals.ametsoc.org/view/journals/clim/19/24/jcli3979.1.xml.
Davis RE, McGregor GR, Enfield KB. Humidity: a review and primer on atmospheric moisture and human health. Environ Res. 2016;144(Pt A):106–16.
Mendell MJ, Mirer AG, Cheung K, Tong M, Douwes J. Respiratory and allergic health effects of dampness, mold, and dampness-related agents: a review of the epidemiologic evidence. Environ Health Perspect. 2011;119(6):748–56.
Wu Q, He J, Zhang WY, Zhao KF, Jin J, Yu JL, et al. The contrasting relationships of relative humidity with influenza A and B in a humid subtropical region. Environ Sci Pollut Res. 2021;28(27):36828–36.
Goggins WB, Woo J, Ho S, Chan EY, Chau PH. Weather, season, and daily stroke admissions in Hong Kong. Int J Biometeorol. 2012;56(5):865–72.
Schwartz J, Samet JM, Patz JA. Hospital admissions for heart disease: the effects of temperature and humidity. Epidemiology. 2004;15(6):755–61.
Barreca AI, Shimshack JP. Absolute humidity, temperature, and influenza mortality: 30 years of county-level evidence from the United States. Am J Epidemiol. 2012;176(Suppl 7):S114–22.
Thai PQ, Choisy M, Duong TN, Thiem VD, Yen NT, Hien NT, et al. Seasonality of absolute humidity explains seasonality of influenza-like illness in Vietnam. Epidemics. 2015;13:65–73.
Mäkinen TM, Juvonen R, Jokelainen J, Harju TH, Peitso A, Bloigu A, et al. Cold temperature and low humidity are associated with increased occurrence of respiratory tract infections. Respir Med. 2009;103(3):456–62.
Chong KC, Lee TC, Bialasiewicz S, Chen J, Smith DW, Choy WSC, et al. Association between meteorological variations and activities of influenza A and B across different climate zones: a multi-region modelling analysis across the globe. J Infect. 2020;80(1):84–98.
Hapcioglu B, Issever H, Koçyiğit E, Disci R, Vatansever S, Ozdilli K. The effect of air pollution and meteorological parameters on chronic obstructive pulmonary disease at an istanbul hospital. Indoor Built Environ. 2006;15(2):147–53.
Peci A, Winter AL, Li Y, Gnaneshan S, Liu J, Mubareka S, et al. Effect of absolute and relative humidity, temperature and wind speed on influenza activity in Toronto, Canada. Appl Environ Microbiol. 2019;85(6):e02426–18.
Anderson BG, Bell ML. Weather-related mortality: how heat, cold, and heat waves affect mortality in the United States. Epidemiology. 2009;20(2):205–13.
Gasparrini A, Armstrong B, Kenward MG. Distributed lag non-linear models. Stat Med. 2010;29(21):2224–34.
Li X, Zhou M, Yu M, Xu Y, Li J, Xiao Y, et al. Life loss per death of respiratory disease attributable to non-optimal temperature: results from a national study in 364 Chinese locations. Environ Res Lett. 2021;16(3):035001.
Shaman J, Pitzer V, Viboud C, Lipsitch M, Grenfell B. Absolute Humidity and the Seasonal Onset of Influenza in the Continental US. PLoS Curr. 2009;2:RRN1138.
Zeng J, Zhang X, Yang J, Bao J, Xiang H, Dear K, et al. Humidity may modify the relationship between temperature and cardiovascular mortality in Zhejiang Province, China. Int J Environ Res Public Health. 2017;14(11):1383.
Shaman J, Kohn M. Absolute humidity modulates influenza survival, transmission, and seasonality. Proc Natl Acad Sci U S A. 2009;106(9):3243–8.
Gasparrini A, Leone M. Attributable risk from distributed lag models. BMC Med Res Methodol. 2014;14:55.
Dimitrova A, Ingole V, Basagaña X, Ranzani O, Milà C, Ballester J, et al. Association between ambient temperature and heat waves with mortality in South Asia: Systematic review and meta-analysis. Environ Int. 2021;146:106170.
Deyle ER, Maher MC, Hernandez RD, Basu S, Sugihara G. Global environmental drivers of influenza. Proc Natl Acad Sci U S A. 2016;113(46):13081–6.
Xie X, Li Y, Chwang AT, Ho PL, Seto WH. How far droplets can move in indoor environments--revisiting the Wells evaporation-falling curve. Indoor Air. 2007;17(3):211–25.
Xi J, Si X, Kim J. Chapter 5. Characterizing respiratory airflow and aerosol condensational growth in children and adults using an imaging-CFD approach. Heat Transfer and Fluid Flow in Biological Processes; 2015. p. 125–55.
Ishmatov A. Influence of weather and seasonal variations in temperature and humidity on supersaturation and enhanced deposition of submicron aerosols in the human respiratory tract. Atmos Environ. 2020;223:117226.
Leitte AM, Petrescu C, Franck U, Richter M, Suciu O, Ionovici R, et al. Respiratory health, effects of ambient air pollution and its modification by air humidity in Drobeta-Turnu Severin, Romania. Sci Total Environ. 2009;407(13):4004–11.
Tseng CM, Chen YT, Ou SM, Hsiao YH, Li SY, Wang SJ, et al. The effect of cold temperature on increased exacerbation of chronic obstructive pulmonary disease: a nationwide study. PLoS One. 2013;8(3):e57066.
Parsons K. Human Thermal Environments: The effects of hot, moderate and cold environments on human health, Comfort and performance; 1993.
This work is supported by the National Social Science Foundation of China (Grant No. 17BXW104).
School of Environmental Science and Engineering, Guangdong University of Technology, Guangzhou, 510006, China
Shutian Chen & Kairong Xiong
School of Journalism & Communication, Guangdong University of Foreign Studies, Guangzhou, 510006, China
Chao Liu
Guangzhou Center for Disease Control and Prevention, Guangzhou, 510440, China
Guozhen Lin & Hang Dong
Department Public Health Solutions, National Institute for Health and Welfare, 00300, Helsinki, Finland
Otto Hänninen
Shutian Chen
Guozhen Lin
Hang Dong
Kairong Xiong
Methodology, investigation, and writing—original draft, CS. Conceptualization, writing—review and editing, funding acquisition, XK. Investigation and data curation, DH. Conceptualization, supervision, LC. Data curation, LG. Writing—review and editing, HO. All authors have read and agreed to the published version of the manuscript.
Correspondence to Kairong Xiong.
Spearman correlation analysis of respiratory diseases mortality, meteorological factors and air pollutant in Guangzhou, 2013-2018. Figure S1. Overall cumulative relative risks (RRs) of deaths from influenza and pneumonia across lag 0-35 days (with 95% CI, shaded grey) in Guangzhou and daily mean absolute humidity distribution. Figure S2. Overall cumulative relative risks (RRs) of deaths from chronic lower respiratory disease across lag 0-35 days (with 95% CI, shaded grey) in Guangzhou and daily mean absolute humidity distribution. Figure S3. The residual variation scatter plots over time for main model in daily RESP deaths in Guangzhou. Figure S4. Sensitivity analyses of overall cumulative relative risks (RRs) of respiratory disease mortality due to absolute humidity by changing degrees of freedom (6 to 8) for time variables. Figure S5. Sensitivity analyses of overall cumulative relative risks (RRs) of respiratory disease mortality due to absolute humidity by changing degrees of freedom (3 to 5) for meteorological variables and air pollution variables. Figure S6. Sensitivity analyses of overall cumulative relative risks (RRs) of respiratory disease mortality due to absolute humidity by changing the lag parameters of the included temperature. Table S2. Sensitivity analysis results on the effects of df/parameter in DLNM on the associations between absolute humidity and respiratory diseases mortality burden. Table S3. Sensitivity analysis results on the effects after controlling for temperature at lag 0, 0-7 and 0-14 days in the model. Figure S7. Overall cumulative relative risks (RRs) of deaths from respiratory diseases across lag 0-35 days (with 95% CI, shaded grey) in Guangzhou and daily mean temperature distribution. Table S4. Comparison of respiratory diseases mortality burden due to temperature models versus absolute humidity models.
Chen, S., Liu, C., Lin, G. et al. The role of absolute humidity in respiratory mortality in Guangzhou, a hot and wet city of South China. Environ Health Prev Med 26, 109 (2021). https://doi.org/10.1186/s12199-021-01030-3
Disease burden | CommonCrawl |
Space complexity
The space complexity of an algorithm or a computer program is the amount of memory space required to solve an instance of the computational problem as a function of characteristics of the input. It is the memory required by an algorithm until it executes completely.[1] This includes the memory space used by its inputs, called input space, and any other (auxiliary) memory it uses during execution, which is called auxiliary space.
Similar to time complexity, space complexity is often expressed asymptotically in big O notation, such as $O(n),$ $O(n\log n),$ $O(n^{\alpha }),$ $O(2^{n}),$ etc., where n is a characteristic of the input influencing space complexity.
Space complexity classes
Analogously to time complexity classes DTIME(f(n)) and NTIME(f(n)), the complexity classes DSPACE(f(n)) and NSPACE(f(n)) are the sets of languages that are decidable by deterministic (respectively, non-deterministic) Turing machines that use $O(f(n))$ space. The complexity classes PSPACE and NPSPACE allow $f$ to be any polynomial, analogously to P and NP. That is,
${\mathsf {PSPACE}}=\bigcup _{c\in \mathbb {Z} ^{+}}{\mathsf {DSPACE}}(n^{c})$
and
${\mathsf {NPSPACE}}=\bigcup _{c\in \mathbb {Z} ^{+}}{\mathsf {NSPACE}}(n^{c})$
Relationships between classes
The space hierarchy theorem states that, for all space-constructible functions $f(n),$ there exists a problem that can be solved by a machine with $f(n)$ memory space, but cannot be solved by a machine with asymptotically less than $f(n)$ space.
The following containments between complexity classes hold.[2]
${\mathsf {DTIME}}(f(n))\subseteq {\mathsf {DSPACE}}(f(n))\subseteq {\mathsf {NSPACE}}(f(n))\subseteq {\mathsf {DTIME}}\left(2^{O(f(n))}\right)$
Furthermore, Savitch's theorem gives the reverse containment that if $f\in \Omega (\log(n)),$
${\mathsf {NSPACE}}(f(n))\subseteq {\mathsf {DSPACE}}\left((f(n))^{2}\right).$
As a direct corollary, ${\mathsf {PSPACE}}={\mathsf {NPSPACE}}.$ This result is surprising because it suggests that non-determinism can reduce the space necessary to solve a problem only by a small amount. In contrast, the exponential time hypothesis conjectures that for time complexity, there can be an exponential gap between deterministic and non-deterministic complexity.
The Immerman–Szelepcsényi theorem states that, again for $f\in \Omega (\log(n)),$ ${\mathsf {NSPACE}}(f(n))$ is closed under complementation. This shows another qualitative difference between time and space complexity classes, as nondeterministic time complexity classes are not believed to be closed under complementation; for instance, it is conjectured that NP ≠ co-NP.[3][4]
LOGSPACE
L or LOGSPACE is the set of problems that can be solved by a deterministic Turing machine using only $O(\log n)$ memory space with regard to the input size. Even a single counter that can index the entire $n$-bit input requires $\log n$ space, so LOGSPACE algorithms can maintain only a constant number of counters or other variables of similar bit complexity.
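As a small illustration (not part of the original article), the routine below decides whether a binary string contains equally many 0s and 1s. Its only working storage is two counters bounded by the input length, i.e. $O(\log n)$ bits each, which is the kind of computation LOGSPACE is meant to capture; it is a sketch, not a formal Turing-machine construction.

```python
def balanced_bits(s: str) -> bool:
    """Decide whether a binary string has equally many '0's and '1's using two counters."""
    zeros = ones = 0
    for ch in s:          # the input is read-only and scanned left to right
        if ch == '0':
            zeros += 1
        else:
            ones += 1
    return zeros == ones

print(balanced_bits("001011"))  # True
```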
LOGSPACE and other sub-linear space bounds are useful when processing large data that cannot fit into a computer's RAM. They are related to streaming algorithms, but only restrict how much memory can be used, while streaming algorithms place further constraints on how the input is fed into the algorithm. This class also sees use in the field of pseudorandomness and derandomization, where researchers consider the open problem of whether L = RL.[5][6]
The corresponding nondeterministic space complexity class is NL.
Auxiliary space complexity
The term auxiliary space refers to space other than that consumed by the input. Auxiliary space complexity could be formally defined in terms of a Turing machine with a separate input tape which cannot be written to, only read, and a conventional working tape which can be written to. The auxiliary space complexity is then defined (and analyzed) via the working tape. For example, consider the depth-first search of a balanced binary tree with $n$ nodes: its auxiliary space complexity is $\Theta (\log n).$
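As a small illustration (not from the original article), the recursive traversal below visits every node of a nested binary tree; beyond the input itself, its only working memory is the call stack, whose depth for a balanced tree with $n$ nodes is $\Theta (\log n)$, which is exactly the auxiliary space discussed above.

```python
def tree_sum(tree):
    """Sum of the values in a nested (left, value, right) binary tree via depth-first search."""
    if tree is None:
        return 0
    left, value, right = tree
    # One stack frame per level of the tree: Theta(log n) auxiliary space when balanced.
    return tree_sum(left) + value + tree_sum(right)

# A balanced tree with 7 nodes; the recursion is never more than 3 calls deep.
tree = (((None, 1, None), 2, (None, 3, None)), 4, ((None, 5, None), 6, (None, 7, None)))
print(tree_sum(tree))  # 28
```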
See also
• Analysis of algorithms – Study of resources used by an algorithm
• Computational complexity theory – Inherent difficulty of computational problems
• Computational resource – Something a computer needs to solve a problem, such as processing steps or memory
References
1. Kuo, Way; Zuo, Ming J. (2003), Optimal Reliability Modeling: Principles and Applications, John Wiley & Sons, p. 62, ISBN 9780471275459
2. Arora, Sanjeev; Barak, Boaz (2007), Computational Complexity : A Modern Approach (PDF) (draft ed.), p. 76, ISBN 9780511804090
3. Immerman, Neil (1988), "Nondeterministic space is closed under complementation" (PDF), SIAM Journal on Computing, 17 (5): 935–938, doi:10.1137/0217058, MR 0961049
4. Szelepcsényi, Róbert (1987), "The method of forcing for nondeterministic automata", Bulletin of the EATCS, 33: 96–100
5. Nisan, Noam (1992), "RL ⊆ SC", Proceedings of the 24th ACM Symposium on Theory of computing (STOC '92), Victoria, British Columbia, Canada, pp. 619–623, doi:10.1145/129712.129772, S2CID 11651375.
6. Reingold, Omer; Trevisan, Luca; Vadhan, Salil (2006), "Pseudorandom walks on regular digraphs and the RL vs. L problem" (PDF), STOC'06: Proceedings of the 38th Annual ACM Symposium on Theory of Computing, New York: ACM, pp. 457–466, doi:10.1145/1132516.1132583, MR 2277171, S2CID 17360260
\begin{document}
\baselineskip=12pt
\title{A characterisation of elementary abelian $3$-groups}
\author{C. S. Anabanti\footnote{The author is supported by a Birkbeck PhD Scholarship.}\\ \centering{\footnotesize The author dedicates this paper to Professor Sarah Hart with admiration and respect.}}
\date{}
\maketitle
\renewcommand{\thefootnote}{}
\footnote{2010 \emph{Mathematics Subject Classification}: Primary 11B75; Secondary 20D60, 20K01, 05E15.}
\footnote{\emph{Key words and phrases}: Sum-free sets, maximal sum-free sets, elementary abelian groups.}
\renewcommand{\thefootnote}{\arabic{footnote}} \setcounter{footnote}{0}
\begin{abstract} \noindent T{\u a}rn{\u a}uceanu [Archiv der Mathematik, \textbf{102 (1)}, (2014), 11--14] gave a characterisation of elementary abelian $2$-groups in terms of their maximal sum-free sets. His theorem states that a finite group $G$ is an elementary abelian $2$-group if and only if the set of maximal sum-free sets coincides with the set of complements of the maximal subgroups. A corollary is that the number of maximal sum-free sets in an elementary abelian $2$-group of finite rank $n$ is $2^n-1$. Regretfully, we show here that the theorem is wrong. We then prove a correct version of the theorem from which the desired corollary can be deduced. Moreover, we give a characterisation of elementary abelian $3$-groups in terms of their maximal sum-free sets. A corollary to our result is that the number of maximal sum-free sets in an elementary abelian $3$-group of finite rank $n$ is $3^n-1$. Finally, for prime $p>3$ and $n\in \mathbb{N}$, we show that there is no direct analogue of this result for elementary abelian $p$-groups of finite rank $n$. \end{abstract}
\section{Preliminaries} The well-known result of Schur, which says that whenever we partition the set of positive integers into a finite number of parts, at least one of the parts contains three integers $x,y$ and $z$ such that $x+y=z$, introduced the study of sum-free sets. Schur \cite{S1917} gave the result while showing that Fermat's last theorem does not hold in $\mathbb{F}_p$ for sufficiently large $p$. The result was later extended to groups as follows: A non-empty subset $S$ of a group $G$ is sum-free if for all $s_1,s_2 \in S$, $s_1s_2\notin S$. (Note that the case $s_1=s_2$ is included in this restriction.) An example of a sum-free set in a finite group $G$ is any non-trivial coset of a subgroup of $G$. Sum-free sets have applications in Ramsey theory and are also closely related to the widely studied concept of caps in finite geometry. Some questions that appear interesting in the study of sum-free sets are: (i) How large can a sum-free set in a finite group be? (ii) Which finite groups contain maximal by inclusion sum-free sets of small sizes? (iii) How many maximal by cardinality sum-free sets are there in a given finite group?
Each of these questions has been attempted by several researchers; though none is fully answered. For question (i), Diananda and Yap \cite{DY1969}, in 1969, following an earlier work of Yap \cite{Y1969}, determined the sizes of maximal by cardinality sum-free sets in finite abelian groups $G$, where $|G|$ is divisible by a prime $p \equiv 2$ mod $3$, and where $|G|$ has no prime factor $p\equiv 2$ mod $3$ but $3$ is a factor of $|G|$. They gave a good bound in the case where every prime factor of $|G|$ is congruent to $1~mod~3$. Green and Rusza \cite{GR2005} in 2005 completely answered question (i) in the finite abelian case. The question is still open for the non-abelian case, even though there has been some progress by Kedlaya \cite{K1997,K1998}, Gowers \cite{G2008}, among others. For question (ii), Street and Whitehead \cite{SW1974} began research in that area in 1974. They called a maximal by inclusion sum-free set, a locally maximal sum-free set (LMSFS for short), and calculated all LMSFS in groups of small orders, up to $16$ in \cite{SW1974,SW1974A} as well as a few higher sizes. In 2009, Giudici and Hart \cite{GH2009} started the classification of finite groups containing LMSFS of small sizes. Among other results, they classified all finite groups containing LMSFS of sizes $1$ and $2$, as well as some of size $3$. The size $3$ problem was resolved in \cite{AH2016}. Question (ii) is still open for sizes $k\geq 4$.\\ \\ To be consistent with our notations, we will use the term `maximal' to mean `maximal by cardinality' and `locally maximal' to mean `maximal by inclusion'. T{\u a}rn{\u a}uceanu \cite{T2014} in 2014 gave a characterisation of elementary abelian $2$-groups in terms of their maximal sum-free sets. His theorem (see Theorem 1.1 of \cite{T2014}) states that ``a finite group $G$ is an elementary abelian $2$-group if and only if the set of maximal sum-free sets coincides with the set of complements of the maximal subgroups". The author of \cite{T2014} didn't define the term maximal sum-free sets. Unfortunately, the theorem is false whichever definition is used. If we take `maximal' in the theorem to mean `maximal by cardinality', then a counter example is the cyclic group $C_4$ of order $4$, given by $C_4=\langle x \mid x^4=1\rangle$. Here, there is a unique maximal (by cardinality) sum-free set namely $\{x,x^3\}$, and it is the complement of the unique maximal subgroup. But $C_4$ is not elementary abelian. On the other hand, if we take `maximal' to mean `maximal by inclusion', then the theorem will still be wrong since $S=\{x_1,x_2,x_3,x_4,x_1x_2x_3x_4\}$ is a maximal by inclusion sum-free set in $C_2^4=\langle x_1,x_2,x_3,x_4 \mid x_i^2=1,x_ix_j=x_jx_i \text{ for } 1 \leq i,j \leq 4 \rangle$, but does not coincide with any complement of a maximal subgroup of $C_2^4$. \\\\ For a prime $p$ and $n\in \mathbb{N}$, we write $\mathbb{Z}_p^n$ for the elementary abelian $p$-group of finite rank $n$. We recall here that the number of maximal subgroups of $\mathbb{Z}_p^n$ is $\sum\limits_{k=0}^{n-1}p^k$. In this paper, we give a correction to Theorem 1.1 of \cite{T2014} which will then make its desired corollary hold. For the rest of this section, we state the main result of this paper and its immediate corollary. Recall that $\Phi(G)$ is the Frattini subgroup of $G$. 
\begin{theorem}\label{thm1} A finite group $G$ is an elementary abelian $3$-group if and only if the set of non-trivial cosets of each maximal subgroup of $G$ coincides with two maximal sum-free sets in $G$, every maximal sum-free set is a non-trivial coset of a maximal subgroup, and $\Phi(G)=1$. \end{theorem}
\begin{corollary} The number of maximal sum-free sets in $\mathbb{Z}_3^n$ is $3^n-1$. \end{corollary} \begin{proof} As the number of maximal subgroups of $\mathbb{Z}_3^n$ is $\frac{3^n-1}{2}$, it follows immediately from Theorem \ref{thm1} that the number of maximal sum-free sets in $\mathbb{Z}_3^n$ is $2(\frac{3^n-1}{2})=3^n-1$. \end{proof}
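\noindent For the smallest case $n=1$ the count in the corollary can be checked by hand: writing $\mathbb{Z}_3=\langle x \mid x^3=1\rangle$, the only maximal subgroup is the trivial subgroup $\{1\}$, whose non-trivial cosets are $\{x\}$ and $\{x^2\}$. The set $\{x,x^2\}$ is not sum-free, since $x\cdot x=x^2$ lies in it; so $\{x\}$ and $\{x^2\}$ are precisely the maximal sum-free sets of $\mathbb{Z}_3$, and there are $2=3^1-1$ of them.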
\section{Main results} \noindent Let $S$ be a sum-free set in a finite group $G$. We define $SS=\{xy \mid x,y \in S\}$, $S^{-1}=\{x^{-1} \mid x\in S\}$ and $SS^{-1}=\{xy^{-1} \mid x,y \in S\}$. Clearly, $S\cap SS=\varnothing$. Moreover, $S\cap SS^{-1}=\varnothing$ as well; for if $x,y,z\in S$ with $x=yz^{-1}$, then $xz=y$, contradicting the fact that $S$ is sum-free.
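\noindent As a small illustration of these sets, take $G=\langle x \mid x^7=1\rangle$ and $S=\{x^2,x^3\}$; here $S$ is sum-free, since none of the products $x^4$, $x^5$, $x^6$ lies in $S$. Then $SS=\{x^4,x^5,x^6\}$, $S^{-1}=\{x^4,x^5\}$ and $SS^{-1}=\{1,x,x^6\}$, so indeed $S\cap SS=\varnothing$ and $S\cap SS^{-1}=\varnothing$.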
\subsection{Correction to Theorem 1.1 of \cite{T2014}} We begin with a remark that what is missing in the statement of Theorem 1.1 of \cite{T2014} is the assumption that $\Phi(G)=1$, where $\Phi(G)$ denotes the Frattini subgroup of $G$. A correction to Theorem 1.1 of \cite{T2014} is the following (from where the suggested corollary holds):
\begin{theorem}[The Correction]\label{T1} A finite group $G$ is an elementary abelian $2$-group if and only if the set of maximal sum-free sets coincides with the set of complements of the maximal subgroups, and $\Phi(G)=1$. \end{theorem}
\begin{remark}\label{R1}
(a) Let $G$ be a finite group and $S$ a sum-free set in $G$. For $x_1\in S$, define $x_1S:=\{x_1x_2|x_2 \in S\}$. As $|x_1S|=|S|$ and $S \cup x_1S \subseteq G$, with $S \cap x_1S=\varnothing$, we have that $2|S|\leq |G|$; so $|S|\leq \frac{|G|}{2}$. This shows that the size of a sum-free set in $G$ is at most $\frac{|G|}{2}$.\\ (b) We recall Lemma 3.1 of \cite{GH2009} which says that a sum-free set $T$ in a finite group $G$ is locally maximal if and only if $G=T \cup TT \cup TT^{-1} \cup T^{-1}T \cup \sqrt{T}$, where $\sqrt{T}=\{x\in G \mid x^2 \in T\}$. Now, let $S$ be a maximal sum-free set in $G=\mathbb{Z}_2^n$. As every maximal sum-free set is locally maximal and $SS=SS^{-1}=S^{-1}S$, with $\sqrt{S}=\varnothing$, Lemma 3.1 of \cite{GH2009} yields that $G=S\dot\cup SS$. \end{remark}
We now give a proof of Theorem \ref{T1}.
\begin{proof}
Let $G=\mathbb{Z}_2^n$, and $N$ be a maximal subgroup of $G$. Clearly, $|N|=\frac{|G|}{2}$. Let $M$ be the non-trivial coset of $N$ in $G$. Then $M$ is sum-free of size $\frac{|G|}{2}$ in $G$. By Remark \ref{R1}(a) therefore, $M$ is a maximal sum-free set in $G$. So each maximal subgroup of $G$ has its complement as a maximal sum-free set in $G$.
Next, we show that every maximal sum-free set in $G$ is the complement of a maximal subgroup of $G$. Let $S$ be a maximal sum-free set in $G$, and let $x\in S$ be arbitrary. From $xS \subseteq SS$, we obtain that $|xS|\leq |SS|$, and from Remark \ref{R1}(b) that $G=S\dot\cup SS$ and the fact that $|S|=\frac{|G|}{2}$, we obtain that $|SS|\leq |G|-|S|=|S|=|xS|$. Therefore $xS=SS$, and $G=S\dot\cup xS$. Define $H:=xS$. To show that $H$ is a subgroup of $G$, we simply show that $H$ is closed. Let $a$ and $b$ be elements of $H$. Then $a=xy$ and $b=xz$ for some $y,z\in S$. So $ab=yz\not\in S$. Hence $ab\in H$, and $H$ is closed. Thus $H$ is a subgroup of $G$. The fact that $H$ is a maximal subgroup of $G$ follows from the definition of $H$. Clearly, $S$ is the complement of $H$ in $G$ as desired.
The last part of the result, that $\Phi(G)=1$, follows from the fact that the intersection of the maximal subgroups of $G$ is trivial. For the converse, suppose $G$ is a finite group such that the set of maximal sum-free sets in $G$ is precisely the set of complements of the maximal subgroups of $G$, and $\Phi(G)=1$. Remark \ref{R1}(a) tells us that any maximal sum-free set in $G$ has size at most $\frac{|G|}{2}$. Therefore the complements of the maximal subgroups must have size at most $\frac{|G|}{2}$, and hence every maximal subgroup is of index $2$ in $G$. Now, let $R$ be a Sylow $2$-subgroup of $G$. If $G$ is not a $2$-group, then $R$ is a proper subgroup of $G$ and hence is contained in a maximal subgroup of $G$, which has index $2$; but then $2$ divides the index $|G:R|$, contradicting the fact that the index of a Sylow $2$-subgroup is odd. Therefore $G$ is a $2$-group. It is a basic result in group theory that for a $p$-group $P$, the quotient $P/\Phi(P)$ is always elementary abelian. As $\Phi(G)=1$, we conclude that $G$ is an elementary abelian $2$-group.
\end{proof}
\subsection{Proof of Theorem \ref{thm1}}
\begin{lemma}\label{N1} Let $S$ be sum-free in $G=\mathbb{Z}_3^n$ ($n\in \mathbb{N}$), and let $x\in S$. Then the following hold:\\ (i) any two sets in $\{S,x^{-1}S,xS\}$ are disjoint; (ii) any two sets in $\{S,SS^{-1},S^{-1}\}$ are disjoint.\\ Moreover, if $S$ is maximal, then the following also hold:\\
(iii) $S \cup x^{-1}S \cup xS=G$ and $|S|=\frac{|G|}{3}$; (iv) $S\cup SS^{-1} \cup S^{-1}=G$. \end{lemma}
\begin{proof}
(i) As $S$ is sum-free, $S\cap xS=\varnothing=S\cap x^{-1}S$. So we only need to show that $xS \cap x^{-1}S = \varnothing$. Suppose for contradiction that $xS \cap x^{-1}S \neq \varnothing$. Then there exist $y,z \in S$ such that $xy=x^{-1}z$. This means that $y=xz$; a contradiction. Therefore $xS \cap x^{-1}S= \varnothing$. The proof of (ii) is similar to (i). For (iii), as $S \cup x^{-1}S \cup xS \subseteq G$, we have that $3|S|\leq |G|$; whence $|S|\leq \frac{|G|}{3}$.
Each maximal subgroup of $G$ has size $\frac{|G|}{3}$. As any non-trivial coset of such a subgroup is sum-free and has size $\frac{|G|}{3}$, such a coset of the maximal subgroup must be maximal sum-free. Thus, $|S|=\frac{|G|}{3}$, and $S \cup x^{-1}S \cup xS=G$. The proof of (iv) is similar. \end{proof}
\begin{proposition}\label{N2} Suppose $S$ is a maximal sum-free set in an elementary abelian $3$-group $G$, and let $x\in S$. Then the following hold: (i) $x^{-1}S=S^{-1}S$; (ii) $xS=S^{-1}=SS$. \end{proposition}
\begin{proof} Let $S$ be a maximal sum-free set in an elementary abelian $3$-group $G$, and $x\in S$.\\
(i) Clearly, $x^{-1}S \subseteq S^{-1}S$; therefore $|x^{-1}S| \leq |S^{-1}S|$. By Lemma \ref{N1}(iv), $|S^{-1}S|\leq |G|-(|S|+|S^{-1}|)=3|S|-2|S|=|S|=|x^{-1}S|$. Therefore, $|x^{-1}S|=|S^{-1}S|$; whence $x^{-1}S=S^{-1}S$.\\ (ii) Let $y\in xS$. By Lemma \ref{N1}(i) and Proposition \ref{N2}(i), we have that $y\not\in (S\dot\cup SS^{-1})$. So Lemma \ref{N1}(iv) tells us that $y\in S^{-1}$, and we conclude that $xS\subseteq S^{-1}$. On the other hand, if $y\in S^{-1}$, then Lemma \ref{N1}(ii), Proposition \ref{N2}(i) and Lemma \ref{N1}(iii) yield $y\in xS$; so $S^{-1}\subseteq xS$. Therefore $xS=S^{-1}$. Now, \begin{equation}\label{E1} SS=\bigcup_{x\in S}xS=\bigcup_{x\in S}S^{-1}=S^{-1}. \end{equation} Thus, $xS=S^{-1}=SS$ as required. \end{proof}
\noindent Suppose $p$ is the smallest prime divisor of the order of a finite group $G$, and $H$ is a subgroup of index $p$ in $G$. Then $H$ is normal in $G$. This fact is well-known but we include a short proof for the reader's convenience. Suppose for a contradiction that $H$ is not normal.
Then for some $g\in G$, we have $H^g\neq H$. But $|H^gH|=\frac{|H^g||H|}{|H^g \cap H|}=\frac{|H|^2}{|H^g \cap H|}=|H|\,\frac{|H|}{|H^g\cap H|}\geq |H|p=|G|$; thus $H^gH=G$. Therefore, $g=(gh_1g^{-1})h_2$ for some $h_1,h_2\in H$. So $g=h_2h_1\in H$, and we conclude that $H^g=H$; a contradiction. Therefore $H$ is normal in $G$. \\
We now give a proof of Theorem \ref{thm1} \begin{proof} Let $G$ be an elementary abelian $3$-group of finite rank $n$. Clearly, every maximal subgroup of $G$ has size $3^{n-1}$; so is associated with two non-trivial cosets, which are maximal sum-free sets. Next, we show that every maximal sum-free set in $G$ is a non-trivial coset of a maximal subgroup of $G$. Suppose $S$ is a maximal sum-free set in $G$. Let $x\in S$ be arbitrary, and define $H:=x^{-1}S$. We show that $H$ is a subgroup of $G$. To do this, we show that $H$ is closed. Let $a$ and $b$ be elements of $H$. Then $a=x^{-1}y$ and $b=x^{-1}z$ for some $y,z\in S$. Since $ab=x^{-1}(x^{-1}yz)$, it is sufficient to show that $x^{-1}yz\in S$. Recall from Lemma \ref{N1}(iii) that $G=S\cup x^{-1}S\cup xS$. From Proposition \ref{N2}(ii) therefore, $G=S\cup x^{-1}S\cup S^{-1}$. Now, suppose $x^{-1}yz\in x^{-1}S$. Then there exists $q\in S$ such that $x^{-1}yz= x^{-1}q$. This implies that $yz=q$; a contradiction. Next suppose $x^{-1}yz\in S^{-1}$. Then there exists $q\in S$ such that $x^{-1}yz= q^{-1}$. So $yz=xq^{-1}$, and we obtain that $x^{-1}q=y^{-1}z^{-1}=(yz)^{-1}$; a contradiction as $x^{-1}q\in x^{-1}S$, $(yz)^{-1}\in (SS)^{-1}=S$ by Equation \ref{E1}, and Lemma \ref{N1}(i) tells us that $x^{-1}S\cap S=\varnothing$.
We have shown that $x^{-1}yz\not\in x^{-1}S\cup S^{-1}$. In the light of $G=S\cup x^{-1}S\cup S^{-1}$ therefore, $x^{-1}yz\in S$; whence, $H$ is closed. So $H$ is a subgroup of $G$. As $|H|=|x^{-1}S|=|S|=\frac{|G|}{3}$, we conclude that $H$ is a maximal subgroup of $G$, and $S=xH$ is a non-trivial coset of $H$ in $G$. So we have shown now that every maximal sum-free set in $G$ is a non-trivial coset of a maximal subgroup of $G$. The third part that $\Phi(G)=1$ follows from the fact that the intersection of maximal subgroups of $G$ is trivial. Conversely, suppose $G$ is a finite group such that the set of non-trivial cosets of each maximal subgroup of $G$ coincides with two maximal sum-free sets in $G$, every maximal sum-free set of $G$ is a coset of a maximal subgroup of $G$, and $\Phi(G)=1$. First and foremost, $G$ has no subgroup of index $2$; otherwise it will have a maximal sum-free set which is not a coset of a subgroup of index $3$. As the smallest index of a maximal subgroup of $G$ is $3$, any such subgroup must be normal in $G$. Let $H$ be a Sylow $3$-subgroup of $G$. Then either $H=G$ or $H$ is contained in a maximal subgroup (say $M$) of $G$. Suppose $H$ is contained in such maximal subgroup $M$. As $|G/M|=3$, we deduce immediately that $|G:H|$ is divisible by $3$; a contradiction! Therefore, $H=G$, and we conclude that $G$ is a $3$-group. Now, $G$ is an elementary abelian $3$-group follows from the fact that $\Phi(G)=1$ and $P/\Phi(P)$ is elementary abelian for every $p$-group $P$. \end{proof}
\noindent In conclusion, if $G=\mathbb{Z}_p^n$ for prime $p>3$ and $n\in \mathbb{N}$, then there exists a normal subgroup $N$ of $G$ such that $G/N\cong C_p$, and $C_p$ has a maximal sum-free set of size at least $2$ (the latter fact follows from the classification of groups containing maximal by inclusion sum-free sets of size $1$ in \cite[Theorem 4.1]{GH2009}).
The union of non-trivial cosets of $N$ corresponding to this maximal sum-free set of $C_p$ is itself sum-free in $G$. So $G$ has a maximal sum-free set of size at least $2|N|$. This argument shows that a direct analogue of Theorem \ref{thm1} is not possible for elementary abelian $p$-groups, where $p>3$ and prime.
\noindent \textbf{Chimere Stanley Anabanti}\\
Birkbeck, University of London\\
[email protected]
\end{document}
The seminar on mathematical physics will be held on select Mondays and Wednesdays from 12 – 1pm in CMSA Building, 20 Garden Street, Room G10. This year's Seminar will be organized by Artan Sheshmani and Yang Zhou.
The list of speakers for the upcoming academic year will be posted below and updated as details are confirmed. Titles and abstracts for the talks will be added as they are received.
Date Speaker Title/Abstract
9/10/2018 Xiaomeng Xu, MIT Title: Stokes phenomenon, Yang-Baxter equations and Gromov-Witten theory.
Abstract: This talk will include a general introduction to a linear differential system with singularities, and its relation with symplectic geometry, Yang-Baxter equations, quantum groups and 2d topological field theories.
9/17/2018 Gaetan Borot, Max Planck Institute
Title: A generalization of Mirzakhani's identity, and geometric recursion
Abstract: McShane obtained in 1991 an identity expressing the function 1 on the Teichmueller space of the once-punctured torus as a sum over simple closed curves. It was generalized to bordered surfaces of all topologies by Mirzakhani in 2005, from which she deduced a topological recursion for the Weil-Petersson volumes. I will present new identities which represent linear statistics of the simple length spectrum as a sum over homotopy class of pairs of pants in a hyperbolic surface, from which one can deduce a topological recursion for their average over the moduli space. This is an example of application of a geometric recursion developed with Andersen and Orantin.
9/24/2018 Yi Xie, Simons Center Title: sl(3) Khovanov module and the detection of planar theta-graph
Abstract: In this talk we will show that Khovanov's sl(3) link homology together with its module structure can be generalized for spatial webs (bipartite trivalent graphs). We will also introduce a variant called pointed sl(3) Khovanov homology. Those two combinatorial invariants are related to Kronheimer-Mrowka's instanton invariants $J^\sharp$ and $I^\sharp$ for spatial webs by two spectral sequences. As an application, we will prove that sl(3) Khovanov module and pointed sl(3) Khovanov homology both detect the planar theta graph.
10/01/2018 Dori Bejleri, MIT Title: Stable pair compactifications of the moduli space of degree one del Pezzo surfaces via elliptic fibrations
Abstract: A degree one del Pezzo surface is the blowup of P^2 at 8 general points. By the classical Cayley-Bacharach Theorem, there is a unique 9th point whose blowup produces a rational elliptic surface with a section. Via this relationship, we can construct a stable pair compactification of the moduli space of anti-canonically polarized degree one del Pezzo surfaces. The KSBA theory of stable pairs (X,D) is the natural extension to dimension 2 of the Deligne-Mumford-Knudsen theory of stable curves. I will discuss the construction of the space of interest as a limit of a space of weighted stable elliptic surface pairs and explain how it relates to some previous compactifications of the space of degree one del Pezzo surfaces. This is joint work with Kenny Ascher.
10/08/2018 Pei-Ken Hung, MIT Title: The linear stability of the Schwarzschild spacetime in the harmonic gauge: odd part
Abstract: We study the odd solution of the linearized Einstein equation on the Schwarzschild background and in the harmonic gauge. With the aid of Regge-Wheeler quantities, we are able to estimate the odd part of the Lichnerowicz d'Alembertian equation. In particular, we prove that the solution decays at rate $\tau^{-1+\delta}$ to a linearized Kerr solution.
10/15/2018 Chris Gerig, Harvard Title: A geometric interpretation of the Seiberg-Witten invariants
Abstract: Whenever the Seiberg-Witten (SW) invariants of a 4-manifold X are defined, there exist certain 2-forms on X which are symplectic away from some circles. When there are no circles, i.e. X is symplectic, Taubes' "SW=Gr" theorem asserts that the SW invariants are equal to well-defined counts of J-holomorphic curves (Taubes' Gromov invariants). In this talk I will describe an extension of Taubes' theorem to non-symplectic X: there are well-defined counts of J-holomorphic curves in the complement of these circles, which recover the SW invariants. This "Gromov invariant" interpretation was originally conjectured by Taubes in 1995. This talk will involve contact forms and spin-c structures.
*Room G02* Sze Ning Mak, Brown Title: Tetrahedral geometry in holoraumy spaces of 4D, $\mathcal{N}=1$ and $\mathcal{N}=2$ minimal supermultiplets
Abstract: In this talk, I will review the supersymmetry algebra. For Lie algebras, the concepts of weights and roots play an important role in the classification of representations. The lack of linear "eigen-equations" in supersymmetry leads to the failure to realize the Jordan-Chevalley decomposition of ordinary Lie algebras on the supersymmetry algebra. Therefore, we introduce the concept "holoraumy" for the 4D, $\mathcal{N}$-extended supersymmetry algebras, which allows us to explore the possible representations of supersymmetric systems of a specified size. The coefficients of the holoraumy tensors for different representations of the same size form a lattice space. For 4D, $\mathcal{N}=1$ minimal supermultiplets (4 bosons + 4 fermions), a tetrahedron is found in a 3D subspace of the 4D lattice parameter space. For 4D, $\mathcal{N}=2$ minimal supermultiplets (8 bosons + 8 fermions), 4 tetrahedrons are found in 4 different 3D subspaces of a 16D lattice parameter space.
10/29/2018 Francois Greer, Simons Center Title: Rigid Varieties with Lagrangian Spheres
Abstract: Let X be a smooth complex projective variety with its induced Kahler structure. If X admits an algebraic degeneration to a nodal variety, then X contains a Lagrangian sphere as the vanishing cycle. Donaldson asked whether the converse holds. We answer this question in the negative by constructing rigid complex threefolds with Lagrangian spheres using Teichmuller curves in genus 2.
11/05/2018 Siqi He, Simons Center Title: The Kapustin-Witten Equations, Opers and Khovanov Homology
Abstract: We will discuss a Witten's gauge theory program to define Jones polynomial and Khovanov homology for knots inside of general 3-manifolds by counting singular solutions to the Kapustin-Witten or Haydys-Witten equations. We will prove that the dimension reduction of the solutions moduli space to the Kapustin-Witten equations can be identified with Beilinson-Drinfeld Opers moduli space. We will also discuss the relationship between the Opers and a symplectic geometry approach to define the Khovanov homology for 3-manifolds. This is joint work with Rafe Mazzeo.
11/12/2018 No Seminar
11/19/2018 Yusuf Barış Kartal, MIT Title: Distinguishing symplectic fillings using dynamics of Fukaya categories
Abstract: The purpose of this talk is to produce examples of symplectic fillings that cannot be distinguished by the dynamical invariants at a geometric level, but that can be distinguished by the dynamics and deformation theory of (wrapped) Fukaya categories. More precisely, given a Weinstein domain $M$ and a compactly supported symplectomorphism $\phi$, one can produce another Weinstein domain $T_\phi$-\textbf{the open symplectic mapping torus}. Its contact boundary is independent of $\phi$ and it is the same as the boundary of $T_0\times M$, where $T_0$ is the once punctured torus. We will outline a method to distinguish $T_\phi$ from $T_0\times M$. This will involve the construction of a mirror symmetry inspired algebro-geometric model related to the Tate curve for the Fukaya category of $T_\phi$ and exploitation of dynamics on these models to distinguish them.
11/26/2018 Charles Doran (fill-in); Andreas Malmendier, Utah State (originally)
Speaker: Charles Doran
Title: Feynman Amplitudes from Calabi-Yau Fibrations
Abstract: This talk is a last-minute replacement for the originally scheduled seminar by Andreas Malmendier.
After briefly reviewing the interpretation of Feynman amplitudes as periods of graph hypersurfaces, we will focus on a class of graphs called the n-loop sunset (or banana) graphs. For these graphs, the underlying geometry consists of very special families of (n-1)-dimensional Calabi-Yau hypersurfaces of degree n+1 in projective n-space. We will present a reformulation using fibrations induced from toric geometry, which implies a simple, iterative construction of the corresponding Feynman integrals to all loop orders. We will then reinterpret the mass-parameter dependence in the case of the 3-loop sunset in terms of moduli of lattice-polarized elliptic fibered K3 surfaces, and describe a method to construct their Picard-Fuchs equations. (As it turns out, the 3-loop sunset K3 surfaces are all specializations of those constructed by Clingher-Malmendier in the originally scheduled talk!) This is joint work with Andrey Novoseltsev and Pierre Vanhove
Speaker: Andreas Malmendier
Title: (1,2) polarized Kummer surfaces and the CHL string
Abstract: A smooth K3 surface obtained as the blow-up of the quotient of a four-torus by the involution automorphism at all 16 fixed points is called a Kummer surface. Kummer surface need not be algebraic, just as the original torus need not be. However, algebraic Kummer surfaces obtained from abelian varieties provide a fascinating arena for string compactification as they are not trivial spaces but are sufficiently simple for one to be able to analyze most of their properties in detail.
In this talk, we give an explicit description for the relation between algebraic Kummer surfaces of Jacobians of genus-two curves with principal polarization and those associated to (1, 2)-polarized abelian surfaces from three different angles: the point of view of 1) the birational geometry of quartic surfaces in P^3 using even-eights, 2) elliptic fibrations on K3 surfaces of Picard-rank 17 over P^1 using Nikulin involutions, 3) theta-functions of genus-two using two-isogeny. Finally, we will explain how these (1,2)-polarized Kummer surfaces naturally appear as F-theory backgrounds for the so-called CHL string. (This is joint work with Adrian Clingher.)
12/03/2018 Monica Pate, Harvard Title: Gravitational Memory in Higher Dimensions
Abstract: A precise equivalence among Weinberg's soft graviton theorem, supertranslation conservation laws and the gravitational memory effect was previously established in theories of asymptotically flat gravity in four dimensions. Moreover, this triangle of equivalence was proposed to be a universal feature of generic theories of gauge and gravity. In theories of gravity in even dimensions greater than four, I will show that there exists a universal gravitational memory effect which is precisely equivalent to the soft graviton theorem in higher dimensions and a set of conservation laws associated to infinite-dimensional asymptotic symmetries.
12/10/2018 Fenglong You, University of Alberta Title: Relative and orbifold Gromov-Witten theory
Abstract: Given a smooth projective variety X and a smooth divisor D \subset X, one can study the enumerative geometry of counting curves in X with tangency conditions along D. There are two theories associated to it: relative Gromov-Witten invariants of (X,D) and orbifold Gromov-Witten invariants of the r-th root stack X_{D,r}. For sufficiently large r, Abramovich-Cadman-Wise proved that genus zero relative invariants are equal to the genus zero orbifold invariants of root stacks (with a counterexample in genus 1). We prove that higher genus orbifold Gromov-Witten invariants of X_{D,r} are polynomials in r and the constant terms are exactly higher genus relative Gromov-Witten invariants of (X,D). If time permits, I will also talk about further results in genus zero which allow us to study structures of genus zero relative Gromov-Witten theory. This is based on joint work with Hsian-Hua Tseng, Honglu Fan and Longting Wu.
1/28/2019 Per Berglund (University of New Hampshire) Title: A Generalized Construction of Calabi-Yau Manifolds and Mirror Symmetry
Abstract: We extend the construction of Calabi-Yau manifolds to hypersurfaces in non-Fano toric varieties. This provides a generalization of Batyrev's original work, allowing us to construct new pairs of mirror manifolds. In particular, we find novel K3-fibered Calabi-Yau manifolds, relevant for type IIA/heterotic duality in d=4, N=2, string compactifications. We also calculate the three-point functions in the A-model following Morrison-Plesser, and find perfect agreement with the B-model result using the Picard-Fuchs equations on the mirror manifold.
2/4/2019 Netanel (Nati) Rubin-Blaier (Cambridge) Title: Abelian cycles, and homology of symplectomorphism groups
Abstract: Based on work of Kawazumi-Morita, Church-Farb, and N. Salter in the classical case of Riemann surfaces, I will describe a technique which allows one to detect some higher homology classes in the symplectic Torelli group using parametrized Gromov-Witten theory. As an application, we will consider the complete intersection of two quadrics in $P^5$, and produce a non-trivial lower bound for the dimension of the 2nd group homology of the symplectic Torelli group (relative to a fixed line) with rational coefficients.
2/11/2019 Tristan Collins (MIT) Title: Stability and Nonlinear PDE in mirror symmetry
Abstract: A longstanding problem in mirror symmetry has been to understand the relationship between the existence of solutions to certain geometric nonlinear PDES (the special Lagrangian equation, and the deformed Hermitian-Yang-Mills equation) and algebraic notions of stability, mainly in the sense of Bridgeland. I will discuss progress in this direction through ideas originating in infinite dimensional GIT. This is joint work with S.-T. Yau.
2/25/2019 Hossein Movasati (IMPA) Title: Modular vector fields
Abstract: Using the notion of infinitesimal variation of Hodge structures I will define an R-variety which generalizes Calabi-Yau and abelian varieties, cubic four, seven and ten folds, etc. Then I will prove a theorem concerning the existence of certain vector fields in the moduli of enhanced R-varieties. These are algebraic incarnation of differential equations of the generating functions of GW invariants (Lian-Yau 1995), Ramanujan's differential equation between Eisenstein series (Darboux 1887, Halphen 1886, Ramanujan 1911), differential equations of Siegel modular forms (Resnikoff 1970, Bertrand-Zudilin 2005).
3/4/2019 Zhenkun Li (MIT) Title: Cobordism and gluing maps in sutured monopoles and applications.
Abstract: The sutured monopole Floer homology was constructed by Kronheimer and Mrowka on balanced sutured manifolds. Floer homologies on closed three manifolds are functors from oriented cobordism category to the category of modules over suitable rings. It is natural to ask whether the sutured monopole Floer homology can be viewed as a functor similarly. In the talk we will answer this question affirmatively.
In order to study the above problem, we will need to use an important tool called the gluing maps. Gluing maps were constructed in the Heegaard Floer theory by Honda, Kazez and Matić, while they were previously unknown in the monopole theory. In the talk we will also explain how to construct such gluing maps in monopoles and how to use them to define a minus version of knot monopole Floer homology.
3/11/2019 Yu Pan (MIT) Title: Augmentations and exact Lagrangian cobordisms.
Abstract: Augmentations are tightly connected to embedded exact Lagrangian fillings. However, not all the augmentations of a Legendrian knot come from embedded exact Lagrangian fillings. In this talk, we introduce immersed exact Lagrangian fillings into the picture and show that all the augmentations come from possibly immersed exact Lagrangian fillings. In this way, we realize augmentations, which are algebraic objects, fully geometrically. This is joint work in progress with Dan Rutherford.
3/25/2019 Eduardo Gonzalez (UMass Boston) Title: Stratifications in gauged Gromov-Witten theory.
Abstract: Let G be a reductive group and X be a smooth projective G-variety. In classical geometric invariant theory (GIT), there are stratifications of X that can be used to understand the geometry of the GIT quotients X//G and their dependence on choices. In this talk, after introducing basic theory, I will discuss the moduli of gauged maps, their relation to the Gromov-Witten theory of GIT quotients X//G and work in progress regarding stratifications of the moduli space of gauged maps as well as possible applications to quantum K-theory. This is joint work with D. Halpern-Leistner, P. Solis and C. Woodward.
4/1/2019 Athanassios S. Fokas (University of Cambridge) Title: Asymptotics: the unified transform, a new approach to the Lindelöf Hypothesis,and the ultra-relativistic limit of the Minkowskian approximation of general relativity
Abstract: Employing standard, as well as novel techniques of asymptotics, three different problems will be discussed: (i) The computation of the large time asymptotics of initial-boundary value problems via the unified transform (also known as the Fokas method, www.wikipedia.org/wiki/Fokas_method)[1]. (ii) The evaluation of the large t-asymptotics to all orders of the Riemann zeta function[2], and the introduction of a new approach to the Lindelöf Hypothesis[3]. (iii) The proof that the ultra relativistic limit of the Minkowskian approximation of general relativity [4] yields a force with characteristics of the strong force, including confinement and asymptotic freedom[5].
[1] J. Lenells and A. S. Fokas, The Nonlinear Schrödinger Equation with t-Periodic Data: I. Exact Results, Proc. R. Soc. A 471, 20140925; J. Lenells and A. S. Fokas, The Nonlinear Schrödinger Equation with t-Periodic Data: II. Perturbative Results, Proc. R. Soc. A 471, 20140926 (2015).
[2] A.S. Fokas and J. Lenells, On the Asymptotics to All Orders of the Riemann Zeta Function and of a Two-Parameter Generalization of the Riemann Zeta Function, Mem. Amer. Math. Soc. (to appear).
[3] A.S. Fokas, A Novel Approach to the Lindelof Hypothesis, Transactions of Mathematics and its Applications (to appear).
[4] L. Blanchet and A.S. Fokas, Equations of Motion of Self-Gravitating N-Body Systems in the First Post-Minkowskian Approximation, Phys. Rev. D 98, 084005 (2018).
[5] A.S. Fokas, Super Relativistic Gravity has Properties Associated with the Strong Force, Eur. Phys. J. C (to appear).
4/8/2019 Yoosik Kim (Boston University) Title: String polytopes and Gelfand-Cetlin polytopes
Abstract: The string polytope was introduced by Littelmann and Berenstein–Zelevinsky as a generalization of the Gelfand-Cetlin polytope in representation theory. For a connected reductive algebraic group $G$ over $\mathbb{C}$ and a dominant integral weight $\lambda$, a choice of a reduced word of the longest element in the Weyl group of G determines a string polytope. Depending on a reduced word of the longest element in the Weyl group, combinatorially distinct string polytopes arise in general. In this talk, I will explain how to classify the string polytopes that are unimodularly equivalent to Gelfand-Cetlin polytopes when $G = \mathrm{SL}_{n+1}(\mathbb{C})$ and $\lambda$ is a regular dominant integral weight. Also, I will explain a conjectural way obtaining SYZ mirrors respecting a cluster structure invented by Fomin–Zelevinsky. This talk is based on joint work with Yunhyung Cho, Eunjeong Lee, and Kyeong-Dong Park.
Room G02 Junliang Shen (MIT) Title: Perverse sheaves in hyper-Kähler geometry
Abstract: I will discuss the role played by perverse sheaves in the study of topology and geometry of hyper-Kähler manifolds. Motivated by the P=W conjecture, we establish a connection between topology of Lagrangian fibrations and Hodge theory using perverse filtrations. Our method gives new structural results for topology of Lagrangian fibrations associated with hyper-Kähler varieties. If time permits, I will also discuss connections to enumerative geometry of Calabi-Yau 3-folds. Based on joint work with Qizheng Yin.
4/22/2019 Yang Zhou (CMSA) Title: Quasimap wall-crossing for GIT quotients
Abstract: For a large class of GIT quotients X=W//G, Ciocan-Fontanine–Kim–Maulik have developed the theory of epsilon-stable quasimap invariants. They are conjecturally equivalent to the Gromov–Witten invariants of X via explicit wall-crossing formulae, which have been proved in many cases, including targets with good torus action and complete intersections in a product of projective spaces. In this talk, we will give a proof for all targets in all genera. The main ingredient is the construction of some moduli space with C^* action whose fixed-point loci precisely correspond to the terms in the wall-crossing formulae.
Room G02 Zili Zhang (University of Michigan) Title: P=W, a strange identity for Dynkin diagrams
Abstract: Start with a compact Riemann surface X with marked points and a complex reductive group G. According to Hitchin-Simpson's nonabelian Hodge theory, the pair (X,G) comes with two new complex varieties: the character variety M_B and the Higgs moduli M_D. I will present some aspects of this story and discuss an identity P=W indexed by affine Dynkin diagrams – occurring in the singular cohomology groups of M_D and M_B, where P and W dwell. Based on joint work with Junliang Shen.
5/6/2019 Dennis Borisov (CMSA)
Title: Global shifted potentials for -2-shifted symplectic structures
Abstract: I will explain the notion of shifted symplectic structures due to Pantev, Toen, Vaquie and Vezzosi, and then show that a derived scheme with a -2-shifted symplectic structure can be realized as the critical locus of a globally defined -1-shifted potential.
Joint work with Artan Sheshmani
Increased toll-like receptors and p53 levels regulate apoptosis and angiogenesis in non-muscle invasive bladder cancer: mechanism of action of P-MAPA biological response modifier
Patrick Vianna Garcia1,
Fábio Rodrigues Ferreira Seiva2,
Amanda Pocol Carniato1,
Wilson de Mello Júnior3,
Nelson Duran4,5,
Alda Maria Macedo4,
Alexandre Gabarra de Oliveira6,7,
Rok Romih8,
Iseu da Silva Nunes4,
Odilon da Silva Nunes4 &
Wagner José Fávaro1,4,5
New modalities for treating patients with non-muscle invasive bladder cancer (NMIBC) in whom BCG (Bacillus Calmette-Guerin) has failed or is contraindicated are increasing due to the development of new drugs. Although agents like mitomycin C and BCG are routinely used, there is a need for more potent and/or less-toxic agents. In this scenario, a new perspective is represented by P-MAPA (Protein Aggregate Magnesium-Ammonium Phospholinoleate-Palmitoleate Anhydride), developed by Farmabrasilis (a non-profit research network). This study detailed and characterized the mechanisms of action of P-MAPA based on activation of mediators of the Toll-like Receptor (TLR) 2 and 4 signaling pathways and of p53 in regulating angiogenesis and apoptosis in an animal model of NMIBC, and compared these mechanisms with BCG treatment.
Our results demonstrated that the activation of the immune system by BCG (MyD88-dependent pathway) resulted in increased inflammatory cytokines. However, P-MAPA intravesical immunotherapy led to distinct activation of the TLR 2- and 4-mediated innate immune system, resulting in increased interferon signaling pathway activity (TRIF-dependent pathway), which was more effective in the NMIBC treatment. Interferon signaling pathway activation induced by P-MAPA led to an increase of iNOS protein levels, resulting in apoptosis and histopathological recovery. Additionally, P-MAPA immunotherapy increased wild-type p53 protein levels. The increased wild-type p53 protein levels were fundamental to NO-induced apoptosis and the up-regulation of BAX. Furthermore, interferon signaling pathway induction and increased p53 protein levels by P-MAPA led to important antitumor effects, not only by suppressing abnormal cell proliferation, but also by preventing continuous expansion of the tumor mass through suppression of angiogenesis, which was characterized by decreased VEGF and increased endostatin protein levels.
Thus, P-MAPA immunotherapy could be considered an important therapeutic strategy for NMIBC, and it opens a new perspective for the treatment of patients who are refractory or resistant to BCG intravesical therapy.
Bladder cancer (BC) is the fourth most common tumor in men and the ninth in women, showing high morbidity and mortality rates [1, 2]. More than 70 % of BC is superficial (non-muscle invasive bladder cancer) and classified into 3 stages: pTis (flat carcinoma in situ), pTa (non-invasive papillary carcinoma) and pT1 (tumor invading the mucosa or submucosa of the bladder wall) [3, 4]. Despite the relatively favourable prognosis associated with non-muscle invasive bladder tumours, almost 50 % of patients will experience recurrence of their disease within 4 years of their initial diagnosis, and 11 % will progress to muscle invasive disease [3].
The primary treatment for high-grade NMIBC is based on surgery by transurethral resection of the bladder tumor (TURBT), followed by intravesical immunotherapy with Bacillus Calmette–Guerin (BCG) [5]. The response induced by BCG reflects induction of a T-helper type-1 (Th1) response to prevent recurrence and to reduce tumor progression [5–7]. However, BCG therapy shows several undesirable effects that are observed in up to 90 % of patients, such as fever, chills, fatigue, irritative symptoms and haematuria, and even major complications such as sepsis and death [8, 9].
Based on this background, compounds activating the immune system, including vaccines, biological response modifiers and tumor environment modulators, are considered potential candidates for the development of new NMIBC treatments aiming to obtain greater therapeutic effect combined with lower toxicity. Toll-like receptor (TLR) agonist compounds may represent a potential antitumor therapeutic approach, as these receptors are implicated in the pathogenesis of some tumors, including NMIBC [10–12]. TLRs play key roles in innate immunity and their activation can trigger two different responses in tumors: they stimulate the immune system to attack tumor cells and/or eliminate the machinery that inhibits the immune system [13–15]. TLR signaling consists of two pathways: the MyD88-dependent (canonical) and the TRIF-dependent (non-canonical) pathway [13–15]. Except for TLR3, all TLRs use the MyD88-dependent pathway, which activates NF-kB and MAPK, resulting in the release of inflammatory cytokines such as Tumor Necrosis Factor α (TNF-α) and interleukin-6 (IL-6) [13, 14]. Conversely, the TRIF-dependent pathway activates Interferon Regulatory Factor 3 (IRF-3) for the production of interferon [13–15]. TLR4 is the only receptor that uses all four adapter molecules (MyD88, TRIF, TRAM and TIRAP) in a signal cascade [13–15].
Most TLR genes respond to p53 via canonical as well as non-canonical promoter binding sites [16]. The p53 protein is responsible for cell cycle regulation, and it acts as a tumor suppressor [16, 17]. Studies of response element promoter sequences targeted by p53 suggest a general role for p53 as a regulator of the DNA damage response and as a controller of TLR gene expression [16]. Furthermore, several studies suggested that antiangiogenic therapy is sensitive to p53 status in tumors, indicating an important role of p53 in the regulation of angiogenesis [18, 19].
Angiogenesis plays a fundamental role in the initiation and progression of different tumors [20]. Vascular endothelial growth factor (VEGF) stimulates all aspects of endothelial function, such as proliferation, migration, production of nitric oxide (NO) and endothelial cell layer permeability [18, 20–22]. Angiogenesis inhibitors have been developed to target endothelial cells and block the tumor blood supply [18, 23]. Endostatin is a potent endogenous inhibitor of angiogenesis and induces apoptosis in both endothelial cells and tumor cells [18, 19, 24].
Immunotherapy using compounds that act as TLR agonists could be a valuable approach for cancer treatment, whether used alone or in combination with existing therapies. Protein aggregate magnesium-ammonium phospholinoleate-palmitoleate anhydride (P-MAPA), a biopolymer isolated in the 1970s [25] and characterized in the 1990s [26–28], currently under development by Farmabrasilis (a nonprofit research network) [29], has emerged as a potential candidate for intravesical therapy for NMIBC. P-MAPA is a biological response modifier obtained by fermentation from Aspergillus oryzae that demonstrates important antitumor effects in several animal models of cancer, including NMIBC [11, 12, 26–28]. Recent studies by our research group demonstrated that P-MAPA modulates TLRs 2 and 4 in both infectious diseases and cancer [11, 12, 30].
The strategy of research and development of the drug P-MAPA is based on the concept of an open source model, with the researchers linked by a virtual research network [29]. A complementary strategy adopted by Farmabrasilis, which aims to boost the production of data to accelerate the development of the compound as a drug candidate for cancer, including NMIBC, involves the selection of compounds already in clinical use and, when available, compounds equally able to act together with P-MAPA, such as BCG, used in parallel or in conjunction in in vivo experiments. The use of immunomodulatory compounds already known to act against NMIBC and with partially elucidated mechanisms of action, such as BCG, in comparative studies with P-MAPA using the same animal model, may facilitate the visualization of commonalities, as well as differences, in the mechanisms of action. Of note, these data may also be relevant to understanding the mode of action of P-MAPA, aiming at the elaboration of new strategies for the future use of the compound in the treatment of some conditions that emerge in the treatment of NMIBC, such as BCG-refractory and BCG-relapsing diseases.
Thus, this study presents the first comprehensive view of the mechanisms of a potential therapeutic agent for NMIBC, the P-MAPA biological response modifier, based on activation of mediators of the TLR 2, TLR 4 and p53 signaling pathways in regulating the angiogenesis and apoptosis processes.
NMIBC induction and treatment
Forty female Fischer 344 rats, all 7 weeks old, were obtained from the Multidisciplinary Center for Biological Investigation (CEMIB) at the University of Campinas (UNICAMP). The experimental protocol strictly followed the ethical principles of animal research (CEUA/IB/UNICAMP–protocol number: 2684-1). Before each treatment delivered by intravesical catheterisation via a 22-gauge angiocatheter, animals were anesthetized with 10 % ketamine (60 mg/kg, i.m.; Ceva Animal Health Ltda, São Paulo, Brazil) and 2 % xylazine (5 mg/kg, i.m.; Ceva Animal Health Ltda, São Paulo, Brazil). The animals remained anesthetized for approximately 45 min after catheterization to prevent spontaneous micturition. Ten control animals (CONTROL group) received 0.30 ml of 0.9 % physiological saline every other week for 14 weeks. Thirty animals received 1.5 mg/kg of n-methyl-n-nitrosourea (MNU) dissolved in 0.30 mL of sodium citrate (1 M, pH 6.0) intravesically every other week for 8 weeks [11, 12]. Two weeks after the last dose of MNU, all animals were submitted to retrograde cystography and ultrasonography to evaluate the occurrence of tumor. Both negative and positive contrast cystography enabled the bladder wall, mucosal margin and lumen to be visualised. For positive or negative contrast cystography, animals were submitted to intravesical catheterisation via a 22-gauge angiocatheter to drain all the urine from the bladder; 0.3 mL of positive contrast medium or 0.3 mL of air (negative contrast) was then instilled into the bladder until it became slightly turgid (judged by palpation of the bladder through the abdominal wall), and lateral and ventrodorsal radiographs were taken.
The ultrasounds were evaluated using a portable, software-controlled ultrasound system with a 10–5 MHz 38-mm linear array transducer.
The animals from the CONTROL group showed no mass infiltrating the bladder walls, and there was neither vesicoureteral reflux nor bladder filling defect (Fig. 1a, b, c and d).
a–h Retrograde cystography and ultrasonography from CONTROL (a, b, c, d) and MNU (e, f, g, h) groups. Cystography without contrast (a), negative (b) and positive (c) contrast cystographies, and ultrasounds (d) showed no mass infiltrating the bladder walls, as well as, there were no vesicoureteral reflux and neither bladder filling defect. Cystography without contrast (e) and negative contrast cystography (f) showed a mass infiltrating the ventral, dorsal and cranial bladder walls (asterisks). Positive contrast cystography (g) demonstrated several bladder filling defects and vesicoureteral reflux unilateral (arrows). Ultrasound showed tumor (asterisk) infiltrating the bladder walls, tumor size: 1–3,9 mm, 2–5,5 mm
Negative contrast cystography and ultrasonography of the urinary bladder from the MNU group showed a mass (average tumor size 3.5 × 5.1 mm) infiltrating the ventral, dorsal and cranial bladder walls (Fig. 1e, f and h). Positive contrast cystography demonstrated several bladder filling defects and unilateral vesicoureteral reflux (Fig. 1g) in 80 % of animals and bilateral reflux in 10 % of animals.
MNU-treated animals were further divided into three groups (ten animals per group): the MNU group received 0.30 ml of 0.9 % physiological saline; the MNU-BCG group received 10^6 CFU (40 mg) of BCG (Fundação Ataulpho de Paiva, Rio de Janeiro, RJ, Brazil); the MNU-P-MAPA group received a 5 mg/kg dose of P-MAPA (Farmabrasilis, Campinas, SP, Brazil). All animals were treated every other week for 6 weeks. After the treatment, the animals were euthanized and their urinary bladders were collected and processed for histopathological, immunological and Western blotting analysis.
Histopathological analysis
Samples of urinary bladders (n = 5 per group) were fixed in Bouin solution for 12 h. After fixation, the fragments were washed in 70 % ethanol and dehydrated in an ascending series of alcohols. Subsequently, the fragments were diaphanized in xylene for 2 h and embedded in the plastic polymer (Paraplast Plus, St. Louis, MO, USA). The samples were then cut on a rotary microtome Slee CUT5062 RM 2165 (Slee Mainz, Mainz, Germany) into 5 μm thick sections, stained with hematoxylin-eosin and photographed with a Leica DM2500 photomicroscope (Leica, Munich, Germany). A senior uropathologist analyzed the urinary bladder lesions according to the World Health Organization/International Society of Urological Pathology (WHO/ISUP) classification [4].
Immunohistochemistry of toll-like receptor signaling pathway: (TLR2, TLR4, MyD88, IRF-3, IKK-α, BAX, NF-kB, iNOS, TNF-α, TRIF, IFN-γ, IL-6) and proliferation (Ki-67) in NMIBC
The same samples as for histopathological analysis were used for immunolabelings. They were cut into 6 μm thick sections and antigen retrieval was performed either by different protocols. Following that, the sections were incubated in 0.3 % H2O2 to block endogenous peroxidase, and nonspecific binding was blocked by incubating the sections in blocking solution at room temperature. The primary antibodies were: rabbit polyclonal anti-TLR2 (251110, Abbiotec, San Diego, USA; 1:100), rabbit polyclonal anti-TLR4 (251111, Abbiotec, San Diego, USA; 1:100), rabbit polyclonal anti-MyD88 (ab2064; 1:75), rabbit polyclonal anti-IRF-3 (ab25950; 1:150), rabbit polyclonal anti-IKK-α (ab38515; 1:100), rabbit polyclonal anti-BAX (ab7977; 1:50), rabbit polyclonal anti-NF-kB (ab7970; 1:200), rabbit polyclonal anti-iNOS (ab15323; 1:75), rabbit polyclonal anti-TNF-α (ab6671; 1:150), rabbit polyclonal anti-TRIF (ab13810; 1:100), rabbit polyclonal anti-IL-6 (ab6672; all the above from Abcam, USA), mouse monoclonal anti-IFN-γ (507802, Biolegend, USA;1:50) and mouse monoclonal anti- Ki-67 (NCL-Ki67-MM1, Novocastra; Newcastle, United Kingdom; 1:50). Antibodies were diluted in 1 % BSA and applied to the sections overnight at 4 °C. Bound antibodies were detected with an AdvanceTM HRP kit (Dako Cytomation Inc., USA). Sections were lightly counterstained with Harris' hematoxylin and photographed with a photomicroscope (DM2500 Leica, Munich, Germany).
The immunohistochemical reactions were measured in five animals in each experimental group, the same samples as for histopathological analysis. Ten microscopic fields per animal were measured with a 40× objective lens and corresponded to a total area of 92,500.8 μm². TLR2, TLR4, MyD88, IRF-3, IKK-α, BAX, NF-kB, iNOS, TNF-α, TRIF, IFN-γ and IL-6 antibodies were scored semiquantitatively by recording the percentage of urothelial cells only. At least 1,000 urothelial cells for each group (200 urothelial cells per animal) were counted with the software LAS V 3.7 (Leica, Munich, Germany) while the examiner classified them as positive or negative. Thus, the percentage of labeled cells (PLC) was determined according to the following equation:
$$ \mathrm{PLC} = \frac{\text{number of labelled cells}}{\text{total counted cells}} \times 100 \quad (\text{expressed in } \%) $$
The PLC values were categorized into four scores as follows: 0, no immunoreactivity; 1, 1–35 % positive urothelial cells; 2, 36–70 % positive urothelial cells; 3, > 70 % positive urothelial cells. The software LAS V 3.7 (Leica, Munich, Germany) was used to quantify the intensity of brownish-color immunostaining. For each antibody, the same photomicrographs used for determining the PLC were considered. Ten randomized labeled nuclear and/or cytoplasmic regions from different urothelial cells were indicated, with the same-sized square (software LAS V 3.7). The average optical density (OD) of these areas was automatically calculated and represents the average of red, green, and blue color composition (RGB) per area of nucleus and/or cytoplasm analyzed, expressed in optical units per micrometer squared (ou/μm2). The same procedure was applied to obtain the background optical density (BOD) from an area without tissue or vascular space for each photomicrograph. A single area was enough, since the background was constant in each photomicrograph. The absolute white colour that corresponds to the maximum optical density (MaxOD) was composed by the totality of red, green, and blue; and black was the absence of these colors. Therefore, the optical density values calculated by the software make up a decreasing scale in which the high values correspond to the colours that are visually clear.
The equation below was used to calculate the digital immunostaining intensity (ITIdig) for each antibody, whose values make up an increasing scale, equalized by the BOD, proportionally to the optical density of absolute white:
$$ \mathrm{ITI}_{\mathrm{dig}} = \mathrm{MaxOD} - \mathrm{MaxOD} \times \frac{\sum \mathrm{OD}}{\sum \mathrm{BOD}} \quad (\text{expressed in ou}/\mu\text{m}^2) $$
The intensity of reactivity was recorded as: weak (1+, ITIdig average = 49.3 ou/μm2), moderate (2+, ITIdig average = 71.3 ou/μm2) and intense (3+, ITIdig average = 95.1 ou/μm2).
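A minimal sketch of the ITIdig calculation is given below, assuming 8-bit images so that the optical density of absolute white (MaxOD) is 255; this value and the function names are our own assumptions and are not stated in the original protocol:

import numpy as np

def iti_dig(od_values, bod_values, max_od=255.0):
    # ITIdig = MaxOD - MaxOD * sum(OD) / sum(BOD), expressed in ou/um^2
    # od_values: optical densities of the labelled nuclear/cytoplasmic regions
    # bod_values: background optical densities (one per photomicrograph)
    od = np.asarray(od_values, dtype=float)
    bod = np.asarray(bod_values, dtype=float)
    return max_od - max_od * od.sum() / bod.sum()

# Hypothetical example: ten labelled regions and the matching background readings
print(iti_dig([180, 175, 170, 182, 178, 176, 181, 179, 174, 177], [240] * 10))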
Western blotting analysis of the toll-like receptor signaling pathway and angiogenesis: TLR2, MyD88, IKK-α, NF-kB, TNF-α, IL-6, TLR4, TRIF, IRF-3, IFN-γ, iNOS, p53, vascular endothelial growth factor (VEGF), endostatin, BAX and NOD-like receptor 5 (NLRC5) in NMIBC
Samples of the urinary bladders (n = 5 per group) were weighed (average 200 mg) and homogenized in 50 μl/mg of RIPA lysis buffer (EMD Millipore Corporation, Billerica, MA, USA). Aliquots containing 70 μg of protein were separated by SDS-PAGE on 10 % or 12 % polyacrylamide gels under reducing conditions. After electrophoresis, the proteins were transferred to Hybond-ECL nitrocellulose membranes (Amersham, Pharmacia Biotech, Arlington Heights, IL, USA). The membranes were blocked with TBS-T containing 1 % BSA (bovine serum albumin) and incubated overnight at 4 °C with primary rabbit polyclonal anti-TLR2 (ab13855; Abcam, USA), rabbit polyclonal anti-MyD88 (ab2064; Abcam, USA), rabbit polyclonal anti-IKK-α (ab38515; Abcam, USA), rabbit polyclonal anti-NF-kB (ab7970; Abcam, USA), rabbit polyclonal anti-TNF-α (ab6671; Abcam, USA), rabbit polyclonal anti-IL-6 (ab6672; Abcam, USA), mouse monoclonal anti-TLR4 (ab30667; Abcam, USA), rabbit polyclonal anti-TRIF (ab13810; Abcam, USA), rabbit polyclonal anti-IRF-3 (ab25950; Abcam, USA), mouse monoclonal anti-IFN-γ (507802; Biolegend, USA), rabbit polyclonal anti-iNOS (ab15323; Abcam, USA), mouse monoclonal anti-p53 (ab26; Abcam, USA), mouse monoclonal anti-VEGF (sc-53462; Santa Cruz Biotechnology, USA), mouse monoclonal anti-Endostatin (ab64569; Abcam, USA), rabbit polyclonal anti-BAX (ab7977; Abcam, USA) and rabbit polyclonal anti-NLRC5 (ab105411; Abcam, USA) antibodies, all diluted in 1 % BSA. The membranes were then incubated for 2 h with rabbit or mouse secondary HRP-conjugated antibodies (diluted 1:3,000 in 1 % BSA; Santa Cruz Biotechnology, Inc., Santa Cruz, CA, USA). Peroxidase activity was detected by incubation with a diaminobenzidine chromogen (Sigma Chemical Co., St Louis, USA). Western blots were run in duplicate, and urinary bladder samples were pooled from 5 animals per group for each repetition. The semi-quantitative densitometry (IOD – Integrated Optical Density) analysis of the bands was conducted using NIH ImageJ 1.47v software (National Institutes of Health, USA; available at http://rsb.info.nih.gov/ij/), followed by statistical analysis. β-actin was used as the endogenous positive control for standardization of the readings of band staining intensity. The results were expressed as the mean ± standard deviation of the ratio of each band's intensity to the β-actin band intensity [12].
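As an illustration of the densitometric normalization described above (the numerical values below are hypothetical; the actual band IODs were obtained with ImageJ), the ratio of each band's IOD to the β-actin IOD and its mean ± standard deviation could be computed as:

import numpy as np

def normalized_expression(band_iod, beta_actin_iod):
    # Ratio of each band's integrated optical density (IOD) to the
    # beta-actin IOD from the same repetition
    ratios = np.asarray(band_iod, dtype=float) / np.asarray(beta_actin_iod, dtype=float)
    return ratios.mean(), ratios.std(ddof=1)  # mean and sample standard deviation

# Hypothetical duplicate readings for one protein in one experimental group
mean_ratio, sd_ratio = normalized_expression([1250.0, 1312.0], [980.0, 1005.0])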
Determination of the proliferative index
Samples of the urinary bladders were randomly collected from 5 animals in each group, the same used for Ki-67 immunodetection and histopathology, and used for determination of the proliferative index. Ten fields per animal were taken at random and measured with a ×40 objective lens, resulting in 50 fields per group, and the number of Ki-67-positive cells was expressed as a percentage of the total cells counted, including luminal and basal epithelial cells. Sections were lightly counterstained with methyl green.
Detection of apoptosis and determination of the apoptotic index
Samples of the urinary bladders from five animals in each group, the same used for immunodetection and histopathology, were processed for DNA fragmentation (TUNEL) by means of Terminal Deoxynucleotidyl Transferase (TdT), using the Kit FragEL™ DNA (Calbiochem, La Jolla, CA, USA). The apoptotic nuclei were identified using a diaminobenzidine chromogen mixture (Kit FragEL™ DNA). Ten microscopic fields were randomly taken and analyzed per sample, resulting in 50 fields per group, using a Leica DM2500 (Leica, Munich, Germany) photomicroscope with a × 40 objective. Sections were lightly counterstained with methyl green. The apoptotic index was determined by dividing the number of apoptotic nuclei by the total number of nuclei found in the microscope field.
Western blotting data, the proliferative and apoptotic indexes and the proliferation/apoptosis ratio (P/A) were statistically compared among the groups by analysis of variance followed by Tukey's test, with the level of significance set at 1 %. Results were expressed as the mean ± standard deviation. Histopathological changes were evaluated with the test of proportions; for these analyses, a type-I error of 5 % was considered statistically significant.
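A minimal sketch of this statistical comparison (one-way analysis of variance followed by Tukey's post hoc test at the 1 % level) using standard Python libraries is shown below; the index values are hypothetical and serve only to illustrate the procedure:

import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical proliferative-index values (one per animal) for each group
control   = np.array([2.1, 1.8, 2.4, 2.0, 1.9])
mnu       = np.array([9.5, 10.2, 8.8, 9.9, 10.4])
mnu_bcg   = np.array([6.1, 5.8, 6.5, 6.0, 6.3])
mnu_pmapa = np.array([4.2, 4.0, 4.5, 4.1, 4.3])

f_stat, p_value = f_oneway(control, mnu, mnu_bcg, mnu_pmapa)  # one-way ANOVA

values = np.concatenate([control, mnu, mnu_bcg, mnu_pmapa])
groups = ["CONTROL"] * 5 + ["MNU"] * 5 + ["MNU-BCG"] * 5 + ["MNU-P-MAPA"] * 5
print(pairwise_tukeyhsd(values, groups, alpha=0.01))  # Tukey's post hoc test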
Taking into account the present data, the mechanism of action of P-MAPA was clearly distinct from that of BCG. These findings are relevant to the treatment of patients with NMIBC at high risk of progression who are refractory or resistant to intravesical BCG therapy.
P-MAPA reverses the histopathological changes induced by MNU
The urinary tract from the CONTROL group showed no microscopic changes (Fig. 2a, b and c; Additional file 1: Table S1). The normal urothelium was composed of three layers: a basal cell layer, an intermediate cell layer, and a superficial layer composed of umbrella cells (Fig. 2a, b, c).
a–l Photomicrographs of the urinary bladder from CONTROL (a, b, c), MNU (d, e, f), MNU-BCG (g, h, i) and MNU-P-MAPA (j, k, l) groups. a, b, c, j and k Normal urothelium composed of 2–3 layers: a basal cell layer (arrowhead), an intermediate cell layer (arrow), and a superficial or apical layer composed of umbrella cells (open arrowhead). d, e and f pT1: neoplastic cells arranged in small groups (arrows) invading the lamina propria; keratinizing squamous metaplasia (Sm). g, h and i pTa characterized by fibrovascular stalk and frequent papillary branching with increased cellular size. l Papillary hyperplasia. a–l Lp lamina propria, M muscular layer, Ur urothelium
In contrast, the urinary bladders from the MNU group showed histopathological changes such as tumor invading the mucosa or submucosa of the bladder wall (pT1) (Fig. 2d, e and f), non-invasive papillary carcinoma (pTa) and flat carcinoma in situ (pTis) in 40, 40 and 20 % of the animals, respectively (Additional file 1: Table S1). Keratinizing squamous metaplasia was found in 60 % of the animals (Fig. 2d and e).
The most frequent histopathological changes in the urinary bladder from the MNU-BCG group were pTa (Fig. 2g, h and i; Additional file 1: Table S1), low-grade intraurothelial neoplasia and papillary hyperplasia, in 40, 40 and 20 % of the animals, respectively (Additional file 1: Table S1).
The microscopic features of the urinary bladders from the MNU-P-MAPA group were similar to those found in the CONTROL group (Fig. 2j, k and l). Normal urothelium was found in 60 % of the animals (Fig. 2j and k; Additional file 1: Table S1). The histopathological changes in the MNU-P-MAPA group were flat hyperplasia (20 %) and papillary hyperplasia (20 %) (Fig. 2l; Additional file 1: Table S1).
Urinary calculi and macroscopic haematuria were only observed in the MNU and MNU-BCG groups; they were absent in the MNU-P-MAPA group.
BCG activates MyD88-dependent pathway
The highest TLR2 protein levels were found in the MNU-P-MAPA group as compared to the CONTROL, MNU-BCG and MNU groups, showing intense immunoreactivities in the urothelium (Figs. 3a, g, m, s and 4; Additional file 2: Table S2).
Immunolabelled antigen intensities of the urinary bladder from the CONTROL (a, b, c, d, e, f), MNU (g, h, i, j, k, l), MNU-BCG (m, n, o, p, q, r), and MNU-P-MAPA (s, t, u, v, w, x) groups. TLR2 immunoreactivities (asterisks) were moderate in the urothelium from the CONTROL (a) group, weak in the MNU (g) group and intense in the MNU-BCG (m) and MNU-P-MAPA (s) groups. MyD88 immunoreactivities (asterisks) were moderate in the urothelium from the CONTROL (b) group, weak in the MNU (h) group and intense in the MNU-BCG (n) and MNU-P-MAPA (t) groups. IKK-α immunoreactivities (arrows) were weak in the urothelium from the CONTROL (c) group, moderate in the MNU (i) group, intense in the MNU-BCG group (o) and weak in the MNU-P-MAPA (u) group. NF-kB immunoreactivities (arrows) were weak in the cytoplasm of the urothelial cells from the CONTROL (d) group, intense in the nucleus and cytoplasm of the urothelial cells from the MNU (j) group, moderate in the nucleus and cytoplasm of the urothelial cells from the MNU-BCG (p) group and weak in the cytoplasm of the urothelial cells from the MNU-P-MAPA (v) group. TNF-α immunoreactivities (asterisks) were weak in the urothelium from the CONTROL (e) group, intense in the MNU (k) and MNU-BCG (q) groups and weak in the MNU-P-MAPA (w) group. IL-6 immunoreactivities (asterisks) were weak in the urothelium from the CONTROL (f) group, intense in the MNU (l) and MNU-BCG (r) groups and weak in the MNU-P-MAPA (x) group. a–x Ur urothelium
Representative Western Blotting and semiquantitative determination for TLR2, MyD88, IKK-α, NF-kB, TNF-α and IL-6 protein levels. Samples of urinary bladder were pooled from five animals per group for each repetition (duplicate) and used for semi-quantitative densitometry (IOD – Integrated Optical Density) analysis of the TLR2, MyD88, IKK-α, NF-kB, TNF-α and IL-6 levels following normalization to the β-actin. All data were expressed as the mean ± standard deviation. Different lowercase letters (a, b, c, d) indicate significant differences (p <0.01) between the groups after Tukey's test
The highest MyD88 protein levels were found in the MNU-BCG and MNU-P-MAPA groups as compared to the other experimental groups. These groups showed intense immunoreactivities in the urothelium (Figs. 3b, h, n, t and 4; Additional file 2: Table S2). However, MyD88 levels were significantly higher in the CONTROL group than in the MNU group; these groups exhibited moderate and weak immunoreactivities, respectively (Figs. 3b, h, n, t and 4; Additional file 2: Table S2).
IKK-α protein levels were significantly higher in the MNU-BCG group in relation to the MNU, MNU-P-MAPA and CONTROL groups, which showed intense, moderate, weak and weak immunoreactivities in the urothelium, respectively (Figs. 3c, i, o, u and 4; Additional file 2: Table S2).
The highest NF-kB protein levels were found in the MNU group as compared to the MNU-BCG, CONTROL and MNU-P-MAPA groups (Fig. 4). The NF-kB immunoreactivities were weak in the cytoplasm of the urothelial cells from the CONTROL group, intense in both nucleus and cytoplasm of the urothelial cells from the MNU group, moderate in both nucleus and cytoplasm of the urothelial cells from the MNU-BCG group, and weak in the cytoplasm of the urothelial cells from the MNU-P-MAPA group (Figs. 3d, j, p and v; Additional file 2: Table S2).
TNF-α protein levels were significantly higher in the MNU-BCG group than in all other experimental groups, exhibiting intense immunoreactivities in the urothelium (Figs. 3e, k, q, w and 4; Additional file 2: Table S2). However, these levels were significantly higher in the MNU-P-MAPA and MNU groups in relation to the CONTROL group, which showed weak, intense and weak immunoreactivities, respectively (Fig. 3e, k, q, w and 4; Additional file 2: Table S2).
IL-6 protein levels were significantly higher in the MNU-BCG and MNU groups in relation to the MNU-P-MAPA and CONTROL groups. These groups displayed intense, intense, weak and weak immunoreactivities in the urothelium, respectively (Figs. 3f, l, r, x and 4; Additional file 2: Table S2).
P-MAPA intravesical immunotherapy activates interferon signaling pathway and increases iNOS levels
TLR4 protein levels were significantly higher in the MNU-P-MAPA group in relation to the other experimental groups. This group exhibited intense immunoreactivity in the urothelium (Figs. 5a, g, m, s and 6; Additional file 2: Table S2). However, these levels were also significantly higher in the CONTROL and MNU-BCG groups than in the MNU group. These three groups showed moderate, intense and weak immunoreactivities, respectively (Figs. 5a, g, m, s and 6; Additional file 2: Table S2).
Immunolabelled antigen intensities of the urinary bladder from the CONTROL (a, b, c, d, e, f), MNU (g, h, i, j, k, l), MNU-BCG (m, n, o, p, q, r), and MNU-P-MAPA (s, t, u, v, w, x) groups. TLR4 immunoreactivities (asterisks) were moderate in the urothelium from the CONTROL group (a), weak in the MNU group (g) and intense in the MNU-BCG (m) and MNU-P-MAPA (s) groups. TRIF immunoreactivities (asterisks) were weak in the urothelium from the CONTROL (b) and MNU (h) groups, moderate in the MNU-BCG (n) group and intense in the MNU-P-MAPA (t) group. IRF-3 immunoreactivities (arrows) were weak in the urothelium from the CONTROL (c) and MNU (i) groups, moderate in the MNU-BCG (o) group and intense in the MNU-P-MAPA (u) group. IFN-γ immunoreactivities (arrows) were weak in the urothelium from the CONTROL (d) and MNU (j) groups, moderate in the MNU-BCG (p) group and intense in the MNU-P-MAPA (v) group. iNOS immunoreactivities (asterisks) were weak in the urothelium from the CONTROL (e) and MNU (k) groups, moderate in the MNU-BCG (q) group and intense in the MNU-P-MAPA (w) group. BAX immunoreactivities (asterisks) were weak in the urothelium from the CONTROL (f) group, moderate in the MNU (l) and MNU-BCG (r) groups and intense in the MNU-P-MAPA (x) group. a–x Ur urothelium
Representative Western Blotting and semiquantitative determination for TLR4, TRIF, IRF-3, IFN-γ, iNOS, and p53 protein levels. Samples of urinary bladder were pooled from five animals per group for each repetition (duplicate) and used for semi-quantitative densitometry (IOD – Integrated Optical Density) analysis of the TLR4, TRIF, IRF-3, IFN-γ, iNOS, and p53 levels following normalization to the β-actin. All data were expressed as the mean ± standard deviation. Different lowercase letters (a, b, c, d) indicate significant differences (p <0.01) between the groups after Tukey's test
TRIF protein levels were significantly higher in the MNU-P-MAPA group in relation to the other experimental groups; this group showed intense immunoreactivity in the urothelium (Figs. 5b, h, n, t and 6; Additional file 2: Table S2). However, TRIF levels were higher in the MNU-BCG and MNU groups than in the CONTROL group. These three groups exhibited moderate, weak and weak immunoreactivities, respectively (Figs. 5b, h, n, t and 6; Additional file 2: Table S2).
Protein levels for IRF-3 were significantly higher in the MNU-BCG and MNU-P-MAPA groups in relation to the CONTROL and MNU groups. These groups showed moderate, intense, weak and weak immunoreactivities in the urothelium, respectively (Figs. 5c, i, o, u and 6; Additional file 2: Table S2).
The highest IFN-γ protein levels were found in the MNU-P-MAPA group compared to the MNU-BCG, MNU and CONTROL groups. These groups exhibited intense, moderate, weak and weak immunoreactivities in the urothelium, respectively (Figs. 5d, j, p, v and 6; Additional file 2: Table S2).
iNOS protein levels were significantly higher in the MNU-P-MAPA and MNU-BCG groups than in the MNU and CONTROL groups. These groups showed intense, moderate, weak and weak immunoreactivities in the urothelium, respectively (Figs. 5e, k, q, w and 6; Additional file 2: Table S2).
NLRC5 protein levels were significantly higher in the MNU-P-MAPA group in relation to the other experimental groups (Fig. 7). Furthermore, these levels were significantly higher in the CONTROL and MNU-BCG groups than in the MNU group (Fig. 7).
Representative Western Blotting and semiquantitative determination for VEGF, Endostatin, BAX and NLRC5 protein levels. Samples of urinary bladder were pooled from five animals per group for each repetition (duplicate) and used for semi-quantitative densitometry (IOD – Integrated Optical Density) analysis of the VEGF, Endostatin, BAX and NLRC5 levels following normalization to the β-actin. All data were expressed as the mean ± standard deviation. Different lowercase letters (a, b, c, d) indicate significant differences (p <0.01) between the groups after Tukey's test
P-MAPA immunotherapy increases wild-type p53 protein levels, decreases proliferation and increases apoptosis
p53 protein levels were significantly higher in the MNU-P-MAPA and CONTROL groups in relation to the other experimental groups (Fig. 6). Furthermore, these levels were significantly higher in the MNU-BCG group in comparison to the MNU group (Fig. 6).
The apoptotic index revealed different kinetics of cell death for each treatment (Additional file 3: Figures S1a, S1c, S1e, S1g; Fig. 8). This index was significantly higher in the animals from the MNU-P-MAPA group in relation to the other experimental groups. The MNU and MNU-BCG groups, in turn, showed significantly higher average values of the apoptotic index than the CONTROL group (Additional file 3: Figures S1a, S1c, S1e, S1g; Fig. 8). BAX protein levels were significantly higher in the MNU-P-MAPA group compared to the MNU, MNU-BCG and CONTROL groups. These groups exhibited intense, moderate, moderate and weak immunoreactivities in the urothelium, respectively (Figs. 5f, l, r, x and 7; Additional file 2: Table S2).
Percentage of Proliferative (Ki-67) and Apoptotic Indexes
Proliferative activity was significantly increased in animals from the MNU group in relation to the other experimental groups (Additional file 3: Figures S1b, S1d, S1f, S1h; Fig. 8). The MNU-P-MAPA group displayed significantly lower average values of proliferative index than the MNU-BCG group, although these values were significantly higher than those found in the CONTROL group (Additional file 3: Figures S1b, S1d, S1f, S1h; Fig. 8).
Furthermore, the proliferation/apoptotic ratio (P/A) was significantly higher in the MNU and MNU-BCG groups when compared to CONTROL group (Fig. 9). However, the P/A ratio in the MNU-P-MAPA was significantly lower in relation to the other experimental groups, indicating predominance of the apoptotic process (Fig. 9).
Proliferation/Apoptotic Ratio (P/A)
P-MAPA intravesical immunotherapy suppresses angiogenesis
VEGF protein levels were significantly higher in the MNU group in relation to the other experimental groups (Fig. 7). Furthermore, these levels were significantly higher in the MNU-BCG group compared to the MNU-P-MAPA and CONTROL groups (Fig. 7).
Endostatin protein levels were significantly higher in the MNU-P-MAPA and CONTROL groups when compared to the MNU-BCG and MNU groups (Fig. 7).
Although the use of TURBT with adjuvant chemo- and immunotherapy represents a clear advance in the treatment of NMIBC, the management of this disease, mainly for high-grade tumors, remains a challenge because of the high rates of recurrence and progression to muscle-invasive and/or metastatic stages. Following episodes of high-grade NMIBC recurrence after BCG therapy, several conventional chemotherapy agents have been used, including gemcitabine, mitomycin, gemcitabine plus mitomycin, docetaxel and valrubicin. In addition, immunotherapy (interferon-alpha or interferon-alpha plus BCG) has also been used [31]. Mycobacterium phlei cell wall-nucleic acid complex (MCNA) has been proposed for intravesical treatment of NMIBC at high risk of recurrence or progression in patients who failed prior BCG immunotherapy (e.g., patients who are BCG-refractory or BCG-relapsing) and who are not candidates for or refuse cystectomy [32]. However, none of these drugs has shown superiority over BCG and they remain investigational [14]. In the specific case of BCG-refractory CIS, valrubicin, a semi-synthetic analog of doxorubicin and the only FDA-approved drug for the treatment of this condition, is effective in less than 10 % of treated patients at 2 years and in none with coincident stage T1 disease [33].
The surgical option for such cases, partial or total cystectomy, is often associated with significant morbidity and mortality. Furthermore, for some patients, cystectomy is not an available option due to the presence of concomitant comorbidities. Consequently, novel therapies are highly needed for the treatment of high-grade NMIBC, to prevent disease progression, to allow bladder preservation and ensure quality of life for patients and, finally, to provide an option for those who are ineligible for cystectomy.
The P-MAPA Biological Response Modifier, which shows novel therapeutic properties compared to standard treatments, appears to be a valuable candidate drug for the treatment of NMIBC. In our previous studies we have shown several beneficial properties of P-MAPA [11, 12]. Here, using the MNU animal model of NMIBC, we clearly show that P-MAPA treatment enables better histopathological recovery from the cancer state than no treatment (MNU group) or BCG treatment (MNU-BCG group).
Agonists of TLRs are the subject of intensive research and development for the treatment of cancer, including bladder cancer [11, 12, 33]. TLRs, which are expressed in immune as well as in some epithelial cells, play an important role in activating both innate and adaptive immune responses [33, 34]. Bladder tumors, especially non-muscle-invasive ones, show decreased TLR expression [35, 36]. The TLR-mediated action of BCG immunotherapy for NMIBC suggests that alternative TLR-based immunotherapies might also be successful strategies for this type of cancer. The BCG antitumor effects seem to be related to local immunological mechanisms, since after BCG instillation a transient increase in several cytokines and the presence of activated immunocompetent leukocytes were found in the urine within 24 h [37]. Local lymphocytic infiltration and cytokine production were found in the bladder wall of most patients receiving intravesical BCG, and it was demonstrated that this local response is highly complex [37–39]. TNF-related apoptosis-inducing ligand (TRAIL) is released from polymorphonuclear neutrophils (PMNs) via stimulation of TLR2 by BCG [33]. Secretion of interleukin-8, a strong chemoattractant for monocytes and T cells, is also induced in PMNs by BCG infection via MyD88-dependent TLR2 and TLR4 activation [33, 40], whereas BCG activation of TLR2 and TLR4 induces TNF-α secretion from dendritic cells (DCs) [33, 41, 42].
The TNF signaling pathway may induce carcinogenesis by up-regulating NF-kB, leading to the up-regulation of other proteins that cause cell proliferation and morphogenesis [40]. In TNF-knockout mice, the development of skin carcinomas induced by the chemical carcinogen DMBA (7,12-dimethylbenz[a]anthracene) and the tumor promoter TPA (12-O-tetradecanoylphorbol-13-acetate) was decreased compared with wild-type mice [43, 44]. Using pentoxifylline, which was shown to inhibit TNF and IL-1a gene expression, the growth of DMBA/TPA-induced papillomas was inhibited [45]. These results suggest that a chemical tumor promoter can induce the secretion of TNF-α from different cell types and that TNF can act as an endogenous tumor promoter in vivo [46]. TNF-α was identified as the major host-produced factor that enhances the growth of metastases in a lung cancer animal model, in part through activation of NF-kB in the tumor cells [47].
We have demonstrated here that BCG increased TLR2 and TLR4 protein levels in the NMIBC model, corroborating our previous studies [11, 12]. This induces the MyD88-dependent pathway, as shown by increased MyD88, IKK-α and NF-kB protein levels. The induction of the MyD88-dependent (canonical) pathway increases the protein levels of the inflammatory cytokines IL-6 and TNF-α. Accordingly, the activation of the immune system by BCG treatment via the MyD88-dependent pathway (Additional file 4: Figure S2a) was essential for histopathological recovery from the cancer state.
TLR4 activation of host macrophages resulted in the production of several different inflammatory cytokines that influenced tumor growth. However, TLR4 signaling also induces cytokines (IFN) that have antitumor effects through induction of TRAIL, a potent inducer of tumor cell death [47]. Shankaran et al. [48] showed that the tumor-suppressor function of the immune system critically depends on the actions of IFN-γ, which, at least in part, regulate tumor-cell immunogenicity. IFN-γ stimulates several antiproliferative and tumoricidal biochemical pathways in macrophages and in tumor cell lines, has a profound impact on solid tumor growth and metastasis, and seemingly plays an early role in protection from metastasis [49–55]. IFN-γ produced by IL-12-activated tumor-infiltrating CD8+ T cells directly induced apoptosis of mouse hepatocellular carcinoma cells [52, 53]. The NLRCs, a class of intracellular receptors that respond to pathogens or cellular stress, have recently been identified as critical regulators of immune responses [56, 57]. While NLRC5 is constitutively and widely expressed, its levels can be dramatically induced by interferons during pathogen infections. Both in vitro and in vivo studies have demonstrated that NLRC5 is a specific and master regulator of major histocompatibility complex (MHC) class I genes as well as of related genes involved in MHC class I antigen presentation [56, 57].
In this study, we demonstrated that TLR2 and TLR4 protein levels were significantly higher in the P-MAPA group than in the BCG group in the NMIBC animal model. Also, P-MAPA treatment led to increased TRIF and IRF-3 protein levels, indicating activation of the MyD88-independent pathway (Additional file 4: Figure S2b). The induction of the MyD88-independent pathway (non-canonical or TRIF-dependent pathway) by P-MAPA led to increased IFN-γ and iNOS (type 1 macrophages – M1) protein levels. In contrast to BCG treatment, P-MAPA immunotherapy led to a distinct, TLR2- and TLR4-mediated activation of the innate immune system, resulting in increased interferon signaling (Additional file 4: Figure S2b), which was more effective in the treatment of NMIBC. Also, as a result of the induction of the interferon signaling pathway (IFN-γ and IRF-3) by P-MAPA, the proliferation/apoptosis ratio was significantly lower in animals treated with P-MAPA, indicating predominance of the apoptotic process. Accordingly, P-MAPA immunotherapy increased NOD-like receptor 5 (NLRC5) protein levels, which were fundamental for the induction of the interferon signaling pathway (Additional file 4: Figure S2b). Thus, the activation of the interferon signaling pathway was more effective in inducing immunogenic cell death than the inflammatory cytokine signaling pathway.
The IFN-γ produced by tumor-infiltrating T cells might play two distinct roles in antitumor activity: activation of antitumor T cells and direct tumoricidal activity through the generation of inducible nitric oxide synthetase (iNOS) [48, 58]. NO is considered one of the main factors responsible for macrophage cytotoxic activity against tumor cells [50, 59]. Previous data showing increased NO concentrations in the urinary bladder of patients treated with BCG [59–61] suggest NO as a critical factor in the BCG-mediated antitumor effect [56]. NO can stimulate cell growth and cell differentiation when present at low concentrations, whereas high concentrations often result in cytotoxic effects [59]. Tate et al. [50] demonstrated that iNOS induction within renal carcinoma cells (CL-2 and CL-19) in response to IFN-γ caused a robust and sustained accumulation of endogenous NO that resulted in an 80–85 % growth inhibition of the CL-2 and CL-19 cell lines. In patients with bladder cancer who had received BCG treatment, iNOS-like immunoreactivity was found not only in urothelial cells but also in macrophages in the submucosa [56]. Koskela et al. [59] verified that endogenously formed NO was significantly increased in BCG-treated patients, who showed a ten-fold increase in mRNA expression for iNOS compared to healthy controls. In culture supernatants from macrophages stimulated by P-MAPA, from both healthy dogs and dogs infected with visceral leishmaniasis, NO production was increased [62]. Thus, it can be concluded that the interferon signaling pathway activation induced by P-MAPA led to increased iNOS protein levels in the NMIBC animal model, resulting in an increased apoptotic process and histopathological recovery (Additional file 4: Figure S2b).
Furthermore, cell death may depend on NO-stimulated signaling pathways leading to gene expression, involving the tumor suppressor p53 [63–65]. Activation of p53 by NO has been observed in many cell types [66, 67]. NO-induced p53 contributes to various cell type-specific biological effects of NO, such as induction of apoptosis, inhibition of proliferation and tumor suppression [66–68]. Besides that, p53 controls a remarkable number of physiologic functions, including energy metabolism, differentiation and reactive oxygen species production, and is stabilized and activated in response to diverse stress signals, such as DNA damage, hypoxia, oncogene activation, drugs and nucleotide depletion [64]. Cells possessing a fully functional p53 pathway can either arrest and repair the damage caused by these untimely stresses or undergo p53-dependent apoptosis. BAX is considered an important target gene required for p53-dependent apoptosis [64]. Induction of p53 by NO is preceded by a rapid decrease in Mdm2 protein, which may enable p53 levels to rise early after exposure to NO [67]. Wang et al. [67] showed that NO promoted p53 nuclear retention and inhibited Mdm2-mediated p53 nuclear export, indicating that this effect is mediated by ATM-dependent phosphorylation of p53 on Serine 15. In conclusion, these findings imply that, by augmenting p53 nuclear retention, NO can sensitize tumor cells to p53-dependent apoptosis.
Several studies suggest that antiangiogenic therapy is sensitive to p53 status in tumors, implicating a role for p53 in the regulation of angiogenesis [18, 19, 69]. A connection between p53 and tumor angiogenesis was revealed in 1994, when Dameron et al. [69] proposed that suppression of angiogenesis by thrombospondin-1 could represent a new mechanism of tumor suppression by p53. Other evidence emerged that wild-type p53 could prevent incipient tumors from becoming angiogenic [70]. Teodoro et al. [19] demonstrated that p53 tumor suppression was mediated in part by at least two potent angiogenesis inhibitors, endostatin and tumstatin. In addition, these authors showed that ectopic expression of α(II) collagen prolyl-4-hydroxylase in human tumor cells implanted into immunodeficient mice resulted in "near-complete" tumor suppression compared with mice implanted with tumor cells that did not express α(II) collagen prolyl-4-hydroxylase, and associated these results with suppression of tumor angiogenesis by endostatin or tumstatin. Thus, the present study demonstrated an important antitumor effect of P-MAPA immunotherapy, based on increased endostatin protein levels and decreased VEGF protein levels in the NMIBC animal model. Therefore, interferon signaling pathway induction and increased wild-type p53 protein levels induced by P-MAPA led to important antitumor effects, not only suppressing abnormal cell proliferation but also preventing continuous expansion of the tumor mass through suppression of angiogenesis.
BAX, bcl-2-like protein 4; BC, bladder cancer; BCG, Bacillus Calmette-Guerin; BSA, bovine serum albumin; HRP, horseradish peroxidase; IFN-γ, interferon-gamma; IL, interleukin; IL-6, interleukin 6; iNOS, inducible nitric oxide synthetase; IRF-3, interferon regulatory factor 3; MNU, n-methyl-n-nitrosourea; MyD88, myeloid differentiation primary response 88; NF-kB, nuclear factor-kB; NK, natural killer cell; NLRC5, NOD like receptor 5; NMIBC, non-muscle invasive bladder cancer; NO, nitric oxide; P-MAPA, protein aggregate magnesium-ammonium phospholinoleate-palmitoleate anhydride; pT1, tumor confined to the mucosa and submucosa of the bladder; pTa, papillary tumor; pTis, carcinoma in situ; TLR, toll-like receptor; TNF-α, tumor necrosis factor α; TRAF2, TNF receptor-associated factor 2; TRIF, TIR-domain-containing adapter-inducing interferon-β
American Cancer Society. Bladder Cancer Statistics. 2015. http://www.cancer.org/cancer/bladdercancer/detailedguide/bladder-cancer-key-statistics. Accessed at 10 Dec 2015.
Zhang N, Li D, Shao J, Wang X. Animal models for bladder cancer: the model establishment and evaluation. Oncol Lett. 2015;9:1515–19.
Shimada K, Fujii T, Anai S, Fujimoto K, Konishi N. ROS generation via NOX4 and its utility in the cytological diagnosis of urothelial carcinoma of the urinary bladder. BMC Urol. 2011;11:01–12.
Epstein JI, Amin MB, Reuter VR, Mostofi FK. The World Health Organization/International Society of Urological Pathology consensus classification of urothelial (transitional cell) neoplasms of the urinary bladder. Bladder Consensus Conference Committee. Am J Surg Pathol. 1998;22:1435–48.
Askeland EJ, Newton MR, O'Donnell MA, Luo Y. Bladder cancer immunotherapy: BCG and Beyond. Adv Urol. 2012;18:01–12.
Böhle A, Brandau S. Immune mechanisms in bacillus Calmette Guerin Immunotherapy for superficial bladder cancer. J Urol. 2003;170:964–69.
DiPaola RS, Lattime EC. Bacillus Calmette-Guerin mechanism of action: role of immunity, apoptosis, necrosis and autophagy. J Urol. 2007;178:1840–1.
Berry DL, Blumenstein BA, Magyary DL, Lamm DL, Crawford ED. Local toxicity patterns associated with intravesical bacillus Calmette-Guerin: a Southwest Oncology Group study. Int J Urol. 1996;3:98–100.
Herr HW, Milan TN, Dalbagni G. BCG-refractory vs. BCG-relapsing non-muscle-invasive bladder cancer: a prospective cohort outcomes study. Urol Oncol. 2015;33:108.e1–4.
Killeen SD, Wang JH, Andrews EJ, Redmond HP. Exploitation of the Toll like receptor system in cancer: a doubled-edged sword? Br J Cancer. 2006;95:247–52.
Fávaro WJ, Nunes OS, Seiva FR, Nunes IS, Woolhiser LK, Duran N, et al. Effects of P-MAPA immunomodulator on Toll-like receptors and p53: potential therapeutic strategies for infectious diseases and cancer. Infect Agent Cancer. 2012;7:01–15.
Garcia PV, Apolinário LM, Böckelmann PK, da Silva NI, Duran N, Fávaro WJ. Alterations in ubiquitin ligase Siah-2 and its corepressor N-CoR after P-MAPA immunotherapy and anti-androgen therapy: new therapeutic opportunities for non-muscle invasive bladder cancer. Int J Clin Exp Pathol. 2015;8:4427–43.
Akira S, Takeda K. Toll-like receptor signalling. Nat Rev Immunol. 2004;4:499–511.
Takeda K, Akira S. TLR signaling pathways. Semin Immunol. 2004;16:03–9.
Zhao S, Zhang Y, Zhang Q, Wang F, Zhang D. Toll-like receptors and prostate cancer. Front Immunol. 2014;5:352.
Menendez D, Shatz M, Azzam K. The Toll-like receptor gene family is integrated into human DNA damage and p53 networks. Plos Genet. 2011;3:1–15.
Shariat SF, Lotan Y, Karakiewicz PI, Ashfaq R, Isbarn H, Fradet Y, et al. p53 predictive value for pT1-2 N0 disease at radical cystectomy. J Urol. 2009;182:907–13.
Folkman J. Antiangiogenesis in cancer therapy--endostatin and its mechanisms of action. Exp Cell Res. 2006;312:594–607.
Teodoro JG, Parker AE, Zhu X, Green MR. p53-mediated inhibition of angiogenesis through up-regulation of a collagen prolyl hydroxylase. Science. 2006;313:968–71.
Verdegem D, Moens S, Stapor P, Carmeliet P. Endothelial cell metabolism: parallels and divergences with cancer cell metabolism. Cancer Metab. 2014. doi:10.1186/2049-3002-2-19.
Waltenberger J. VEGF resistance as a molecular basis to explain the angiogenesis paradox in diabetes mellitus. Biochem Soc Trans. 2009;37:1167–70.
Zhu W, He S, Li Y, Qiu P, Shu M, Ou Y, et al. Anti-angiogenic activity of triptolide in anaplastic thyroid carcinoma is mediated by targeting vascular endothelial and tumor cells. Vascul Pharmacol. 2010;52:46–54.
Abdollahi A, Lipson KE, Sckell A, Zieher H, Klenke F, Poerschke D, et al. Combined therapy with direct and indirect angiogenesis inhibition results in enhanced antiangiogenic and antitumor effects. Cancer Res. 2003;63:8890–98.
O'Reilly MS, Bohem T, Shing Y, Fukai N, Vasios G, Lane WS, et al. Endostatin: an endogenous inhibitor of angiogenesis and tumor growth. Cell. 1997;88:277–85.
Nunes OS. Desenvolvimento de um novo antibiótico. In: Reunião Anual da Sociedade Brasileira para o Progresso da Ciência, 37, 1985. Belo Horizonte: Anais; 1985. p. 823–4.
Duran N, Nunes OS. Characterization of an aggregated polymer from Penicilium sp. (PB 73 STRAIN). Braz J Med Biol Res. 1990;23:1289–302.
Duran N. SB-73 immunostimulant. Drugs Future. 1993;18:327–34.
Duran N. SB-73/MAPA. Drugs Future. 1997;22:454.
Farmabrasilis. The Farmabrasilis register. http://www.farmabrasilis.org (1987). Accessed 01 Dec 2015.
Melo LM, Perosso J, Almeida BF, Silva KL, Somenzani MA, de Lima VM. Effects of P-MAPA immunomodulator on Toll-like receptor 2, ROS, nitric oxide, MAPKp38 and IKK in PBMC and macrophages from dogs with visceral leishmaniasis. Int Immunopharmacol. 2014;18:373–8.
Lightfoot AJ, Rosevear HM, O'Donnell MA. Recognition and treatment of BCG failure in bladder cancer. Sci World J. 2011;11:602–13.
Morales A, Herr H, Steinberg G, Given R, Cohen Z, Amrhein J, Kamat AM. Efficacy and safety of MCNA in patients with nonmuscle invasive bladder cancer at high risk for recurrence and progression after failed treatment with bacillus Calmette-Guérin. J Urol. 2015;193:1135–43.
Steinberg GD, Smith ND, Ryder K, Strangman NM, Slater SJ. Factors affecting valrubicin response in patients with bacillus Calmette-Guérin-refractory bladder carcinoma in situ. Postgrad Med. 2011;123:28–34.
LaRue H, Ayari C, Bergeron A, Fradet Y. Toll-like receptors in urothelial cells--targets for cancer immunotherapy. Nat Rev Urol. 2013;10:537–45.
Ayari C, Bergeron A, LaRue H, Ménard C, Fradet Y. Toll-like receptors in normal and malignant human bladders. J Urol. 2011;185:1915–21.
Stopiglia RM, Matheus W, Garcia PV, Billis A, Castilho MA, De Jesus VHF, Ferreira U, Fávaro WJ. Molecular assessment of non-muscle invasive and muscle invasive bladder tumors: mapping of putative urothelial stem cells and Toll-Like Receptors (TLR) signaling. J Cancer Ther. 2015;6:129–40.
Yu JS, Peacock JW, Jacobs Jr WR, Frothingham R, Letvin NL, Liao HX, Haynes BF. Recombinant Mycobacterium bovis bacillus Calmette-Guerin elicits human immunodeficiency virus type 1 envelope-specific T lymphocytes at mucosal sites. Clin Vaccine Immunol. 2007;14:886–93.
Boccafoschi C, Montefiore F, Pavesi M, Pastormerlo M, Betta PG. Late effects of intravesical bacillus Calmette-Guérin immunotherapy on bladder mucosa infiltrating lymphocytes: an immunohistochemical study. Eur Urol. 1995;27:334–8.
Sander B, Damm O, Gustafsson B, Andersson U, Håkansson L. Localization of IL-1, IL-2, IL-4, IL-8 and TNF in superficial bladder tumors treated with intravesical bacillus Calmette-Guerin. J Urol. 1996;156:536–41.
Godaly G, Young DB. Mycobacterium bovis bacille Calmette Guerin infection of human neutrophils induces CXCL8 secretion by MyD88-dependent TLR2 and TLR4 activation. Cell Microbiol. 2005;7:591–601.
Tsuji S, Matsumoto M, Takeuchi O, Akira S, Azuma I, Hayashi A, Toyoshima K, Seya T. Maturation of human dendritic cells by cell wall skeleton of Mycobacterium bovis bacillus Calmette-Guérin: involvement of toll-like receptors. Infect Immun. 2000;68:6883–90.
Simons MP, O'Donnell MA, Griffith TS. Role of neutrophils in BCG immunotherapy for bladder cancer. Urol Oncol. 2008;26:341–5.
Waterston AM, Salway F, Andreakos E, Butler DM, Feldmann M, Coombes RC. TNF autovaccination induces self anti-TNF antibodies and inhibits metastasis in a murine melanoma model. Br J Cancer. 2004;90:1279–84.
Suganuma M, Okabe S, Marino MW, Sakai A, Sueoka E, Fujiki H. Essential role of tumor necrosis factor alpha (TNF-alpha) in tumor promotion as revealed by TNF-alpha-deficient mice. Cancer Res. 1999;59:4516–8.
Robertson FM, Ross MS, Tober KL, Long BW, Oberyszyn TM. Inhibition of pro-inflammatory cytokine gene expression and papilloma growth during murine multistage carcinogenesis by pentoxifylline. Carcinogenesis. 1996;17:1719–28.
Komori A, Yatsunami J, Suganuma M, Okabe S, Abe S, Sakai A. Tumor necrosis factor acts as a tumor promoter in BALB/3T3 cell transformation. Cancer Res. 1993;53:1982–5.
Luo JL, Maeda S, Hsu LC, Yagita H, Karin M. Inhibition of NF-kappaB in cancer cells converts inflammation- induced tumor growth mediated by TNFalpha to TRAIL-mediated tumor regression. Cancer Cell. 2004;6:297–305.
Shankaran V, Ikeda H, Bruce AT, White JM, Swanson PE, Old LJ, et al. IFNgamma and lymphocytes prevent primary tumour development and shape tumour immunogenicity. Nature. 2001;410:1107–11.
Alshaker HA, Matalka KZ. IFN-γ, IL-17 and TGF-β involvement in shaping the tumor microenvironment: the significance of modulating such cytokines in treating malignant solid tumors. Cancer Cell Int. 2011. doi:10.1186/1475-2867-11-33.
Tate Jr DJ, Patterson JR, Velasco-Gonzalez C, Carroll EN, Trinh J, Edwards D, et al. Interferon-gamma-induced nitric oxide inhibits the proliferation of murine renal cell carcinoma cells. Int J Biol Sci. 2012;8:1109–20.
Li Z, Pradera F, Kammertoens T, Li B, Liu S, Qin Z. Cross-talk between T cells and innate immune cells is crucial for IFN-gamma-dependent tumor rejection. J Immunol. 2007;179:1568–76.
Komita H, Homma S, Saotome H, Zeniya M, Ohno T, Toda G. Interferon-gamma produced by interleukin-12-activated tumor infiltrating CD8+T cells directly induces apoptosis of mouse hepatocellular carcinoma. J Hepatol. 2006;45:662–72.
Martini M, Testi MG, Pasetto M, Picchio MC, Innamorati G, Mazzocco M. IFN-gamma-mediated upmodulation of MHC class I expression activates tumor-specific immune response in a mouse model of prostate cancer. Vaccine. 2010;28:3548–57.
Street SE, Cretney E, Smyth MJ. Perforin and interferon-gamma activities independently control tumor initiation, growth, and metastasis. Blood. 2001;97:192–7.
duPre' SA, Redelman D, Hunter Jr KW. Microenvironment of the murine mammary carcinoma 4T1: endogenous IFN-gamma affects tumor phenotype, growth, and metastasis. Exp Mol Pathol. 2008;85:174–88.
Meissner N, Swain S, McInnerney K, Han S, Harmsen AG. Type-I IFN signaling suppresses an excessive IFN-gamma response and thus prevents lung damage and chronic inflammation during Pneumocystis (PC) clearance in CD4 T cell-competent mice. Am J Pathol. 2010;176:2806–18.
Yao Y, Qian Y. Expression regulation and function of NLRC5. Protein Cell. 2013;4:168–75.
Beatty GL, Paterson Y. Regulation of tumor growth by IFN-gamma in cancer immunotherapy. Immunol Res. 2001;24:201–10.
Koskela LR, Poljakovic M, Ehrén I, Wiklund NP, de Verdier PJ. Localization and expression of inducible nitric oxide synthase in patients after BCG treatment for bladder cancer. Nitric Oxide. 2012;27:185–91.
Hosseini A, Koskela LR, Ehrén I, Aguilar-Santelises M, Sirsjö A, Wiklund NP. Enhanced formation of nitric oxide in bladder carcinoma in situ and in BCG treated bladder cancer. Nitric Oxide. 2006;15:337–43.
Andrade PM, Chade DC, Borra RC, Nascimento IP, Villanova FE, Leite LC, Andrade E, Srougi M. The therapeutic potential of recombinant BCG expressing the antigen S1PT in the intravesical treatment of bladder cancer. Urol Oncol. 2010;28:520–5.
Melo GD, Silva JE, Grano FG, Homem CG, Machado GF. Compartmentalized gene expression of toll-like receptors 2, 4 and 9 in the brain and peripheral lymphoid organs during canine visceral leishmaniasis. Parasite Immunol. 2014;12:726–31.
Benhar M, Stamler JS. A central role for S-nitrosylation in apoptosis. Nat Cell Biol. 2005;7:645–6.
Zeini M, Través PG, López-Fontal R, Pantoja C, Matheu A, Serrano M. Specific contribution of p19(ARF) to nitric oxide-dependent apoptosis. J Immunol. 2006;177:3327–36.
Lim LY, Vidnovic N, Ellisen LW, Leong CO. Mutant p53 mediates survival of breast cancer cells. Br J Cancer. 2009;101:1606–12.
Wang XW, Hussain SP, Huo TI, Wu CG, Forgues M, Hofseth LJ, et al. Molecular pathogenesis of human hepatocellular carcinoma. Toxicology. 2002;181:43–7.
Wang C, Chen J. Phosphorylation and hsp90 binding mediate heat shock stabilization of p53. J Biol Chem. 2003;278:2066–71.
Umansky V, Schirrmacher V. Nitric oxide-induced apoptosis in tumor cells. Adv Cancer Res. 2001;82:107–31.
Dameron KM, Volpert OV, Tainsky MA, Bouck N. Control of angiogenesis in fibroblasts by p53 regulation of thrombospondin-1. Science. 1994;265:1582–4.
Zhang ZG, Zhang L, Jiang Q, Zhang R, Davies K, Powers C, Bruggen Nv, Chopp M. VEGF enhances angiogenesis and promotes blood-brain barrier leakage in the ischemic brain. J Clin Invest. 2000;106:829–38.
Farmabrasilis-Brazil, CNPq-Brazil, FAPESP-Brazil, Fundação Araucária de Apoio ao Desenvolvimento Científico e Tecnológico do Paraná, Maria Claudia Falaschi Nunes and Silmara da Silva Nunes (Domus School) are acknowledged.
This work was supported by Farmabrasilis-Brazil, CNPq-Brazil (Process numbers 490519/2011-3; 475211/2013-8), FAPESP-Brazil (Process numbers 2011/05726-4; 2012/20706-2; 2012/13585-4; 2014/20465-0), NanoBioss/Sisnano (CNPq-Brazil, Process number 402280/2013-0) and Fundação Araucária de Apoio ao Desenvolvimento Científico e Tecnológico do Paraná (Process numbers 225/2014; 656/2014). The funding agencies have no involvement with the design of the study and collection, analysis, interpretation of data and writing the manuscript.
The data set supporting the conclusions of this article is included within the article and its Additional file 5.
PVG, FRFS, AMM, RR and WJF designed the experiments. PVG, FRFS, APC, AGO and WJF performed the experiments. PVG, FRFS, WMJ, AGO, ND, ISN, RR and WJF analyzed data. ISN and OSN developed the P-MAPA Biological Response Modifier. PVG, ISN and WJF wrote the manuscript. All authors read and approved the final manuscript.
Not applicable because this manuscript does not contain any individual persons data.
This study was approved by the Committee for Ethics in Animal use of the University of Campinas – CEUA/UNICAMP, protocol number 2684-1 (Additional file 6).
Laboratory of Urogenital Carcinogenesis and Immunotherapy, Department of Structural and Functional Biology, University of Campinas (UNICAMP), P.O. BOX 6109, zip code 13083-865, Campinas, São Paulo, Brazil
Patrick Vianna Garcia
, Amanda Pocol Carniato
& Wagner José Fávaro
Institute of Biology, North of Parana State University (UENP), Bandeirantes, PR, Brazil
Fábio Rodrigues Ferreira Seiva
Department of Anatomy, Institute of Biosciences, UNESP - Univ Estadual Paulista, Botucatu, SP, Brazil
Wilson de Mello Júnior
Farmabrasilis R&D Division, Campinas, SP, Brazil
Nelson Duran
, Alda Maria Macedo
, Iseu da Silva Nunes
, Odilon da Silva Nunes
NanoBioss, Institute of Chemistry, University of Campinas (UNICAMP), Campinas, SP, Brazil
Department of Internal Medicine, University of Campinas (UNICAMP), Campinas, SP, Brazil
Alexandre Gabarra de Oliveira
Department of Physical Education, São Paulo State University (UNESP), Rio Claro, SP, Brazil
Institute of Cell Biology, Faculty of Medicine, University of Ljubljana, Ljubljana, Slovenia
Rok Romih
Correspondence to Wagner José Fávaro.
Additional file 1: Table S1.
Percentage of histopathological changes of the urinary bladder of rats from CONTROL, MNU, MNU-BCG and MNU-P-MAPA groups. (DOCX 61 kb)
Additional file 2: Table S2.
Semiquantitative analysis of immunolabelled antigens of the urinary bladder of rats in the different experimental groups. (DOCX 165 kb)
Additional file 3: Figures S1a–S1h.
Immunolabelled Ki-67 intensities and detection of apoptosis in the urinary bladder from the CONTROL (a, b), MNU (c, d), MNU-BCG (e, f), and MNU-P-MAPA (g, h) groups. (a), (c), (e) and (g) DNA fragmentation (arrows) in the urothelium. (b), (d), (f) and (h) Ki-67 immunoreactivities (arrows) in the urothelium. a–h: Ur urothelium. (JPG 1261 kb)
Additional file 4: Figures S2a–S2b.
(a) Schematic representation of the mechanism of action of BCG involving TLRs signaling pathway. (b) Hypothetical mechanism of P-MAPA immunotherapy (Developed by Wagner José Fávaro and Farmabrasilis). (JPG 1048 kb)
Additional file 5: Availability of Data and Materials (DOCX 25 kb)
Additional file 6: Ethics Approval (JPG 1348 kb)
Toll-like Receptor
P-MAPA
Bacillus Calmette–Guerin | CommonCrawl |
\begin{document}
\title{Improving the speed of variational quantum algorithms\\ for quantum error correction}
\author{Fabio Zoratti} \affiliation{Scuola Normale Superiore, I-56126 Pisa, Italy} \author{Giacomo De Palma} \affiliation{Department of Mathematics, University of Bologna, 40126 Bologna, Italy} \author{Bobak Kiani} \affiliation{Department of Electrical Engineering and Computer Science, MIT, 77 Massachusetts Avenue, Cambridge, MA 02139, USA} \author{Quynh T. Nguyen} \affiliation{Department of Electrical Engineering and Computer Science, MIT, 77 Massachusetts Avenue, Cambridge, MA 02139, USA} \author{Milad Marvian} \affiliation{Department of Electrical and Computer Engineering and Center for Quantum Information and Control, University of New Mexico, USA} \author{Seth Lloyd} \affiliation{\mbox{Department of Mechanical Engineering, MIT 77 Massachusetts Avenue, Cambridge, MA 02139, USA} \\Turing Inc., Cambridge, MA 02139, USA} \author{Vittorio Giovannetti} \affiliation{NEST, Scuola Normale Superiore and Istituto Nanoscienze-CNR, I-56126 Pisa, Italy}
\begin{abstract}
We consider the problem of devising a suitable Quantum Error Correction (QEC) procedure for a generic quantum noise acting on a quantum circuit. In general, there is no analytic universal procedure to obtain the encoding and correction unitary gates, and the problem is even harder if the noise is unknown and has to be reconstructed. The existing procedures rely on Variational Quantum Algorithms (VQAs) and are very difficult to train since the size of the gradient of the cost function decays exponentially with the number of qubits. We address this problem using a cost function based on the Quantum Wasserstein distance of order 1 ($QW_1$). At variance with other quantum distances typically adopted in quantum information processing, $QW_1$
lacks the unitary invariance property, which makes it a suitable tool to avoid getting trapped in
local minima.
Focusing on a simple noise model for which an exact QEC solution is known and can be used as a theoretical benchmark, we run a series of numerical tests that show how guiding the VQA search through the $QW_1$ can indeed significantly increase both the probability of a successful training and the fidelity of the recovered state, with respect to the results one obtains when using conventional approaches.\end{abstract}
\maketitle
\section{Introduction}
Performing reliable computations on physically imperfect hardware is something that has become usual nowadays, given the current state of classical computers, which can produce perfect results without any software-side mitigation of the imperfections of the physical media where the computation happens. Error correction is based on the fact that these machines automatically perform, on the hardware side, procedures that allow errors to happen and to be fixed without any intervention from the end user. This kind of setting is even more crucial in a quantum scenario, where the current noisy intermediate-scale quantum (NISQ) computers have a much larger error rate than their classical counterparts~\cite{nisq_review_2022}. Performing reliable computations with a trustworthy error correction procedure has direct implications not only in quantum computation~\cite{preskill_2012, Preskill_2018}, but potentially also in all the other sectors of quantum technology which indirectly rely on it~(e.g.~quantum communication or quantum key distribution~\cite{Gisin2002, Lo2014,Pirandola2020}).
In the typical Quantum Error Correction (QEC) scheme, the quantum information that has to be protected is stored in a subspace of a larger Hilbert space, using an \emph{encoding} procedure. Stabilizer codes~\cite{knill2001_5qubits}, which are among the best analytical results in this field, are not universal because they are tailored for a generic noise acting on a small but unknown subset of qubits. Several attempts have already been made to create numerical optimization procedures to find an error correction code for specific noise models~\cite{fletcher_2008, robust_qec_2008, soraya_2010, Chiani2020}, but these approaches are not universal because they rely heavily on the specific noise acting on the quantum circuit; this is a problem because real quantum devices are not characterized by a single kind of quantum noise. Some attempts have been made to characterize the noise of current and near-term devices~\cite{Koch_2007,PhysRevLett.114.010501}, but these methods will soon become very difficult to implement because classical computers are not able to efficiently simulate quantum circuits when the number of qubits increases. Near-term devices with approximately 50 qubits may already be intractable to simulate for supercomputers~\cite{Boixo_2018}.
If we define a figure of merit for the quality of the state after the action of the noise and its corresponding correction, the natural choice for the optimization algorithm is a Variational Quantum Algorithm (VQA)~\cite{cerezo_vqa2021}. These are hybrid algorithms that couple a quantum computer with a classical one. In this setting, usually, a parametric quantum circuit is applied to some reference state, some measurements are performed on the system, and the outcomes are given to the classical computer to perform a minimization of a given cost function (from this point of view the optimization procedure in a VQA can be seen as the training phase in machine learning). Some examples of this class of algorithms are the variational quantum eigensolver~\cite{vqe_review_2021} and the Quantum Approximate Optimization Algorithm~\cite{qaoa}. Proposals to use VQAs to address QEC problems are already present in the literature~\cite{johnson2017qvector}. Unfortunately, VQAs usually suffer from the phenomenon of barren plateaus~\cite{barren_plateaus_2018,cerezo_barren_2021}, namely the gradient of the cost function decays exponentially with respect to the number of qubits of the system, leading to an untrainable model. A fundamental theoretical reason for this behavior is that barren plateaus originate when the cost function of the problem is global, i.e. mediated by a highly non-local operator~\cite{cerezo_barren_2021}.
To avoid these effects we propose here to guide the VQA search using cost functions inspired by the
Quantum Wasserstein distance of order~$1$ (or $QW_1$ in brief) introduced in
Ref.~\cite{gdp_wasserstein_order_1} as a quantum generalization of the Hamming distance \cite{hamming_distance} on the set of bit strings. As we will detail in the following, at variance with more conventional
quantum distances typically adopted in quantum information, $QW_1$
lacks a fundamental symmetry (unitary invariance), which makes it a suitable candidate to
avoid the barren plateau problem. The rationale behind this is that for unitarily invariant distances, such as the trace distance or the distances derived from the fidelity, all the states of the computational basis are equally orthogonal and thus all have maximum distance one from each other. The $QW_1$ functional instead measures how many qubits differ between the two states, allowing the VQA gradient to be less flat in the regions that are not already very close to a local minimum. While this special property of $QW_1$ has already been observed in other contexts, such as the study of
quantum Generative Adversarial Networks presented in \cite{gdp2021ghz,kim2022hamiltonian,herr2021anomaly,anschuetz2022beyond,coyle2022machine,chakrabarti2019quantum}, here we test its effectiveness in identifying effective QEC procedures.
For this purpose, we run a series of numerical tests which compare the performance of a VQA
that adopts a conventional (i.e. unitarily invariant) cost function with that of a VQA which instead
relies on $QW_1$-like distances. Our findings confirm that in the second case the effectiveness of the numerical optimization
significantly increases, both in terms of the probability of a successful training and of
the fidelity of the recovered state.
The manuscript is organized as follows: in Sec. \ref{sec:W1} we present a concise, yet rather complete review of the $QW_1$ distance for qubits; in Sec.~\ref{sec:introQEC} we present some basic notions on conventional QEC procedures, which allows us to set the notation and the theoretical background; in Sec.~\ref{sec:general} we introduce our VQA, discussing the different choices of cost functions that can be used to guide it; in Sec.~\ref{sec:res} we present our numerical results, comparing the performance of the VQA implemented with different types of cost functions. Conclusions are given in Sec.~\ref{sec:con}.
\section{The quantum Wasserstein distance of order 1 for qubits}\label{sec:W1}
The theory of optimal mass transport \cite{villani2008optimal,Ambrosio2008,ambrosio2013user} considers probability distributions on a metric space as distributions of a unit amount of mass. The key element of such theory is the Monge--Kantorovich distance between probability distributions, which is the minimum cost that is required to transport one distribution onto the other, assuming that moving a unit of mass for a unit distance has cost one \cite{monge_wasser, kantorovich_wasser, vershik_2013}. Such distance is also called earth mover's distance or Wasserstein distance of order $1$, often shortened to $W_1$ distance. The exploration of the theory of optimal mass transport has led to the creation of an extremely fruitful field in mathematical analysis, with applications ranging from differential geometry and partial differential equations to machine learning \cite{Ambrosio2008, peyre2019computational,vershik_2013}.
The most natural distance on the set of the strings of $n$ bits is the Hamming distance~\cite{hamming_distance}, which counts the number of different bits. The resulting $W_1$ distance on the set of the probability distributions on strings of $n$ bits is called Ornstein's $\bar{d}$-distance~\cite{ornstein1973application}. Ref.~\cite{gdp_wasserstein_order_1} proposed a generalization of the $W_1$ distance to the space of the quantum states of a finite set of qubits, called quantum $W_1$ distance (or $QW_1$ in brief). The generalization is based on the notion of neighboring quantum states. Two quantum states of a finite set of qubits are neighboring if they coincide after discarding one qubit. The quantum $W_1$ distance of Ref.~\cite{gdp_wasserstein_order_1} is the distance induced by the maximum norm that assigns distance at most $1$ to any couple of neighboring states; in the case of quantum states diagonal in the computational basis it recovers Ornstein's $\bar{d}$-distance
and inherits most of its properties.
The $QW_1$ quantity can be computed with a semidefinite program, whose formulation requires defining a notion of Lipschitz constant for quantum observables. The Lipschitz constant of the observable $\hat{H}$ acting on the Hilbert space of $n$ qubits is \cite{gdp_wasserstein_order_1} \begin{equation}\label{eq:L}
\|\hat{H}\|_L = 2\max_{i=1,\,\ldots,\,n}\min_{\hat{H}_{i^c}}\left\|\hat{H} - \hat{\mathbb{I}}_i\otimes \hat{H}_{i^c}\right\|_\infty\,, \end{equation} where the minimization is performed over all the observables $\hat{H}_{i^c}$ that \emph{do not} act on the $i$-th qubit. The quantum $W_1$ distance between the quantum states $\hat{\rho}$ and $\hat{\sigma}$ can then be expressed as \cite{gdp_wasserstein_order_1} \begin{equation}
\|\hat{\rho} - \hat{\sigma}\|_{W_1} = \max_{\|\hat{H}\|_L\le1}\mathrm{Tr}\left[\left(\hat{\rho}-\hat{\sigma}\right)\hat{H}\right]\,. \end{equation} The present paper is based on the following lower bound to the quantum $W_1$ distance. Let \begin{equation}\label{eq:HW}
\hat{H}^{(\mathrm{wass})} = \sum_{i=1}^n |1\rangle_i\langle1| \otimes \hat{\mathbb{I}}_{i^c}\;, \end{equation}
be the quantum observable that counts the number of ones in the computational basis. We have $\left\|\hat{H}^{(\mathrm{wass})}\right\|_L = 1$ \cite{gdp_wasserstein_order_1}, therefore for any quantum state $\hat{\rho}$ we have \begin{equation}
\left\|\hat{\rho} - |0\rangle\langle0|^{\otimes n}\right\|_{W_1} \ge \mathrm{Tr}\left[\hat{\rho}\,\hat{H}^{(\mathrm{wass})}\right]\,. \end{equation}
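As a concrete illustration of the bound above (this sketch is ours and purely illustrative; the number of qubits and the test state are arbitrary choices), the observable $\hat{H}^{(\mathrm{wass})}$ can be assembled and its expectation value evaluated with a few lines of NumPy:
\begin{verbatim}
import numpy as np
from functools import reduce

def hamming_weight_observable(n):
    # H^(wass): sum over qubits i of |1><1| acting on qubit i (identity elsewhere).
    proj1, eye = np.diag([0.0, 1.0]), np.eye(2)
    return sum(reduce(np.kron, [proj1 if j == i else eye for j in range(n)])
               for i in range(n))

# Illustrative 3-qubit product state: each qubit is |1> with probability 0.2.
n = 3
single = np.array([np.sqrt(0.8), np.sqrt(0.2)])
psi = reduce(np.kron, [single] * n)
rho = np.outer(psi, psi)

H_wass = hamming_weight_observable(n)
print(np.trace(rho @ H_wass))   # ~0.6: lower bound on the QW1 distance from |000>
\end{verbatim}
For the chosen product state the bound evaluates to $0.6$, i.e. the expected number of logical ones in the state.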
$QW_1$ has found several applications in quantum information theory and many-body quantum physics, among which we mention a proof of the equivalence between the microcanonical and the canonical ensembles of quantum statistical mechanics \cite{de2022quantum} and a proof of limitations of VQA~\cite{de2022limitations,chou2022limitations}. Furthermore, $QW_1$ has been extended to quantum spin systems on infinite lattices \cite{de2022wasserstein}. In the context of quantum state tomography, the quantum $W_1$ distance has been employed as a quantifier of the quality of the learned quantum state and has led to efficient algorithms to learn Gibbs states of local quantum Hamiltonians \cite{rouze2021learning,maciejewski2021exploring,onorati2023efficient}. In the context of quantum machine learning, the quantum $W_1$ distance has been employed as a cost function of the quantum version of generative adversarial networks \cite{gdp2021ghz,herr2021anomaly,anschuetz2022beyond,coyle2022machine}.
\subsection{Related approaches}
Several quantum generalizations of optimal transport distances have been proposed. One line of research by Carlen, Maas, Datta and Rouz\'e \cite{carlen2014analog,carlen2017gradient,carlen2020non,rouze2019concentration,datta2020relating,van2020geometrical,wirth2022dual} defines a quantum Wasserstein distance of order $2$ from a Riemannian metric on the space of quantum states based on a quantum analog of a differential structure. Exploiting their quantum differential structure, Refs. \cite{rouze2019concentration,carlen2020non,gao2020fisher} also define a quantum generalization of the Lipschitz constant and of the Wasserstein distance of order $1$. Alternative definitions of quantum Wasserstein distances of order $1$ based on a quantum differential structure are proposed in Refs.~\cite{chen2017matricial,ryu2018vector,chen2018matrix,chen2018wasserstein}. Refs.~\cite{agredo2013wasserstein,agredo2016exponential,ikeda2020foundation} propose quantum Wasserstein distances of order $1$ based on a distance between the vectors of the canonical basis.
Another line of research by Golse, Mouhot, Paul and Caglioti~\cite{golse2016mean,caglioti2021towards,golse2018quantum,golse2017schrodinger,golse2018wave, caglioti2019quantum,friedland2021quantum, cole2021quantum, duvenhage2021optimal,bistron2022monotonicity,van2022thermodynamic} arose in the context of the study of the semiclassical limit of quantum mechanics and defines a family of quantum Wasserstein distances of order $2$. Ref.~\cite{de2021quantumAHP} proposes another quantum Wasserstein distance of order $2$ where the optimal transport is implemented with quantum channels.
The quantum Wasserstein distance between two quantum states can be defined as the classical Wasserstein distance between the probability distributions of the outcomes of an informationally complete measurement performed on the states, which is a measurement whose probability distribution completely determines the state. This definition has been explored for Gaussian quantum systems with the heterodyne measurement in Refs.~\cite{zyczkowski1998monge,zyczkowski2001monge,bengtsson2017geometry}.
\section{Preliminaries on QEC}\label{sec:introQEC} Let $Q$ be a quantum register we wish to protect (at least in part) from the action of some external noise source. In a typical QEC scenario~\cite{nielsen00} this problem is addressed through the following three-step procedure: \begin{itemize}
\item[{\it i)}] Before the action of the noise, a unitary encoding gate $\hat{V}_{QA}$ is used to distribute the information originally contained in $Q$ on the larger system $QA$. Here $A$ is an auxiliary quantum register that is assumed to be initialized in a fiduciary quantum state, and that is affected by the same noise that tampers with $Q$; \item[{\it ii)}] After the action of the noise, a measurement on $QA$ is performed to reveal the nature of the latter and, based on the associated outcome, a unitary recovery operation is applied to the system. Equivalently this step can be described by introducing an extra quantum register $B$ (also initialized on a fiduciary state but {\it not} affected by the noise) that is coupled with $QA$ through a recovery unitary transformation $\hat{W}_{QAB}$ which effectively mimics the measurement and the recovery operation; \item[{\it iii)}] The inverse of the gate $\hat{V}_{QA}$ is finally used on $QA$ to refocus the recovered information in $Q$.
Denoting with $|\psi\rangle_Q$ the input state of $Q$, the corresponding output state of $QA$ that emerges from the process at the end of the step {\it iii)} can be expressed as the density matrix \begin{eqnarray} &&\!\!\!\!\!\!\!\!\!\!\!\!\hat{\rho}^{(V,W)}_{QA}(\psi) := {\tr}_B\Big\{ {\cal V}^\dag_{QA}\circ {\cal W}_{QAB} \circ \Phi_{Q A} \\ \nonumber
&&\quad \qquad \quad \circ {\cal V}_{QA} \Big(|\psi\rangle_{Q}\langle \psi| \otimes |\O\rangle_{A}\langle \O| \otimes |\O\rangle_{B}\langle \O|\Big) \Big\} \\ &&\quad \quad := {\cal V}^\dag_{QA}\circ \Phi^{(R)}_{QA} \circ
\Phi_{Q A} \circ {\cal V}_{QA} \Big(|\psi\rangle_{Q}\langle \psi| \otimes |\O\rangle_{A}\langle \O| \Big) \nonumber \end{eqnarray}
where $|\O\rangle_X$ represents the fiduciary state of the $X$ register, ${\tr}_B\{\cdots\}$ is the partial trace over $B$, and given a unitary $\hat{U}_X$ on $X$ we adopted the symbol
${\cal U}_X(\cdots) := \hat{U}_X\cdots \hat{U}_X^\dag$ to denote its action as super-operator. In the above expressions $\Phi_{QA}$ is the LCPT
quantum channel~\cite{nielsen00} describing the noise on $Q$ and $A$, while $ \Phi^{(R)}_{QA} (\cdots) : ={\tr}_B\{ {\cal W}_{QAB} (\cdots \otimes |\O\rangle_B\langle \O|)\}$ is the LCPT (recovery) quantum channel on $QA$ originating from the interaction with $B$, that attempts to undo the action of~$\Phi_{QA}$.
An ideal QEC procedure able to completely remove the noise from the system will make sure that $\hat{\rho}^{(V,W)}_{QA}(\psi)$ corresponds to $|\psi\rangle_{Q}|\O\rangle_A$,
irrespective of the specific choice of $|\psi\rangle_{Q}$. A bona-fide figure of merit to characterize the effectiveness of a generic QEC scheme is hence provided by the average input-output fidelity \begin{eqnarray}\label{defFAV}
\overline{F}{(V,W)}&:=& \int d\mu_{\psi}\; {_{Q}\langle} \psi | {_{A}\langle} \O| \hat{\rho}^{(V,W)}_{QA}(\psi) |\psi\rangle_Q|\O\rangle_A \;, \end{eqnarray} where $d\mu_{\psi}$ is the uniform measure on the set of the input states of $Q$ originating from the Haar measure on the associated unitary group~\cite{vinberg_groups_representations} or from an exact or approximate unitary 2-design ${\cal S}$~\cite{2design_definition,nielsen00}
that simulates the latter\footnote{We remind that a unitary $2$-design is a probability distribution over the set of unitary operators which can duplicate properties of the probability distribution over the Haar measure for polynomials of degree $2$ or less. When $Q$ is a single qubit, a 2-design can be realized by a uniform sampling over a set ${\cal S}$ composed of only 6 elements $\iid$, $\hat{\sigma}_1$, $e^{\pm i \pi/4 \hat{\sigma}_1}$, $e^{\pm i \pi/4 \hat{\sigma}_2}$ that maps its logical state $|0\rangle_Q$ into the vectors $\ket{0}_Q, \ket{1}_Q, (\ket{0}_Q\pm i \ket{1}_Q)/\sqrt{2}, (\ket{0}_Q\mp \ket{1}_Q)/\sqrt{2}$.}. Notice that by expressing $|\psi\rangle_Q= \hat{U}_Q |\O\rangle_Q$, Eq.~(\ref{defFAV}) can equivalently be cast in the more compact form
\begin{eqnarray}\label{defFAV1}
\overline{F}{(V,W)}&=& {_{QA}\langle}\O| \hat{\rho}^{(V,W)}_{QA} |\O\rangle_{QA}\;, \end{eqnarray}
with $|\O\rangle_{QA}:=|\O\rangle_Q\otimes |\O\rangle_A$ and where the state
\begin{eqnarray}
\hat{\rho}^{(V,W)}_{QA} &:=&\frac{1}{|{\cal S}|} \sum_{\hat{U}_Q\in {\cal S}} \; {\cal U}^\dag_{Q} \circ {\cal V}^\dag_{QA}\circ \Phi^{(R)}_{QA} \circ \Phi_{Q A} \nonumber \\ &&\circ\; {\cal V}_{QA}
\circ \; {\cal U}_{Q} \Big(|\O\rangle_{QA}\langle \O| \Big)\;, \label{SAMP}
\end{eqnarray}
now includes the average over all possible inputs.
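For the single-qubit case the 2-design ${\cal S}$ recalled in the footnote above can be listed explicitly. The following minimal sketch (ours, meant only as a sanity check of the construction) enumerates the six states $\hat{U}_Q|0\rangle_Q$ and verifies that the uniform average of the associated projectors reproduces the maximally mixed state:
\begin{verbatim}
import numpy as np

ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)
s = 1 / np.sqrt(2)

# The six states U|0> generated by the set S of the footnote:
# identity, sigma_1, exp(+-i pi/4 sigma_1), exp(+-i pi/4 sigma_2).
states = [ket0,
          ket1,
          s * (ket0 + 1j * ket1), s * (ket0 - 1j * ket1),
          s * (ket0 - ket1),      s * (ket0 + ket1)]

avg = sum(np.outer(v, v.conj()) for v in states) / len(states)
print(np.round(avg, 6))   # ~ identity/2: the uniform average over S is unbiased
\end{verbatim}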
An ideal QEC procedure will enable one to get $\overline{F}{(V,W)}=1$. A natural benchmark for the lowest admissible value of $\overline{F}{(V,W)}$ is instead provided by the fidelity one would obtain by not performing any correction on the register, which we compute by
setting $\hat{V}_{QA}$ and $\hat{W}_{QAB}$ equal to the identity operators,
i.e.\footnote{Equation~(\ref{fdffs}) accounts for the noise effects both on $Q$ {\it and} $A$. A more conservative estimation of $\overline{F}_0$ can be obtained by focusing directly on the noise on $Q$ alone, i.e. tracing out the $A$ component of $\hat{\rho}^{(\openone,\openone)}_{QA}$ and studying its fidelity with $|\O\rangle_Q$, i.e.
$\overline{F}^{(\rm strong)}_0 := {_{Q}\langle}\O| \hat{\rho}^{(\openone,\openone)}_{Q} |\O\rangle_{Q}\geq \overline{F}_0$, with $\hat{\rho}^{(\openone,\openone)}_{Q}:=\tr_A\hat{\rho}^{(\openone,\openone)}_{QA}$. Notice that for the noise model of Sec.~\ref{sec:noise} the two are directly connected via the identity
$\overline{F}_0= \overline{F}^{(\rm strong)}_0 - \frac{n-1}{n} p (1-|\langle 0|\hat{\sigma}|0\rangle|^2)$.}
\begin{eqnarray} \label{fdffs}
\overline{F}_0 := {_{QA}\langle}\O| \hat{\rho}^{(\openone,\openone)}_{QA} |\O\rangle_{QA} \;.
\end{eqnarray}
\begin{figure}\label{fig:solution_3qubit_V}
\end{figure}
\begin{figure}
\caption{(Color online) Sketch of the variational quantum algorithm: $Q$, $A$ and $B$ are quantum registers
formed respectively by $k$, $n-k$ and $r$ qubits.
The initial information we wish to protect is written in $Q$ by the unitary gate $\hat{U}_Q(j)$ extracted from a 2-design set ${\cal S}$;
$A$ and $B$ are two auxiliary elements (containing respectively $n-k$ and $r$ qubits) that are used to implement the QEC procedure described by the parametric gates $\hat{V}_{QA}(\vec{\alpha})$,
$\hat{W}_{QAB}(\vec{\beta})$, and $\hat{V}_{QA}^\dag(\vec{\alpha})$ of Fig.~\ref{fig:solution_3qubit_V}. The patterned element in the central part of the scheme represents the noise on $Q$ and $A$ (no noise is assumed to be active on $B$).
Lastly, the D-shaped measurements at the end of the circuit represent local measurements on $QA$ whose outcomes, collected over
the entire set of possible inputs generated by ${\cal S}$,
are processed by a classical computer which, evaluating the cost function $C(\vec{\alpha},\vec{\beta})$ defined in \cref{sec:descent_algorithm},
decides how to update the values of the parameters $\vec{\alpha}$ and $\vec{\beta}$.
Thick grey lines in the figure represent classical control lines.}
\label{fig:general_scheme_qcircuit}
\end{figure}
\section{Variational Quantum Algorithm}\label{sec:general} While enormous progress has been made in the study of QEC procedures, identifying
efficient choices for the operations that lead to (non-trivial) high values of $\overline{F}{(V,W)}$ for a
specific noise model is still a challenging open problem. A possible solution, in this case, is to employ variational quantum algorithms to run numerical searches. Our approach follows a training strategy
inspired by the work of Johnson \emph{et al.}~\cite{johnson2017qvector}.
Assuming hence $Q$, $A$, and $B$ to be formed by collections of independent qubits ($k$ for
$Q$, $n-k$ for $A$, and $r$ for $B$), we introduce a manifold of transformations $\hat{V}_{QA}(\vec{\alpha})$, $\hat{W}_{QAB}(\vec{\beta})$ parametrized by classical control vectors $\vec{\alpha}$, $\vec{\beta}$ (see \cref{fig:solution_3qubit_V}),
and construct the quantum circuit of~\cref{fig:general_scheme_qcircuit}. The method then proceeds along the following stages:
\begin{enumerate} \item Having selected the values of $\vec{\alpha}$ and~$\vec{\beta}$, the register $Q$
is prepared into a collection of known quantum states $\{|\psi{(1)}\rangle_Q, \cdots , |\psi{(m)}\rangle_Q\}$ by operating on the vector $|\O\rangle_Q=|0\rangle^{\otimes k}$ through the action of the control
gates $\hat{U}_Q{(1)},\cdots, \hat{U}_Q{(m)}$ (first cyan element of the figure) which define the 2-design ${\cal S}$ entering in Eq.~(\ref{SAMP}). Each of these inputs is then evolved via a circuit (pale-orange area of the figure)
that emulates both the effect of the noise (patterned square of the figure, see~\cref{sec:noise} and Fig.~\ref{fignoise}), and the transformations $\hat{V}_{QA}(\vec{\alpha})$,
$\hat{W}_{QAB}(\vec{\beta})$, and $\hat{V}_{QA}^\dag(\vec{\alpha})$ that are meant to implement the steps {\it ii)} and {\it iii)} of the
QEC procedure (green and red elements of the figure). Notice that in the ideal case (i.e. if $\hat{V}_{QA}(\vec{\alpha})$ and $\hat{W}_{QAB}(\vec{\beta})$ manage to completely suppress the noise)
then in correspondence with the input $|\psi{(j)}\rangle_Q$ the registers $QA$
should emerge in the state $|\psi{(j)}\rangle_Q\otimes |\O\rangle_A :=|\psi{(j)}\rangle_Q\otimes |0\rangle^{\otimes n-k}$, which
will be hence mapped into the final configuration $|\O\rangle_{QA}:=|0\rangle^{\otimes n}$ by the inverse $\hat{U}_Q^\dag(j)$ of the state preparation gate (second cyan element of the figure).
\item For each choice of the index $j\in\{1,\cdots,m\}$ a measurement on the system is performed at the end of the transformations described in stage 1, and the resulting $m$ collected outcomes are used to compute a cost function $C(\vec{\alpha},\vec{\beta})$ which evaluates the effectiveness of the adopted QEC strategy in leading to large values of the average input-output fidelity. The specific choice of the cost function is very important and is discussed in \cref{sec:cost_function}.
\item A classical computer decides, given the results of the measurement, how to change the value of the parameters $\vec{\alpha}$ and $\vec{\beta}$ to be used in the subsequent run in order to minimize the cost function $C(\vec{\alpha},\vec{\beta})$. This is discussed in detail in \cref{sec:descent_algorithm}.
\end{enumerate}
\subsection{Cost function}\label{sec:cost_function} The natural choice for the cost function at stage 2 of our algorithm is provided by the expectation value of the self-adjoint operator \begin{eqnarray}\label{Hfid} \hat{H}^{(\rm fid)}_{QA}:= \iid_{QA} - \ket{\O}_{QA}\bra{\O}\;,\end{eqnarray} computed on the mean state of the system $QA$ which emerges at the output of the quantum circuit of~\cref{fig:general_scheme_qcircuit}, i.e. the quantity \begin{eqnarray} \label{costfid} C^{(\rm fid)}(\vec \alpha, \vec \beta) &:=& \tr\{\rrho_{QA}^{(V(\vec{\alpha}),W(\vec{\beta}))} \hat{H}^{(\rm fid)}_{QA}\} \;, \end{eqnarray} where $\rrho_{QA}^{(V(\vec{\alpha}),W(\vec{\beta}))}$ is the density matrix (\ref{SAMP}) evaluated for
$\hat{V}_{QA}= \hat{V}_{QA}(\vec{\alpha})$ and $\hat{W}_{QAB}= \hat{W}_{QAB}(\vec{\beta})$. This choice has two main advantages. First of all, the expectation value $C^{(\rm fid)}(\vec \alpha, \vec \beta)$ can be evaluated by performing (simple) local measurements on the qubits of $Q$ and $A$ (indeed it can be computed by simply checking whether or not each one of them is in the logical state $|0\rangle$). Most importantly, since by explicit evaluation one has that $C^{(\rm fid)}(\vec \alpha, \vec \beta) = 1 - \overline{F}{(V(\vec{\alpha}),W(\vec{\beta}))}$, it is clear that by using (\ref{costfid}) the algorithm will be forced to look for values of $\vec \alpha$, $\vec \beta$ that yield higher average input-output fidelities. Despite all this, the use of $C^{(\rm fid)}(\vec \alpha, \vec \beta)$ as a cost function has a major drawback, associated with the fact that the spectrum of the Hamiltonian $\hat{H}^{(\rm fid)}_{QA}$ exhibits maximum degeneracy with respect to the space orthogonal to the target state $|\O\rangle_{QA}$ (see Fig.~\ref{figurespectra}). Due to this fact a numerical search based on a training procedure that simply targets the minimization of $C^{(\rm fid)}(\vec \alpha, \vec \beta)$ has non-trivial chances to
get stuck somewhere in the large flat plateau associated with the eigenvalue 1 of $\hat{H}^{(\rm fid)}_{QA}$ without finding any good direction.
A possible way to avoid this problem is to introduce new cost-function Hamiltonians which, while maintaining the target vector $|\O\rangle_{QA}$ as a unique ground state and still being easy to compute, manage to remove the huge degeneracy of the excited part of the spectrum of $\hat{H}^{(\rm fid)}_{QA}$. Our choice is based on the quantum Wasserstein distance of order 1 ($W_1$) introduced in Ref.~\cite{gdp_wasserstein_order_1} which, even though it lacks some interesting properties that the fidelity has, is less likely to be affected by the barren plateau phenomenon~\cite{cerezo_barren_2021}. As mentioned in Sec.~\ref{sec:W1},
a good estimation of the $W_1$ distance that separates $\rrho_{QA}^{(V(\vec{\alpha}),W(\vec{\beta}))}$ from the target state is provided by the following quantity \begin{eqnarray}
C^{(\rm wass)} (\vec \alpha, \vec \beta) &:=& \tr\{\rrho_{QA}^{(V(\vec{\alpha}),W(\vec{\beta}))} \hat{H}_{QA}^{(\rm wass)} \}\;, \label{eq:cost_function} \\ \label{Hwass}
\hat{H}_{QA}^{(\rm wass)} &:=&
\dsum_{j = 1}^n j \; \hat{\Pi}_{QA}^{(j)} \;, \end{eqnarray} where $\hat{H}_{QA}^{(\rm wass)}$ is the Hamiltonian~(\ref{eq:HW}) which we express here in terms of the projectors
$\hat{\Pi}^{(j)}_{QA}$ on the sub-space of the register $QA$ in which $j$ qubits are in $|1\rangle$ and the remaining ones in $|0\rangle$. Observe that, as already anticipated,
$\hat{H}_{QA}^{(\rm wass)}$ is the sum of the number operators acting on the individual qubits of the register $QA$ as in \eqref{eq:HW}: accordingly, as for $C^{(\rm fid)} (\vec \alpha, \vec \beta)$, $C^{(\rm wass)} (\vec \alpha, \vec \beta)$ can be computed from local measurements. What $C^{(\rm wass)} (\vec \alpha, \vec \beta)$ does is to count the total number of logical ones present in the system.
To understand why using~(\ref{eq:cost_function}) could in principle lead to a more efficient numerical search than the one obtained by using (\ref{costfid}), notice that Eq.~(\ref{Hfid}) can be equivalently written as $\hat{H}_{QA}^{(\rm fid)} = \dsum_{j = 1}^n \hat{\Pi}^{(j)}_{QA}$. A comparison with~(\ref{Hwass}) hence reveals that, while both $\hat{H}_{QA}^{(\rm fid)}$ and
$\hat{H}_{QA}^{(\rm wass)}$ admit $|\O\rangle_{QA}$ as a unique ground state, the Wasserstein Hamiltonian removes a large part of the degeneracy of the high energy spectrum of the fidelity Hamiltonian. Accordingly, it is reasonable to expect that a numerical search that uses $\hat{H}_{QA}^{(\rm wass)}$ has fewer chances to get trapped in regions of constant energy (barren plateaus) than a search based on $\hat{H}_{QA}^{(\rm fid)}$\footnote{It goes without saying that alternative choices for the cost function Hamiltonians are also available. For instance, one can use operators that also remove the residual degeneracies that affect $\hat{H}_{QA}^{(\rm wass)}$ -- e.g. using the operator
$\hat{H}_{QA}^{(\rm full)} = \sum_{\ell =1}^n w_{\ell} \hat{\pi}_\ell$, with $\hat{\pi}_\ell$ the projector onto $|1\rangle$ of the $\ell$-th qubit and $w_\ell$ positive weights selected so that different allocations of $|1\rangle$ states inside the eigenspaces of $\hat{H}_{QA}^{(\rm wass)}$ get an assigned ordering. Our numerical analysis however seems to indicate that these refinements do not contribute significantly to improving the numerical search of the algorithm.}.
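The different degeneracy structure of the two Hamiltonians can be made explicit already for the $n=3$ register used in our simulations. The following short sketch (ours, purely illustrative) lists the spectra of $\hat{H}_{QA}^{(\rm fid)}$ and $\hat{H}_{QA}^{(\rm wass)}$ in the computational basis, where both operators are diagonal (cf. Fig.~\ref{figurespectra}):
\begin{verbatim}
from itertools import product

n = 3
basis = list(product([0, 1], repeat=n))          # computational basis of QA
weights = [sum(b) for b in basis]                # number of qubits in |1>

H_fid_spectrum  = sorted(0 if w == 0 else 1 for w in weights)
H_wass_spectrum = sorted(weights)

print(H_fid_spectrum)    # [0, 1, 1, 1, 1, 1, 1, 1] -> one 7-fold degenerate plateau
print(H_wass_spectrum)   # [0, 1, 1, 1, 2, 2, 2, 3] -> degeneracy partially removed
\end{verbatim}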
\begin{figure}
\caption{Pictorial rendering of the spectra of the Hamiltonians $\hat{H}_{QA}^{(\rm fid)}$ (top panel) and
$\hat{H}_{QA}^{(\rm wass)}$ (lower panel). While $\hat{H}_{QA}^{(\rm fid)}$ is characterized by a unique, flat plateau
that includes all the excited states, $\hat{H}_{QA}^{(\rm wass)}$ partially removes the associated degeneracy, assigning
higher energy to subspaces that have a higher number of qubits in the logical state $|1\rangle$.
}
\label{figurespectra}
\end{figure}
\subsection{Descent algorithm}\label{sec:descent_algorithm} The algorithm that we used for this work is a gradient descent algorithm with momentum~\cite{NoceWrig06}. To overcome the numerical difficulties of using finite differences to estimate the gradients of the cost function $C(\vec{\alpha},\vec{\beta})$, we exploit a variation of the parameter-shift rule introduced in~\cite{schuld_analytical_gradients}, which reduces the problem to computing linear combinations of the function itself evaluated at points that are not infinitesimally close.
Specifically, we observe that, irrespective of the choice of the operator $\hat{H}_{QA}$, the functional dependence of $C(\vec \alpha, \vec \beta)$ upon the $j$-th component of the vector $\vec{\beta}$ is of the form \begin{equation}
\label{eq:example_parameter_shift}
C(\vec \alpha, \vec \beta) = f(\beta_j):=\sum_{k} \tr\big\{\hat{\Omega}^{(k)}_1e^{i\beta_j \hat{\sigma}}\hat{\Omega}^{(k)}_2 e^{-i\beta_j\hat{\sigma}}\big\}, \end{equation} with $\hat{\Omega}_{1,2}^{(k)}$ being multi-qubit operators which do not depend upon $\beta_j$, and with $e^{-i\beta_j \hat{\sigma}}$ a single-qubit rotation generated by an element $\hat{\sigma}$ of the Pauli set. Therefore its gradient can be written as \begin{eqnarray}
\label{eq:gradient_example_parameter_shift}
\frac{\partial C(\vec \alpha, \vec \beta) }{\partial \beta_j} &=& i \sum_k \tr\big\{ \hat{\Omega}^{(k)}_1e^{i\beta_j \hat{\sigma}} [\hat{\sigma},\hat{\Omega}^{(k)}_2] e^{-i\beta_j \hat{\sigma}}\big\} \nonumber \\
&=& f(\beta_j + \tfrac{\pi}{4}) - f(\beta_j - \tfrac{\pi}{4})\;, \end{eqnarray} where in the last passage we used the identity \begin{align}\label{eed}
i [\hat{\sigma},\hat{\Omega}^{(k)}_2]= e^{i\frac{\pi}{4} \hat{\sigma}} \hat{\Omega}^{(k)}_2 e^{-i\frac{\pi}{4}\hat{\sigma}} - e^{-i\frac{\pi}{4} \hat{\sigma}} \hat{\Omega}^{(k)}_2 e^{i\frac{\pi}{4}\hat{\sigma}}. \end{align} The gradient with respect to the vector $\vec{\alpha}$ can be computed similarly. In this case, however, we observe that, since $\hat{\rho}^{(V(\vec{\alpha}),W(\vec{\beta}))}_{QA}(\psi)$ depends upon the parameters $\vec{\alpha}$ both via $\hat{V}_{QA}(\vec{\alpha})$ and through its adjoint $\hat{V}_{QA}^\dag(\vec{\alpha})$,
the dependence of $C(\vec \alpha, \vec \beta)$ upon the $j$-th component of $\vec{\alpha}$ is slightly more complex. Indeed in this case we have \begin{eqnarray}
\label{eq:example_parameter_shift_alpha}
C(\vec \alpha, \vec \beta) &=& g(\alpha_j,\alpha_j) \;, \end{eqnarray}
where $g(\alpha^{(1)}_j,\alpha^{(2)}_j)$ is the function
\begin{eqnarray} g(\alpha^{(1)}_j,\alpha^{(2)}_j)
:=\sum_k &&\tr\big\{\hat{\Omega}^{(k)}_1e^{i\alpha^{(1)}_j \hat{\sigma}}\hat{\Omega}^{(k)}_2 e^{-i\alpha^{(1)}_j\hat{\sigma}} \\\nonumber
&& \qquad \times \hat{\Omega}^{(k)}_3 e^{i\alpha^{(2)}_j\hat{\sigma}}
\hat{\Omega}^{(k)}_4 e^{-i\alpha^{(2)}_j\hat{\sigma}}
\big\}\;, \end{eqnarray} with $\hat{\Omega}^{(k)}_{1,2,3,4}$ representing multi-qubit operators which depend neither upon
$\alpha_j^{(1)}$ nor upon $\alpha_{j}^{(2)}$. It is important to stress that $g(\alpha^{(1)}_j,\alpha^{(2)}_j)$ can be computed using the same circuit
of Fig.~\ref{fig:general_scheme_qcircuit}, by simply replacing the phases $\alpha_j$ of $\hat{V}_{QA}(\vec{\alpha})$ and $\hat{V}_{QA}^\dag(\vec{\alpha})$ with
$\alpha_j^{(1)}$ and $\alpha_j^{(2)}$ respectively. Notice finally that exploiting the identity Eq.~(\ref{eed}) we can write
\begin{eqnarray}
\label{eq:gradient_example_parameter_shift_alpha}
\frac{\partial C(\vec \alpha, \vec \beta) }{\partial \alpha_j} &=& \left. \frac{\partial g(\alpha^{(1)}_j,\alpha_j)}{\partial \alpha^{(1)}_j}\right|_{\alpha_j^{(1)}=\alpha_j}
+ \left.\frac{\partial g(\alpha_j,\alpha^{(2)}_j)}{\partial \alpha^{(2)}_j}\right|_{\alpha_j^{(2)}=\alpha_j}
\\
&=& g(\alpha_j + \tfrac{\pi}{4},\alpha_j) - g(\alpha_j -\tfrac{\pi}{4},\alpha_j) \nonumber \\
&+& g(\alpha_j,\alpha_j + \tfrac{\pi}{4}) - g(\alpha_j,\alpha_j -\tfrac{\pi}{4})\nonumber\;, \end{eqnarray} which shows that computing the gradient of $C(\vec \alpha, \vec \beta)$ with respect to $\alpha_j$ simply amounts to evaluating the circuit that expresses $g(\alpha_j^{(1)},\alpha_j^{(2)})$ at four distinct values of the parameters.
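As a simple consistency check of the shift rule above (written for the convention $e^{-i\beta_j\hat{\sigma}}$ adopted here, which yields shifts of $\pm\pi/4$), the following sketch (ours; the single-qubit toy cost function is an arbitrary choice) compares the parameter-shift estimate with the analytic derivative:
\begin{verbatim}
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)

def cost(beta):
    # Toy cost C(beta) = <psi| X |psi> with |psi> = exp(-i beta Z)|+>, i.e. cos(2 beta).
    U = np.diag([np.exp(-1j * beta), np.exp(1j * beta)])   # exp(-i beta Z)
    psi = U @ plus
    return float(np.real(psi.conj() @ X @ psi))

def parameter_shift(f, beta):
    # dC/dbeta = f(beta + pi/4) - f(beta - pi/4) for rotations exp(-i beta sigma).
    return f(beta + np.pi / 4) - f(beta - np.pi / 4)

beta = 0.3
print(parameter_shift(cost, beta))   # exact derivative, no finite-difference error
print(-2 * np.sin(2 * beta))         # analytic derivative, same value (~ -1.129)
\end{verbatim}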
\subsection{Noise model}\label{sec:noise} The scheme presented so far can in principle be applied to arbitrary classes of noises. In our research, however, we focused on a specific model that has been extensively studied in the literature, producing explicit examples of efficient QEC solutions which can be used as a theoretical benchmark for our variational search. Specifically we assume $Q$ and $A$ to be respectively a single-qubit register ($k=1$) and a two-qubit register ($n=3$), globally affected by a given species of single-qubit noise~\cite{gottesman_2009,Knill_2001}. These transformations can be represented in terms of a LCPT map of the form \begin{eqnarray} \label{noise} \Phi_{QA}(\cdots) = \sum_{\ell=0}^{n} \hat{K}^{(\ell)}_{QA} \cdots \hat{K}^{(\ell)\dag}_{QA}\;, \end{eqnarray} with Kraus operators~\cite{nielsen00} \begin{align} \label{kraus}
\hat{K}_{QA}^{(0)} := \sqrt{1 - p} \; \iid_{QA} \;, \qquad \hat{K}_{QA}^{(\ell)}:= \sqrt{\frac{p}{n}} \; \hat{\sigma}^{(\ell)}\;, \end{align} where for $\ell\in\{1,\cdots,n\}$, $\hat{\sigma}^{(\ell)}$ is the Pauli operator acting on the $\ell$-th qubit of $QA$ which defines the noise species we have selected. For instance, if we choose to describe phase-flip noise then $\hat{\sigma}^{(\ell)}=\hat{\sigma}^{(\ell)}_3$, while to describe bit-flip noise we have $\hat{\sigma}^{(\ell)}=\hat{\sigma}^{(\ell)}_1$. Explicit examples of $\hat{V}_{QA}$, $\hat{W}_{QAB}$ which allow for exact suppression of the noise ($\overline{F}{(V,W)}=1$) are shown in Fig.~\ref{FIGexact}. Notice that by construction the circuit parametrization of
$\hat{V}_{QA}(\vec \alpha), \hat{W}_{QAB}(\vec \beta)$ given in Fig.~\ref{fig:general_scheme_qcircuit} includes such gates
as a special solution: accordingly, if properly guided by an efficient cost function, our numerical VQA search has a chance to
find the solution of Fig.~\ref{FIGexact}.
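For completeness, the action of the channel (\ref{noise})--(\ref{kraus}) is straightforward to reproduce numerically. The sketch below (ours, purely illustrative) applies the bit-flip version of the noise to the state $|000\rangle$ of $QA$, returning the expected mixture of the unflipped state and of the three single-flip states:
\begin{verbatim}
import numpy as np
from functools import reduce

I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)

def bitflip_channel(rho, p, n=3):
    # Kraus map: K0 = sqrt(1-p) Id, K_l = sqrt(p/n) sigma_x on qubit l.
    out = (1 - p) * rho
    for ell in range(n):
        K = np.sqrt(p / n) * reduce(np.kron,
                                    [sx if j == ell else I2 for j in range(n)])
        out += K @ rho @ K.conj().T
    return out

n, p = 3, 0.8
rho0 = np.zeros((2**n, 2**n), dtype=complex)
rho0[0, 0] = 1.0                         # |000><000|
rho = bitflip_channel(rho0, p, n)
print(np.real(np.diag(rho)))             # weight 1-p on |000>, p/3 on each single flip
\end{verbatim}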
\begin{figure}
\caption{Circuital implementation of the noise element of Fig.~\ref{fig:general_scheme_qcircuit}: here $\hat{K}_{QA}^{(\ell)}$
are weighted unitaries of Eq.~(\ref{kraus}).
}
\label{fignoise}
\end{figure}
\begin{figure}
\caption{Circuital implementations of the ideal transformations
$\hat{V}_{QA}(\vec \alpha)$ (left) and $\hat{W}_{QAB}(\vec \beta)$ (right) which allow for exact noise suppression
of a single-qubit bit-flip noise model [i.e. (\ref{noise}) with $\hat{\sigma}^{(\ell)}=\hat{\sigma}^{(\ell)}_1$]
using a quantum register $B$ with $r=2$ qubits. Here $H$ represents Hadamard gates, while the controlled elements are C-NOT
gates.
}
\label{FIGexact}
\end{figure}
\section{Results}\label{sec:res}
\begin{figure}
\caption{Comparison of the input-output average fidelity~(\ref{defFAV1})
attainable by running our optimization algorithm
using the cost function $C^{(\rm fid)}(\vec \alpha, \vec \beta)$ (blue data) and $C^{(\rm wass)}(\vec \alpha, \vec \beta)$ (orange data).
Here the error model is a single-qubit bit-flip noise ($\hat{\sigma}=\hat{\sigma}_1$ in (\ref{noise})) with $p = 0.8$. The no error correction threshold (\ref{fdffs}) of this scheme is $\overline{F}_0 \approx 0.822$ -- orange peak in the fidelity plot, up to numerical precision. Only the runs that produced a fidelity of at least $\overline{F}_0$ have been included: for $C^{(\rm fid)}(\vec \alpha, \vec \beta)$ this happens in $0.2\%$ of the runs, while for $C^{(\rm wass)}(\vec \alpha, \vec \beta)$ in $29.6\%$ of them.}
\label{fig:qvector_trick_results_sigmax}
\end{figure}
In this section we study the impact of the cost function on the
efficiency of the optimization algorithm of Sec.~\ref{sec:general}. Assuming the single-qubit noise model detailed in Sec.~\ref{sec:noise} and taking $B$ to be a $r=2$ qubit register, we run two distinct numerical searches:
the first obtained by identifying $C(\vec \alpha, \vec \beta)$ with $C^{(\rm fid)}(\vec \alpha, \vec \beta)$ and the second choosing instead
$C^{(\rm wass)}(\vec \alpha, \vec \beta)$. Results are reported in
\cref{fig:qvector_trick_results_sigmaz,fig:qvector_trick_results_sigmax} for two different choices of the noise models~(\ref{noise}), i.e. phase-flip and bit-flip.
For both, we compare the input-output average fidelity~(\ref{defFAV1}) at the end of the procedure obtained with the two
different cost functions, and the number of iterations $M$ needed for convergence. Regarding this last quantity, we set a maximum allowed value $M_{\max}=2000$; this limit was chosen mainly on practical grounds, such as the maximum time available for the simulation, enforcing that a single run does not require more than a few hours of computational time: in case the algorithm fails to reach convergence we simply stop the numerical search (this is the reason for the peak at the end of the upper orange plot in \cref{fig:qvector_trick_results_sigmaz}). The plots report only the simulations that manage to achieve an average fidelity greater than or equal to the no-correction threshold $\overline{F}_0$.
The first thing to observe is that for both noise models, $C^{(\rm fid)}(\vec \alpha, \vec \beta)$ has problems in reaching the do-nothing threshold $\overline{F}_0$: the probability of success is $2.6\%$ for the phase-flip case of~\cref{fig:qvector_trick_results_sigmaz} and only $0.2\%$ for the bit-flip case of~\cref{fig:qvector_trick_results_sigmax} (for both noise models the total number of simulations analyzed was 500). Observe also that in this last case the algorithm never yields average input-output fidelity values strictly larger than $\overline{F}_0$ and that, even in those cases, it requires a number $M$ of iterations which saturates the maximum allowed value $M_{\max}$ (blue peak in the upper plot of \cref{fig:qvector_trick_results_sigmaz}). $C^{(\rm wass)}(\vec \alpha, \vec \beta)$ performs definitely better: to begin with, it succeeds in overcoming the threshold $\overline{F}_0$ in roughly one third of the simulations (specifically
$40.6\%$ for the phase-flip noise model and $29.6\%$ for the bit-flip noise model). Furthermore, the algorithm reaches convergence
with a number of iterations that is typically smaller than that required by $C^{(\rm fid)}(\vec \alpha, \vec \beta)$.
To better highlight the differences between the two cost functions, we proceeded with further simulations, whose results are summarized in \cref{fig:performance_start_differed}. The idea here is to run a two-step optimization process composed of two sequences of runs: in the first run we start the optimization procedure from a random point in the parameter space
$(\vec \alpha, \vec \beta)$ with one of the two cost functions (say $C^{(\rm fid)}(\vec \alpha, \vec \beta)$), up to convergence; after that we start a second optimization run using
the other cost function (say $C^{(\rm wass)}(\vec \alpha, \vec \beta)$) but assuming as initial condition for the parameters the final point reached by the first run. The
plots report the difference in fidelity between the second and the first run: when $C^{(\rm wass)}(\vec \alpha, \vec \beta)$ is used in the first run, the subsequent fidelity-based run cannot further improve the result already found, as reflected by the fact that the best improvement is of the order of $10^{-5}$; on the contrary, if we started employing $C^{(\rm fid)}(\vec \alpha, \vec \beta)$ in the first run, the use of
$C^{(\rm wass)}(\vec \alpha, \vec \beta)$ in the second run typically yields substantial improvements of the performance\footnote{It has to be said that in a few cases the figure of merit is worse after the second optimization -- see the negative bar in the right panel of~\cref{fig:performance_start_differed}. This is due to the fact that when using $C^{(\rm wass)}(\vec \alpha, \vec \beta)$ we are not maximizing the fidelity but minimizing a function whose stationary point corresponds to the maximum of the latter: accordingly the final point of convergence for $C^{(\rm wass)}(\vec \alpha, \vec \beta)$ can be slightly off the mark in terms of fidelity. This is not a problem because these two functions do not have a constant ratio, and we checked that the inequalities between them are still satisfied.}. Moreover, we sampled some single descent processes and plotted the cost as a function of the iteration number. When we move from the fidelity to $W_1$, the descent after the change of cost function is qualitatively indistinguishable from one started from a random point.
\begin{figure}
\caption{Comparison of the input-output average fidelity~(\ref{defFAV1})
attainable by running our optimization algorithm
using the cost function $C^{(\rm fid)}(\vec \alpha, \vec \beta)$ (blue data) and $C^{(\rm wass)}(\vec \alpha, \vec \beta)$ (orange data).
Here the error model is a single-qubit phase-flip noise ($\hat{\sigma}=\hat{\sigma}_3$ in (\ref{noise})) with $p = 0.8$. The no error correction threshold (\ref{fdffs}) of this scheme is $\overline{F}_0 \approx 0.822$ -- orange peak in the fidelity plot, up to numerical precision. Only the runs that produced a fidelity of at least $\overline{F}_0$ have been included: for $C^{(\rm fid)}(\vec \alpha, \vec \beta)$ this corresponds to $2.6\%$ of the runs, while for $C^{(\rm wass)}(\vec \alpha, \vec \beta)$ the success probability is $40.6\%$.}
\label{fig:qvector_trick_results_sigmaz}
\end{figure}
\onecolumngrid
\begin{figure}
\caption{Improvement of the simulations when changing the cost function in a two-run optimization process that uses different cost functions to drive the descent algorithm. In the left plot, we started the descent from a random initial point, ran the optimization using $C^{(\rm wass)}(\vec \alpha, \vec \beta)$ as cost function until convergence, and then started the descent algorithm again using $C^{(\rm fid)}(\vec \alpha, \vec \beta)$ as cost function, starting from the final point of the previous descent. In the right plot, the roles of the two cost functions are inverted (we start using $C^{(\rm fid)}(\vec \alpha, \vec \beta)$ and then we use $C^{(\rm wass)}(\vec \alpha, \vec \beta)$). The histograms represent the difference in average input-output fidelity~(\ref{defFAV}) after the change of cost function, namely the difference between the fidelity achieved after the second descent and the fidelity after the first descent (positive values correspond to improved performances). Please notice the scale difference on the $x$-axis between the left and the right plot.}
\label{fig:performance_start_differed}
\end{figure} \twocolumngrid
\section{Conclusions}\label{sec:con}
To summarize, we have presented a variational quantum algorithm that allows one to find the most suitable error correction procedure for a specific noise on quantum hardware. We compared the performance of two different versions of this algorithm using two different cost functions, the fidelity and an approximation of the quantum Wasserstein distance of order one. We compared the difference in speed and the ability to obtain a useful solution between the two versions, finding markedly different trends between the two optimization procedures. The optimization process based on the fidelity suffers greatly from the phenomenon of barren plateaus, leading to very slow convergence or no convergence at all, while the algorithm based on the quantum $W_1$ distance allows us to find the configurations that correct the errors in the examples that we explored. The obtained results show a clear improvement and motivate further refinements of these methods, such as using different algorithms for the minimization process, e.g. stochastic gradient descent or higher-order algorithms like Newton or pseudo-Newton methods.
Given that the gradient can be expressed solely in terms of the cost function evaluated on a small number of circuits that differ only in the parameter choice, the gradient of the cost function can be computed on the same hardware that will be used for the correction procedure. Moreover, simulating this circuit classically may be difficult because of the exponential scaling of the dimension of the Hilbert space of a set of qubits, but this problem does not apply when the whole circuit is run on hardware, gaining a quantum advantage. For the same reason, the same procedure can be iterated to compute the exact Hessian of the cost function and then apply a second-order method like the Newton method as a descent algorithm. However, this has not been done because the circuits that we marked as useful have a relatively large number of parameters, and computing the Hessian scales quadratically with this number, leading to intractable computations.
\subsection*{Acknowledgments}
FZ and VG acknowledge financial support by MIUR (Ministero dell’ Istruzione, dell’ Universit\`a della Ricerca) by PRIN 2017 Taming complexity via Quantum Strategies: a Hybrid Integrated Photonic approach (QUSHIP) Id. 2017SRN-BRK, and via project PRO3 Quantum Pathfinder. GDP is a member of the ``Gruppo Nazionale per la Fisica Matematica (GNFM)'' of the ``Istituto Nazionale di Alta Matematica ``Francesco Severi'' (INdAM)''. GDP has been supported by the HPC National Centre for HPC, Big Data and Quantum Computing – Proposal code CN00000013, CUP J33C22001170001, funded within PNRR - Mission 4 - Component 2 Investment 1.4.\ SL was funded by ARO and by DARPA. MM is supported by the NSF Grants No. CCF-1954960 and CCF-2237356.
\end{document} | arXiv |
\begin{document}
\title[Shock waves and rarefaction waves under periodic perturbations]{Asymptotic stability of shock waves and rarefaction waves under periodic perturbations for 1-D convex scalar conservation laws}
\author[Z. Xin]{Zhouping XIN} \thanks{This research is partially supported by Zheng Ge Ru Foundation, Hong Kong RGC Earmarked Research Grants, CUHK-14300917, CUHK-14305315, and CUHK4048/13P, and NSFC/RGC Joint Research Grant N-CUHK 443-14. } \address[Z. Xin]{The Institute of Mathematical Sciences \& Department of Mathematics, The Chinese University of Hong Kong, Shatin, N.T., Hong Kong} \email{[email protected]}
\author[Q. Yuan]{Qian YUAN} \address[Q. Yuan]{The Institute of Mathematical Sciences \& Department of Mathematics, The Chinese University of Hong Kong, Shatin, N.T., Hong Kong} \email{[email protected]}
\author[Y. Yuan]{Yuan YUAN} \address[Y. Yuan]{South China Research Center for Applied Mathematics and Interdisciplinary Studies, South China Normal University, Guangzhou, Guangdong, China} \email{[email protected]}
\subjclass[2010]{35L03, 35L65, 35L67} \keywords{conservation laws, shock waves, rarefaction waves, periodic perturbations}
\begin{abstract}
In this paper we study the large time behaviors of solutions toward shock waves and rarefaction waves under periodic perturbations for 1-D convex scalar conservation laws.
The asymptotic stability and decay rates of shock waves and rarefaction waves under periodic perturbations are proved.
\end{abstract}
\maketitle
\section{Introduction} We consider the Cauchy problem for convex scalar conservation laws in the one-dimensional case, \begin{equation}\label{equ1}
\partial_t u(x,t)+\partial_x f(u(x,t)) =0, \quad x \in (-\infty, +\infty),\quad t>0, \end{equation} \begin{equation}\label{ic1}
u|_{t=0}=
\begin{cases}
\overline{u}_l+w_0(x)& \text{if~} x<0,\\
\overline{u}_r+w_0(x)& \text{if~} x>0,
\end{cases} \end{equation} where $f(u) \in C^2(\mathbb{R}) \text{ satisfies } f''(u)>0$, $\overline{u}_l$ and $\overline{u}_r$ are two distinct constants, $w_0(x) \in L^{\infty}(\mathbb{R})$ is any periodic function with period $p>0$, and $\overline{w}$ is its average \begin{equation}\label{avg} \overline{w}\triangleq \frac{1}{p} \int_0^p w_0(x) dx. \end{equation} When $w_0(x) \equiv 0$, the problem is Riemann problem, and its entropy solutions are shock waves if $\overline{u}_l>\overline{u}_r$ or rarefaction waves if $\overline{u}_l<\overline{u}_r$. In this paper, we plan to study the asymptotic stabilities of the solutions to \eqref{equ1} and \eqref{ic1} with bounded periodic perturbation $ w_0(x) $.
The theory of convex scalar conservation laws is one of the most classical theories in PDEs, and far-reaching results have been obtained. As is well known, for any $L^{\infty}$ initial data, there exists a unique entropy solution to \eqref{equ1} in $Lip((0,+\infty),L^1_{loc}(\mathbb{R}))$ (\cite[Theorem~16.1]{smo}).
Concerning the large time behaviors of entropy solutions to \eqref{equ1}, when the initial data is in $L^\infty \cap L^1 $, the entropy solution decays to $0$ in the $L^\infty$ norm at a rate $t^{-\frac{1}{2}}$. When the initial data is bounded and has compact support, the entropy solution decays to the N-wave in the $L^1$ norm at a rate $t^{-\frac{1}{2}}$, see \cite{hopf} and \cite{Lax}. For periodic initial data, which is obviously not in $L^1$, Glimm and Lax \cite[Theorem 5.2]{GD} seem to be the first to state that the entropy solutions decay to their average at a rate $t^{-1}$.
It is well known that shock waves and rarefaction waves are two important and typical entropy solutions of genuinely nonlinear conservation laws, and their stability problems are of great interest not only in mathematics but also in physics. If the initial perturbation is compactly supported, Liu \cite{Liu1978} proved that for shock waves the perturbed solution becomes a translation of the shock wave after a finite time, while for rarefaction waves one only has that the perturbed solution converges to the centered rarefaction wave at a rate $t^{-\frac{1}{2}}$ in the $L^{\infty}$ norm.
However, when the perturbation keeps oscillating at infinity, as in the periodic case, the stability of these simple waves is still open. Here the initial data \eqref{ic1} is neither integrable nor periodic on $\mathbb{R}$.
In this paper, we prove that for any given bounded periodic perturbation, the shock wave and the rarefaction wave are both asymptotically stable. More precisely, we show that for shock waves, after a finite time, the perturbed solution actually consists of two periodic functions connected with each other along a shock curve, and this shock curve tends to the background one at a rate $t^{-1}$, see Theorem \ref{thmshock}; while for a perturbed rarefaction wave datum, the solution consists of three parts separated by two distinct characteristics, where on the two sides the solution is periodic, and the perturbed rarefaction wave tends to the background one in the $L^{\infty}$ norm at a rate $t^{-1}$, see Theorem \ref{thmrare}. The stability result for shock profiles under periodic perturbations in the viscous case will be shown in a forthcoming paper. Furthermore, we give a more precise convergence rate for the problem \eqref{equ1} with periodic initial data than Glimm and Lax \cite[Theorem 5.2]{GD}, see Theorem \ref{thmper}, and we also give a simple example \eqref{2constants} to show that this rate is optimal in some sense, see \eqref{lb1}.
To prove the main results stated above, we make much use of some properties of generalized characteristics for convex scalar conservation laws (see Proposition \ref{propgen}, Lemma \ref{lemgre}) and of the existence of divides of periodic solutions (see Proposition \ref{propdiper}), which is a very special feature of periodic solutions and plays an essential role in our proof. Such concepts and tools were developed by Dafermos in \cite{Dafe1}, \cite{Dafe}.
Finally, we present an alternative proof, inspired by the Hopf-Cole transform in \cite{hopf}, of \eqref{glue} in Theorem \ref{thmshock} when $f(u)=u^2/2$.
\section{Statement of main results} Before stating the main results of this paper, we first list the following result, which can be derived from \cite[Theorem 5.2]{GD} or \cite[Theorem 3.1]{Dafe1}. \begin{Thm*}
Suppose that $u_0\in L^{\infty}$ is a periodic function of period $p$ with its average $\overline{u}=\frac{1}{p}\int_0^p u_0(x)~dx$. Then for any $t>0$, the entropy solution $u(x,t)$ to \eqref{equ1} with initial data $u_0(x)$ is also periodic of period $p$ with the same average $\overline{u}$, and also
\begin{equation}\label{lb13}
|u(x,t)-\overline{u}| \leq \frac{C}{t}, \quad \forall~ t>0,
\end{equation}
where $C$ depends on $ p, \overline{u}, f$. \end{Thm*}
\begin{Rem}
For the asymptotic behavior of periodic solutions, after Glimm and Lax's result, Dafermos \cite[Theorem 3.1]{Dafe1} gave a more refined description of the asymptotic behavior, which is that of a saw-toothed profile. In this paper we can give an optimal bound on $ \|u-\overline{u}\|_{L^\infty} $ for the periodic solutions. This bound is more accurate than the results of Glimm and Lax, and it is optimal because it can be attained for some special initial data. Since this result is not related to the stability problems of shock and rarefaction waves, we place the corresponding theorem and proof in the appendix. \end{Rem}
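As a heuristic illustration of the order of this bound (classical for the Burgers flux, and not used in the proofs below): for $f(u)=u^2/2$ and $p$-periodic initial data with average $\overline{u}$, the solution approaches for large times a saw-toothed profile which, between two consecutive shocks a distance $p$ apart, is affine in $x$ with slope $1/t$; hence
\begin{equation*}
	\sup_{x}|u(x,t)-\overline{u}| \approx \frac{p}{2t},
\end{equation*}
consistently with the rate $t^{-1}$ in \eqref{lb13}.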
In \eqref{ic1} we can assume that the perturbation $w_0(x)$ has zero average, \begin{equation}\label{ave0} \overline{w}\triangleq \frac{1}{p} \int_0^p w_0(x) dx =0, \end{equation} by replacing $\overline{u}_l, \overline{u}_r$ with $\overline{u}_l+\overline{w}, \overline{u}_r+\overline{w}$ respectively if necessary.
For $\overline{u}_l>\overline{u}_r$, the shock wave $u^S$ is given by \begin{equation}\label{shock} u^S(x,t)=\begin{cases} \overline{u}_l, & \text{~ if~ }x<st;\\ \overline{u}_r, & \text{~ if~ }x>st, \end{cases} \quad \quad \text{~where } \quad s=\frac{f(\overline{u}_l)-f(\overline{u}_r)}{\overline{u}_l-\overline{u}_r}; \end{equation} and for $\overline{u}_l<\overline{u}_r$, the rarefaction wave $u^R$ is \begin{equation}\label{rare} u^R(x,t)\triangleq \begin{cases} \overline{u}_l, &\text{~if~ } \frac{x}{t} < f'(\overline{u}_l);\\ (f')^{-1}(\frac{x}{t}), &\text{~if~} f'(\overline{u}_l)\leq \frac{x}{t}\leq f'(\overline{u}_r);\\ \overline{u}_r, &\text{~if~} \frac{x}{t}> f'(\overline{u}_r). \end{cases} \end{equation}
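As a simple illustration (not needed in the sequel), for the Burgers flux $f(u)=u^2/2$ one has $f'(u)=u$, so that the formulas above reduce to
\begin{equation*}
	s=\frac{\overline{u}_l+\overline{u}_r}{2}, \qquad
	u^R(x,t)=
	\begin{cases}
	\overline{u}_l, & \text{~if~ } \frac{x}{t}<\overline{u}_l;\\
	\frac{x}{t}, & \text{~if~ } \overline{u}_l\leq \frac{x}{t}\leq \overline{u}_r;\\
	\overline{u}_r, & \text{~if~ } \frac{x}{t}>\overline{u}_r;
	\end{cases}
\end{equation*}
which is the special case considered in Theorem \ref{thmshock2} and in Section 6.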
In the rest of this paper, we will use the following notations to represent different entropy solutions to problem \eqref{equ1} with different initial data, \begin{equation}\label{dsol}
\begin{aligned}
w(x,t): &\text{~the entropy solution to \eqref{equ1} with } w(x,0)=w_0(x); \\
u_l(x,t): &\text{~the entropy solution to \eqref{equ1} with } u_l(x,0)=\overline{u}_l+w_0(x);\\
u_r(x,t): &\text{~the entropy solution to \eqref{equ1} with } u_r(x,0)=\overline{u}_r+w_0(x).
\end{aligned} \end{equation}
Then by \eqref{lb13}, one has \begin{equation}\label{lb2}
|u_l(x,t)-\overline{u}_l| \leq \frac{C}{t}, \quad |u_r(x,t)-\overline{u}_r| \leq \frac{C}{t}, \quad \text{for~} \forall~ t>0, ~a.e.~x. \end{equation}
Also we define the following extremal forward generalized characteristics (see definitions in Definition \ref{defgen} and Proposition \ref{propgen}) issuing from the origin $(0,0)$: \begin{equation}\label{dX} \begin{aligned} X_-(t): \quad &\text{the minimal generalized characteristic associated with}~u\\ X_+(t): \quad &\text{the maximal generalized characteristic associated with}~u\\ X_{r-}(t): \quad &\text{the minimal generalized characteristic associated with}~u_r\\ X_{l+}(t): \quad &\text{the maximal generalized characteristic associated with}~u_l \end{aligned} \end{equation}
Then the main results of this paper are stated as follows:
\begin{Thm} \label{thmshock}
Suppose that $\overline{u}_l>\overline{u}_r$.
Then for any periodic perturbation $w_0(x) \in L^{\infty}(\mathbb{R}) $ satisfying \eqref{ave0}, there exist a finite time $T_S>0$ and a unique curve $X(t) \in \text{Lip}~(T_S,+\infty)$, which is actually a shock, such that for any $t>T_S$,
\begin{equation} \label{glue}
u(x,t)=\begin{cases}
u_l(x,t),\quad \text{if~} x<X(t),\\
u_r(x,t),\quad \text{if~} x>X(t).
\end{cases}
\end{equation}
Moreover,
\begin{equation}\label{lb}
\sup_{x<X(t)} |u(x,t)-\overline{u}_l| +\sup_{x>X(t)} |u(x,t)-\overline{u}_r| + |X(t)-st| \leq \frac{C}{t}, ~\forall~ t>T_S,
\end{equation}
Here $C$ and $T_S$ depend on $p, \overline{u}_l, \overline{u}_r, f$. \end{Thm}
\begin{Thm}\label{thmrare}
Suppose that $\overline{u}_l<\overline{u}_r$.
Then for any periodic perturbation $w_0(x) \in L^{\infty}(\mathbb{R})$
satisfying \eqref{ave0},
and for any $t>0$,
\begin{equation}
|u(x,t)-u^R(x,t)| \leq \frac{C}{t}, \quad a.e. ~ x\in \mathbb{R}.
\end{equation}
where $C$ depends on $p, \overline{u}_l, \overline{u}_r, f$. \end{Thm}
\begin{Thm}\label{thmshock2}
Suppose that the assumptions of Theorem \ref{thmshock} hold, and additionally $f(u)=u^2/2$, i.e., \eqref{equ1} is the Burgers equation. Then $$\text{when~ } t>T_S \text{~and~ } \dfrac{(\overline{u}_l-\overline{u}_r)t}{p} \text{~is an integer},\quad X(t)=st.$$
\begin{Thm}\label{thmrare2}
Suppose that the assumptions of Theorem \ref{thmrare} hold, and additionally $w_0$ satisfies
\begin{equation*}
\int_0^x w_0(y)~dy \geq 0, \quad 0 \leq x\leq p,
\end{equation*}
then \begin{equation*}
u(x,t)=\begin{cases}
u_l(x,t), & \text{if~} \frac{x}{t}<f'(\overline{u}_l),\\
(f')^{-1}(\frac{x}{t})=u^R(x,t), & \text{if~} f'(\overline{u}_l)\leq \frac{x}{t}\leq f'(\overline{u}_r);\\
u_r(x,t), & \text{if~} \frac{x}{t}>f'(\overline{u}_r).
\end{cases}
\end{equation*} \end{Thm}
This paper proceeds as follows: in Section 3, we present some well-known results on generalized characteristics, especially the divides, which can be found in Dafermos's book \cite{Dafe}, and we also obtain some propositions that will be frequently used; Theorems \ref{thmshock}-\ref{thmrare2} are proved in Sections 4 and 5; in Section 6, for the special case $f(u)=\frac{u^2}{2}$, i.e. the Burgers equation, we give another proof of Theorem \ref{thmshock}, inspired by the Hopf-Cole transform; and Theorem \ref{thmper} and its proof are given in Appendix A.
\section{Preliminary: generalized characteristics} Here we list some well-known results on generalized characteristics, which can be found in Chapter 10 and Chapter 11 in \cite{Dafe}.
\begin{Def}\label{defgen}
A generalized characteristic for \eqref{equ1}, associated with the entropy solution $u(x,t)$, on the time interval $[\sigma, \tau] \subset [0,+\infty)$, is a Lipschitz function \\ $\xi: [\sigma, \tau] \longrightarrow (-\infty, +\infty)$ which satisfies the differential inclusion
\begin{equation*}
\xi'(t) \in [f'(u(\xi(t)+,t)),f'(u(\xi(t)-,t))], \quad \text{ a.e. on} \quad [\sigma, \tau]
\end{equation*} \end{Def}
\begin{Prop}\label{propgen}
Assume $u(x,t) $ is the entropy solution to \eqref{equ1} with $L^{\infty} $ initial data $u_0$. Then through any point $(\overline{x},\overline{t}) \in (-\infty,+\infty)\times [0,+\infty) $ pass two extremal generalized characteristics (which may not be distinct) defined on $[0,+\infty)$, namely the minimal one $\xi_-(t)$ and the maximal one $\xi_+(t)$, with $\xi_-(t)\leq \xi_+(t)$ for $t \in [0,+\infty) $.
And for any generalized characteristic $\xi(t)$ passing through $(\overline{x},\overline{t})$, there holds $\xi_-(t) \leq \xi(t) \leq \xi_+(t), \forall~t\geq 0$.
Furthermore, if $\overline{t}>0, $ then the minimal backward (confined to $0\leq t\leq \overline{t}$) characteristic $\xi_-(t)$ and the maximal backward characteristic $\xi_+(t)$ are both straight lines, and satisfy, for $0<t<\overline{t}$,
\begin{equation}\label{genpro}
\begin{aligned}
& u_0(\xi_-(0)-)\leq u(\xi_-(t)-,t)=u(\xi_-(t)+,t)=u(\overline{x}-,\overline{t}) \leq u_0(\xi_-(0)+);\\
& u_0(\xi_+(0)-)\leq u(\xi_+(t)-,t)=u(\xi_+(t)+,t)=u(\overline{x}+,\overline{t})\leq u_0(\xi_+(0)+);
\end{aligned}
\end{equation}
and the forward (confined in $t\geq \overline{t}$) characteristic is unique, i.e. for $t \geq \overline{t},$
\begin{equation}\label{foruni}
\xi_-(t)=\xi_+(t) \triangleq \xi(t).
\end{equation}
See Figure \ref{gc1} and Figure \ref{gc2}. \end{Prop}
\begin{figure}\label{gc1}
\end{figure}
\begin{figure}\label{gc2}
\end{figure}
\begin{Rem} \begin{enumerate}
\item For $\overline{t}>0, $ the minimal backward characteristic $\xi_-(t)$ and the maximal backward characteristic $\xi_+(t)$ coincide if and only if $u(\overline{x}-,\overline{t})=u(\overline{x}+,\overline{t})$.
\item For any two extremal forward generalized characteristic $\xi_-(t)$ and $\xi_+(t)$ issuing from $x-$axis, if they coincide at some time $t_0>0,$ then they remain the same for all $t>t_0. $ \end{enumerate} \end{Rem}
The following useful integral formula, \eqref{tri}, can be found in \cite{Dafe}.
\begin{Lem}\label{lemgre}
Let $\xi(t)$ and $\widetilde{\xi}(t)$ be two extremal backward characteristics corresponding to entropy solutions $u(x,t)$ and $\widetilde{u}(x,t)$ to \eqref{equ1} with $L^{\infty}$ initial data $u(x,0)$ and $ \widetilde{u}(x,0) $ respectively, emanating from a fixed point $(\overline{x},\overline{t}) \in (-\infty,+\infty)\times (0,+\infty)$, see Figure \ref{triFi}. Then if $ ~\widetilde{\xi}(0) <\xi(0) $, it holds that
\begin{align}
& \int_0^{\overline{t}} \{ f(b)-f(\widetilde{u}(\xi(t)-,t)) - f'(b)[b- \widetilde{u}(\xi(t)-,t)] \} ~dt \notag \\
& + \int_0^{\overline{t}} \{ f(\widetilde{b})-f(u(\widetilde{\xi}(t)+,t)) - f'(\widetilde{b})[\widetilde{b}-u(\widetilde{\xi}(t)+,t)] \} ~dt \label{tri}\\
& \quad \quad \quad \quad \quad =\int_{\widetilde{\xi}(0)}^{\xi(0)} [u(x,0)-\widetilde{u}(x,0)]~dx. \notag
\end{align}
where $b$ and $\widetilde{b}$ are constants defined by
\begin{align*}
b \triangleq
\begin{cases}
u(\overline{x}-,\overline{t}), \quad \text{if~ } \xi(t) ~\text{is minimal;}\\
u(\overline{x}+,\overline{t}), \quad \text{if~ } \xi(t) ~\text{is maximal.}
\end{cases}\\
\widetilde{b} \triangleq
\begin{cases}
\widetilde{u}(\overline{x}-,\overline{t}), \quad \text{if~ } \widetilde{\xi}(t) ~\text{is minimal;}\\
\widetilde{u}(\overline{x}+,\overline{t}), \quad \text{if~ } \widetilde{\xi}(t) ~\text{is maximal.}
\end{cases}
\end{align*} \end{Lem}
\begin{figure}\label{triFi}
\end{figure}
\begin{proof}[Proof of Lemma \ref{lemgre}]
By Proposition \ref{propgen}, it holds that $u(\xi(t),t)\equiv b$ and $\widetilde{u}(\widetilde{\xi}(t),t) \equiv \widetilde{b}$ for $0<t<\overline{t}$. Then integrating the equation $$ \partial_t (u-\widetilde{u}) +\partial_x(f(u)-f(\widetilde{u}))=0 $$ over the triangle with vertices $(\overline{x},\overline{t}), (\widetilde{\xi}(0),0), (\xi(0),0)$ and using Green's formula, one easily gets \eqref{tri}; for details, see \cite{Dafe}. \end{proof}
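For the reader's convenience, here is a sketch of the computation just referred to. Let $\Delta$ denote the triangle with vertices $(\widetilde{\xi}(0),0)$, $(\xi(0),0)$, $(\overline{x},\overline{t})$, with its boundary oriented counterclockwise and boundary traces taken from inside $\Delta$; then
\begin{align*}
0&=\iint_{\Delta} \Big\{ \partial_t (u-\widetilde{u}) +\partial_x\big(f(u)-f(\widetilde{u})\big) \Big\}~dxdt
=\oint_{\partial \Delta} \Big\{ -(u-\widetilde{u})~dx + \big(f(u)-f(\widetilde{u})\big)~dt \Big\}\\
&= -\int_{\widetilde{\xi}(0)}^{\xi(0)} \big[u(x,0)-\widetilde{u}(x,0)\big]~dx
+ \int_0^{\overline{t}} \Big\{ f(b)-f(\widetilde{u}(\xi(t)-,t)) - f'(b)\big[b- \widetilde{u}(\xi(t)-,t)\big] \Big\}~dt\\
&\quad + \int_0^{\overline{t}} \Big\{ f(\widetilde{b})-f(u(\widetilde{\xi}(t)+,t)) - f'(\widetilde{b})\big[\widetilde{b}-u(\widetilde{\xi}(t)+,t)\big] \Big\}~dt,
\end{align*}
where on the sides $x=\xi(t)$ and $x=\widetilde{\xi}(t)$ one uses $u\equiv b$, $\xi'(t)=f'(b)$ and $\widetilde{u}\equiv \widetilde{b}$, $\widetilde{\xi}'(t)=f'(\widetilde{b})$ respectively. Rearranging gives exactly \eqref{tri}.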
\begin{Def}[ \cite{Dafe} Definition~10.3.3]\label{defdivi}
A minimal (or maximal) divide, associated with the solution u, is a Lipschitz function $ \xi(t): [0, +\infty) \rightarrow \mathbb{R} $ such that $ \xi(t) = \lim_{m\rightarrow \infty} \xi_m(t), $ uniformly on compact time intervals, where $ \xi_m(\cdot) $ is the minimal (or maximal) backward characteristic emanating from a point $ (x_m, t_m), $ with $ t_m \rightarrow +\infty, $ as $ m \rightarrow \infty. $ \end{Def}
\begin{Prop}[ \cite{Dafe} Theorem~11.4.1]\label{propdivi}
Considering the Cauchy problem for \eqref{equ1} with any $L^{\infty}$ initial data $u_0(x). $ If there exists a constant $\overline{u} $ and a point $\overline{x} \in \mathbb{R} $, s.t.
\begin{equation}\label{divi}
\int_{\overline{x}}^x [u_0(y)- \overline{u}]~dy \geq 0, \quad -\infty <x< \infty,
\end{equation}
then there exists a divide associated with $ u $, issuing from the point $(\overline{x},0)$ of the $x$-axis, on which the entropy solution $u $ is constant $\overline{u}.$ \end{Prop}
In the remainder of this section, we give some conclusions derived from the facts above, which will be used frequently in this paper.
\begin{Prop}\label{propdiper}
Assume that the initial data $u_0(x) \in L^{\infty}$ is periodic of period $p$ with the average $\overline{u}=\frac{1}{p}\int_0^p u_0(x)~dx$. Then for each integer $N \in \mathbb{Z}$, the straight line
\begin{equation}\label{diper}
x=f'(\overline{u})t+\overline{x}+Np
\end{equation}
is a divide associated with the entropy solution $u(x,t)$ to \eqref{equ1}.\\
Here $\overline{x} $ is defined as some point in $[0,p)$, satisfying
\begin{equation}\label{minpoint}
\int_0^{\overline{x}} [u_0(y)-\overline{u}] dy = \min_{x \in [0,p]} \int_0^x [u_0(y)-\overline{u}] dy.
\end{equation} \end{Prop}
\begin{proof}[Proof of Proposition \ref{propdiper}]
The function $G(x)\triangleq\int_0^x [u_0(y)-\overline{u}]\,dy$ is continuous, and since $\int_0^p [u_0(y)-\overline{u}]\,dy =0$, it is periodic with period $p$. Hence there exists a point $\overline{x} \in [0,p)$ such that \eqref{minpoint} holds, i.e. $G(\overline{x})=\min_{[0,p]}G=\min_{\mathbb{R}}G$, and therefore $\int_{\overline{x}}^x [u_0(y)-\overline{u}]~dy =G(x)-G(\overline{x})\geq 0$ for all $-\infty <x< \infty.$
Then for any $N \in \mathbb{Z}, $
\begin{equation}\label{geq0}
\int_{\overline{x}+Np}^x [u_0(y)-\overline{u}] ~dy=\int_{\overline{x}}^x [u_0(y)-\overline{u}] ~dy \geq 0, \quad -\infty <x< \infty.
\end{equation}
So by Proposition \ref{propdivi} and \eqref{geq0}, \eqref{diper} is a divide for $u(x,t)$. \end{proof}
For the periodic perturbation $ w_0(x), $ where $w_0$ satisfies \eqref{ave0}, one can choose a point $ a \in [0,p), $ such that \begin{equation}\label{defa}
\int_0^a [w_0(y)-\overline{w}] dy = \min_{x \in [0,p]} \int_0^x [w_0(y)-\overline{w}] dy. \end{equation}
Then by Proposition \ref{propdiper} and \eqref{defa}, it is easy to verify the following. \begin{Cor}\label{corgamma}
For the entropy solution $ u_l(x,t)~ (resp.~ u_r(x,t))$ to \eqref{equ1} with initial data $u_l(x,0)=\overline{u}_l+w_0(x)~ (resp. ~u_r(x,0)=\overline{u}_r+w_0(x))$, and each $N \in \mathbb{Z}, $ the straight lines
\begin{equation}\label{dilr}
x=\Gamma_l^N(t)\triangleq a+Np+f'(\overline{u}_l)t,~\Big( resp. ~x=\Gamma_r^N(t)\triangleq a+Np+f'(\overline{u}_r)t \Big)
\end{equation}
are divides associated with $u_l$ (resp. $u_r$) on which $u_l(x,t)\equiv \overline{u}_l~ ( resp. ~u_r(x,t)\equiv \overline{u}_r ).$ \end{Cor}
By Lemma \ref{lemgre}, one can prove \begin{Lem}\label{lemglue}
Let $c_1>c_2$ be two constants and $u_0(x)\in L^{\infty}(\mathbb{R})$, and let $u_1(x,t), u_2(x,t), u_{12}(x,t)$ be the entropy solutions to \eqref{equ1} with their corresponding initial data
\begin{align*}
& u_1(x,t=0)=c_1+u_0(x),\\
& u_2(x,t=0)=c_2+u_0(x),\\
& u_{12}(x,t=0)=
\begin{cases}
c_1+u_0(x), \quad \text{if ~} x<0,\\
c_2+u_0(x), \quad \text{if ~} x>0.
\end{cases}
\end{align*}
Let $x_-(t), x_+(t)$ be the minimal and maximal forward generalized characteristics issuing from the origin associated with $u_{12}$ (see Figure \ref{ufig}, note that $x_-$ and $x_+$ may not be distinct), then
\begin{equation}
u_{12}(x,t) =
\begin{cases}
u_1(x,t), \quad \text{if ~} x<x_+(t),\\
u_2(x,t), \quad \text{if ~} x>x_-(t).
\end{cases}
\end{equation} \end{Lem}
\begin{proof}[Proof of Lemma \ref{lemglue}]
Without loss of generality, we prove only the case when $x<x_+(t)$.
For any fixed $(\overline{x},\overline{t})$ with $\overline{x}<x_+(\overline{t}),~ \overline{t}>0$, we first prove that $u_{12}(\overline{x}+,\overline{t})=u_1(\overline{x}+,\overline{t})$.
Through $(\overline{x},\overline{t}) $ we draw the maximal backward characteristics $\xi_+(t)$ and $\eta_+(t)$ corresponding to the entropy solutions $u_{12}$ and $u_1$ respectively. By Proposition \ref{propgen}, $\xi_+(t)$ and $\eta_+(t)$ are both straight lines, and for $0<t<\overline{t}$,
$$ u_{12}(\xi_+(t)+,t)=u_{12}(\xi_+(t)-,t)=u_{12}(\overline{x}+,\overline{t}),\quad \xi_+'(t)=f'(u_{12}(\overline{x}+,\overline{t})); $$
$$u_1(\eta_+(t)+,t)=u_1(\eta_+(t)-,t)=u_1(\overline{x}+,\overline{t}),\quad \eta_+'(t)=f'(u_1(\overline{x}+,\overline{t})). $$
Then $\xi_+(0)\leq 0$, since $\xi_+(t)$ cannot cross the generalized characteristic $x_+(t)$ at any $t>0$ (the forward characteristic issuing from any point $(x,t)$ with $t>0$ is unique, by Proposition \ref{propgen}). See Figure \ref{ufig}.
\begin{figure}\label{ufig}
\end{figure}
If $u_{12}(\overline{x}+,\overline{t})> u_1(\overline{x}+,\overline{t})$, then $\xi_+(0)<\eta_+(0)$. Using \eqref{tri} with $u=u_1, \widetilde{u}=u_{12}, \xi=\eta_+, \widetilde{\xi}=\xi_+$, one obtains
\begin{align}
& \int_0^{\overline{t}} \{ f(b)-f(u_{12}(\eta_+(t)-,t)) - f'(b)[b- u_{12}(\eta_+(t)-,t)] \} ~dt \notag \\
& + \int_0^{\overline{t}} \{ f(\widetilde{b})-f(u_1(\xi_+(t)+,t)) - f'(\widetilde{b})[\widetilde{b}-u_1(\xi_+(t)+,t)] \} ~dt \label{tril} \\
& =\int_{\xi_+(0)}^{\eta_+(0)} (u_1(x,0)-u_{12}(x,0)) ~dx \notag \\
& =\begin{cases}
0,\quad \quad \quad \quad \quad \quad \quad\quad\quad\quad \text{~if ~} \eta_+(0)\leq 0;\\
\int_0^{\eta_+(0)} (c_1-c_2) ~dx >0, \quad\text{~if ~} \eta_+(0)> 0
\end{cases} \quad\quad\quad \geq 0. \notag
\end{align}
Here $b=u_1(\overline{x}+,\overline{t})$ and $\widetilde{b}=u_{12}(\overline{x}+,\overline{t}).$
By the strict convexity of $f$, the left hand side of \eqref{tril} is non-positive, and we have that for $t \in [0,\overline{t}]$,
\begin{align}
& u_{12}(\eta_+(t)-,t) \equiv b= u_1(\overline{x}+,\overline{t}),\label{for1} \\
& u_1(\xi_+(t)+,t) \equiv \widetilde{b}=u_{12}(\overline{x}+,\overline{t}).\label{for2}
\end{align}
Then \eqref{for1} implies that
$$ \eta_+'(t)=f'(u_1(\overline{x}+,\overline{t}))=f'(u_{12}(\eta_+(t)-,t))$$
which means that $\eta_+(t)$ is a backward generalized characteristic through $(\overline{x},\overline{t})$ associated with $u_{12}$. However, $\xi_+(t)$ is the maximal backward characteristic associated with $u_{12}$, so there must hold $\eta_+(t) \leq \xi_+(t)$ for $t\in [0,\overline{t}]$, which contradicts $\xi_+(0)<\eta_+(0)$.
Similarly, if $ u_{12}(\overline{x}+,\overline{t})< u_1(\overline{x}+,\overline{t}) $, then $ \eta_+(0)<\xi_+(0)\leq 0 $. The same argument as above shows that $ u_1(\xi_+(t)-,t) \equiv u_{12}(\overline{x}+,\overline{t}) $ for $ t \in [0,\overline{t}]$, which implies that $ \xi_+'(t)=f'(u_1(\xi_+(t)-,t)) $. Hence $\xi_+(t)$ is a backward generalized characteristic associated with $u_1$, so $ \xi_+(t)\leq \eta_+(t)$ for $t\in [0,\overline{t}] $, which is also a contradiction.
The proof of $ u_{12}(\overline{x}-,\overline{t})=u_1(\overline{x}-,\overline{t}) $ is similar: one draws the minimal backward characteristics $\xi_-(t)$ and $\eta_-(t)$ corresponding to $u_{12}$ and $u_1$ respectively, and then uses the same argument as above. \end{proof}
By Lemma \ref{lemglue}, one can easily prove
\begin{Prop}\label{propglue} The following properties hold:
\begin{enumerate}
\item[(i).] If $\overline{u}_l>\overline{u}_r$, then
\begin{equation*}
u(x,t)=
\begin{cases}
u_l(x,t), &\text{~if~} x<X_+(t),\\
u_r(x,t), &\text{~if~} x>X_-(t).
\end{cases}
\end{equation*}
\item[(ii).] If $\overline{u}_l<\overline{u}_r$, then
\begin{equation*}
u(x,t)=
\begin{cases}
u_l(x,t), &\text{~if~} x<X_{l+}(t),\\
u_r(x,t), &\text{~if~} x>X_{r-}(t).
\end{cases}
\end{equation*}
\end{enumerate}
Here $X_{l+}, X_{r-}, X_{\pm} $ are defined in \eqref{dX}. \end{Prop} \begin{proof}[Proof of Proposition \ref{propglue}] $ $\\
\begin{enumerate}
\item[(i).] Note that $\overline{u}_l>\overline{u}_r$. Thus one can take $ u_{12}=u, u_1=u_l, u_2=u_r$ in Lemma \ref{lemglue}, and then (i) follows easily.\\
\item[(ii).] Note that $\overline{u}_l<\overline{u}_r$ and
\begin{equation*}
\begin{aligned}
& u_l(x,t=0)=\begin{cases}
u(x,t=0) & \text{if}~ x<0,\\
\overline{u}_l-\overline{u}_r+u(x,t=0) & \text{if}~ x>0,\\
\end{cases}\\
& u_r(x,t=0)=\begin{cases}
\overline{u}_r-\overline{u}_l+u(x,t=0) & \text{if}~ x<0,\\
u(x,t=0) & \text{if}~ x>0.\\
\end{cases}
\end{aligned}
\end{equation*}
Therefore, by taking $u_{12}=u_l, u_1=u$ and $u_{12}=u_r, u_2=u$ respectively in Lemma \ref{lemglue}, one can prove (ii).
\end{enumerate} \end{proof}
\section{Proof of Theorem \ref{thmshock} and \ref{thmshock2}} In this section we prove Theorem \ref{thmshock} and \ref{thmshock2}.
\begin{proof}[Proof of Theorem \ref{thmshock}] By Proposition \ref{propglue}, if $X_-(t) \equiv X_+(t)$ for $t \in [0,+\infty)$, then \eqref{glue} holds immediately.
If $X_-(t)<X_+(t)$, then $u_l(x,t)$ coincides with $u_r(x,t)$ for $X_-(t)<x<X_+(t)$. But by \eqref{lb2}, after the finite time $T_S \triangleq \dfrac{2C}{\overline{u}_l-\overline{u}_r}$, where $ C $ is the constant in \eqref{lb2}, it holds that $$ u_l(x,t)>u_r(x,t), \quad \text{~for~} ~t>T_S,~ -\infty<x<+\infty, $$ which means that $X_-(t) \equiv X_+(t)$ must hold for $t>T_S$. Thus \eqref{glue} is proved, and it remains to prove \eqref{lb}.
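The displayed inequality above follows directly from \eqref{lb2}: for every $x\in\mathbb{R}$ and $t>T_S$,
\begin{equation*}
u_l(x,t)-u_r(x,t)\;\geq\;(\overline{u}_l-\overline{u}_r)-|u_l(x,t)-\overline{u}_l|-|u_r(x,t)-\overline{u}_r|\;\geq\;(\overline{u}_l-\overline{u}_r)-\frac{2C}{t}\;>\;0.
\end{equation*}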
For $N>0$, we define the trapezium:
\begin{equation}\label{tra}
\Omega^N(T) \triangleq \{(x,t): 0\leq t\leq T,~ \Gamma_l^{-N}(t) \leq x \leq \Gamma_r^N(t)~\}
\end{equation}
For each $T>T_S, $ one can choose $N>0 $ large enough, s.t.
\begin{equation}\label{Ga}
\Gamma_l^{-N}(t) <X_-(t), \quad \Gamma_r^N(t) >X_+(t), \quad \text{for all~} 0\leq t\leq T,
\end{equation}
see Figure \ref{green}.\\
\begin{figure}\label{green}
\end{figure}
Then applying the Green Formula in $\Omega^N(T) $ yields that
\begin{align*}
0=& \int_{\Omega^N(T)} \Big( \partial_t u +\partial_x f(u) \Big)~dxdt = \int_{\Gamma_l^{-N}(T)}^{\Gamma_r^N(T)} u(x,T) ~dx \\
& -\int_{\Gamma_l^{-N}(0)}^{\Gamma_r^N(0)} u(x,0)~dx - \int_0^T \Big\{ f'(\overline{u}_r)~u_r(\Gamma_r^N(t),t)-f\Big(u_r(\Gamma_r^N(t),t)\Big) \Big\}~dt \\
& +\int_0^T \Big\{ f'(\overline{u}_l)~u_l(\Gamma_l^{-N}(t),t)-f\Big(u_l(\Gamma_l^{-N}(t),t)\Big) \Big\}~dt \triangleq I_1+I_2+I_3+I_4.
\end{align*}
It follows from \eqref{glue} that
\begin{align}
I_1= & \int_{\Gamma_l^{-N}(T)}^{X(T)} u_l(x,T)~dx +\int_{X(T)}^{\Gamma_r^N(T)} u_r(x,T)~dx \label{I1} \\
= & \int_{\Gamma_l^{-N}(T)}^{X(T)} \Big(u_l(x,T)-\overline{u}_l\Big)~dx +\int_{X(T)}^{\Gamma_r^N(T)} \Big(u_r(x,T)-\overline{u}_r \Big)~dx \notag\\
& +(X(T)-\Gamma_l^{-N}(T))\overline{u}_l+ (\Gamma_r^N(T)-X(T))\overline{u}_r \notag
\end{align}
Note that
\begin{equation*}
\int_x^{x+p} \Big(u_l(y,t)-\overline{u}_l\Big)~dy=\int_x^{x+p} \Big(u_r(y,t)-\overline{u}_r\Big)~dy=0, \quad \forall~ x \in \mathbb{R}.
\end{equation*}
So by \eqref{lb2}, the first two terms in $I_1$ satisfy
\begin{equation}\label{leq}
\Big|\int_{\Gamma_l^{-N}(T)}^{X(T)} \Big(u_l(x,T)-\overline{u}_l\Big)~dx +\int_{X(T)}^{\Gamma_r^N(T)} \Big(u_r(x,T)-\overline{u}_r \Big)~dx \Big|\leq \frac{C}{T}.
\end{equation}
By \eqref{ave0}, $\Gamma_l^{-N}(0)=a-Np$ and $\Gamma_r^N(0)=a+Np, $ one has
\begin{align}
I_2=& -\int_{\Gamma_l^{-N}(0)}^0 \Big( w_0(x)+\overline{u}_l \Big)~dx - \int_0^{\Gamma_r^N(0)} \Big( w_0(x)+\overline{u}_r \Big) ~dx \label{I2} \\
=& (a-Np)\overline{u}_l-(a+Np)\overline{u}_r \notag
\end{align}
And Corollary \ref{corgamma} implies that
\begin{align}
I_3+I_4=& - \int_0^T \Big\{ f'(\overline{u}_r)~u_r(\Gamma_r^N(t),t)-f\Big(u_r(\Gamma_r^N(t),t)\Big) \Big\}~dt \label{I34}\\
&+\int_0^T \Big\{ f'(\overline{u}_l)~u_l(\Gamma_l^{-N}(t),t)-f\Big(u_l(\Gamma_l^{-N}(t),t)\Big) \Big\}~dt \notag\\
=& - \int_0^T \Big( f'(\overline{u}_r)\overline{u}_r-f(\overline{u}_r) \Big)~dt +\int_0^T \Big( f'(\overline{u}_l)\overline{u}_l-f(\overline{u}_l) \Big)~dt \notag\\
=& \Big\{ f'(\overline{u}_l)\overline{u}_l-f'(\overline{u}_r)\overline{u}_r-\Big(f(\overline{u}_l)-f(\overline{u}_r)\Big) \Big\}T. \notag
\end{align}
Thus by \eqref{I1}, \eqref{I2}, \eqref{I34}, and noting that $\Gamma_l^{-N}(T)= a-Np+f'(\overline{u}_l)T,~ \Gamma_r^N(T)= a+Np+f'(\overline{u}_r)T $, one has
\begin{align}
& X(T)- sT \label{Xfor}\\
& = \frac{-1}{\overline{u}_l-\overline{u}_r} \Big\{ \int_{\Gamma_l^{-N}(T)}^{X(T)} \Big(u_l(x,T)-\overline{u}_l\Big)~dx
+\int_{X(T)}^{\Gamma_r^N(T)} \Big(u_r(x,T)-\overline{u}_r \Big)~dx \Big\}.\notag
\end{align}
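Indeed, summing $I_1+I_2+I_3+I_4=0$ with the expressions \eqref{I1}, \eqref{I2}, \eqref{I34}, the terms involving $a$, $Np$, $f'(\overline{u}_l)\overline{u}_l$ and $f'(\overline{u}_r)\overline{u}_r$ cancel, leaving
\begin{equation*}
(\overline{u}_l-\overline{u}_r)\,X(T)-\big(f(\overline{u}_l)-f(\overline{u}_r)\big)\,T
+\int_{\Gamma_l^{-N}(T)}^{X(T)} \big(u_l(x,T)-\overline{u}_l\big)~dx
+\int_{X(T)}^{\Gamma_r^N(T)} \big(u_r(x,T)-\overline{u}_r \big)~dx=0;
\end{equation*}
dividing by $\overline{u}_l-\overline{u}_r$ and recalling that the background shock speed is given by the Rankine--Hugoniot condition $s=\frac{f(\overline{u}_l)-f(\overline{u}_r)}{\overline{u}_l-\overline{u}_r}$, one recovers \eqref{Xfor}.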
Then by \eqref{leq} and \eqref{Xfor}, it holds that for $T>T_S, $
$$ \Big| X(T)-sT \Big| \leq \frac{C}{T}. $$
Finally, one has
\begin{align*}
& \sup_{x<X(t)} |u(x,t)-\overline{u}_l| +\sup_{x>X(t)} |u(x,t)-\overline{u}_r| + |X(t)-st| \\
=& \sup_{x<X(t)} |u_l(x,t)-\overline{u}_l| +\sup_{x>X(t)} |u_r(x,t)-\overline{u}_r| + |X(t)-st|\\
\leq & \frac{C}{t}, \quad \forall~ t>T_S,
\end{align*}
where $C$ depends on $\overline{u}_l, \overline{u}_r, p, f.$\\
This completes the proof of Theorem \ref{thmshock}. \end{proof}
\begin{proof}[Proof of Theorem \ref{thmshock2}]
When $f(u)=\dfrac{u^2}{2}, $ by Galilean transformation, one has that
$$ u_l(x,t)=w(x-\overline{u}_l t,t)+\overline{u}_l,~ u_r(x,t)=w(x-\overline{u}_r t, t)+\overline{u}_r. $$
Then by \eqref{Xfor}, it holds that
\begin{align*}
X(t)-st &= \frac{-1}{\overline{u}_l-\overline{u}_r} \Big\{ \int_{\Gamma_l^{-N}(t)}^{X(t)}w(x-\overline{u}_l t,t)~dx
+\int_{X(t)}^{\Gamma_r^N(t)} w(x-\overline{u}_r t,t)~dx \Big\}\\
&= \frac{-1}{\overline{u}_l-\overline{u}_r} \Big\{ \int_{a-Np}^{X(t)-\overline{u}_l t} w(y,t)~dy + \int_{X(t)-\overline{u}_r t}^{a+Np} w(y,t)~dy \Big\}\\
&= \frac{1}{\overline{u}_l-\overline{u}_r}\int_{X(t)-\overline{u}_l t}^{X(t)-\overline{u}_r t} w(y,t)~dy
\end{align*}
If $ (\overline{u}_l-\overline{u}_r)t=np $ for some positive integer $ n $, then the interval of integration consists of exactly $n$ periods of $w(\cdot,t)$, so (as in the computation above) the integral vanishes and $X(t)=st$. This means that the perturbed shock $x=X(t)$ coincides with the background shock $x=st$ periodically in time, at every time interval of length $\dfrac{p}{\overline{u}_l-\overline{u}_r}$. \end{proof}
\section{Proof of Theorem \ref{thmrare} and \ref{thmrare2}} In this section, we will prove Theorem \ref{thmrare} and \ref{thmrare2}.
\iffalse \begin{Lem}\label{lemrare}
There exists a finie time $T_R=\dfrac{p}{f'(\overline{u}_r)-f'(\overline{u}_l)}>0$, such that
\begin{equation}\label{xlxr}
X_{l+}(t)\leq\Gamma_l^0(t)<\Gamma_r^{-1}(t)\leq X_{r-}(t), \quad t>T_R.
\end{equation} \end{Lem} \begin{proof}[Proof of Lemma \ref{lemrare}
Since $\Gamma_r^{-1}(t)=f'(\overline{u}_r)t+a-p$ and $\Gamma_r^0(t)=f'(\overline{u}_r)t+a$ are two divides corresponding to $u_r(x,t)$ (see Corollary \ref{corgamma}), thus the characteristic $X_{r-}(t)$ cannot run out of the region between these two divides, that is
\begin{equation*}
\Gamma_r^{-1}(t) \leq X_{r-}(t) \leq \Gamma_r^0(t), \quad \forall~ t>0.
\end{equation*}
Similar way to verify
\begin{equation*}
\Gamma_l^{-1}(t) \leq X_{l+}(t) \leq \Gamma_l^0(t), \quad \forall~ t>0.
\end{equation*}
See Figure.\\
As $\overline{u}_l<\overline{u}_r$ and $f(u)$ is convex
, thus it's easy to prove \eqref{xlxr}). \end{proof} \fi
\begin{proof}[Proof of Theorem \ref{thmrare}]
Since $\Gamma_r^{-1}(t)=f'(\overline{u}_r)t+a-p$ and $\Gamma_r^0(t)=f'(\overline{u}_r)t+a$ are two divides corresponding to $u_r(x,t)$ (see Corollary \ref{corgamma}), the characteristic $X_{r-}(t)$ associated with $u_r$, which is defined in \eqref{dX}, cannot leave the region between these two divides, that is
\begin{equation*}
\Gamma_r^{-1}(t) \leq X_{r-}(t) \leq \Gamma_r^0(t), \quad \forall~ t>0.
\end{equation*}
And similarly, it holds that
\begin{equation*}
\Gamma_l^{-1}(t) \leq X_{l+}(t) \leq \Gamma_l^0(t), \quad \forall~ t>0.
\end{equation*}
See Figure \ref{rarefig}.
\begin{figure}\label{rarefig}
\end{figure}
Now by Proposition \ref{propglue}, if $x<\Gamma_l^{-1}(t)$, then $u(x,t)=u_l(x,t)$; and if $x>\Gamma_r^0(t)$, then $u(x,t)=u_r(x,t)$. And if $\Gamma_l^{-1}(t)< x <\Gamma_r^0(t)$, the following claim holds.
\textbf{Claim:} For $\Gamma_l^{-1}(t)<x<\Gamma_r^0(t)$, the extremal backward characteristics associated with $u$ cannot intersect with $\Gamma_l^{-1}(t)$ nor $\Gamma_r^0(t)$ for $t>0$.
In fact, since $u(\Gamma_l^{-1}(t)-,t)=u_l(\Gamma_l^{-1}(t)-,t) \equiv \overline{u}_l$, the entropy condition gives $$ u(\Gamma_l^{-1}(t)+,t) \leq \overline{u}_l \quad \text{for any } ~t>0. $$ So if there were a point $ (\overline{x}, \overline{t}) $ between $\Gamma_l^{-1}$ and $\Gamma_r^0$ such that the minimal backward characteristic of $u$ issuing from $ (\overline{x}, \overline{t}) $ intersected $\Gamma_l^{-1}$ at a point $ (\Gamma_l^{-1}(\tau),\tau)$ with $\tau>0$, then its slope $f'(u(\Gamma_l^{-1}(\tau)+,\tau))$ would be greater than $f'(\overline{u}_l)$, the slope of $\Gamma_l^{-1}$ (see Proposition \ref{propgen}), and hence $u(\Gamma_l^{-1}(\tau)+,\tau)>\overline{u}_l$ by the strict convexity of $f$, a contradiction. Similarly, one can show that the maximal backward characteristic issuing from any point between $ \Gamma_l^{-1} $ and $ \Gamma_r^0 $ does not intersect $\Gamma_r^0$.
Thus by the arguments above, one may conclude that for any fixed $t>0$,
\begin{enumerate}
\item[1)] If $x< \Gamma_l^{-1}(t)$, then combined with \eqref{lb2}, one has $|u(x,t)-u^R(x,t)|=|u_l(x,t)-\overline{u}_l|\leq \dfrac{C}{t}$.
\item[2)] If $\Gamma_l^{-1}(t)< x<f'(\overline{u}_l)t$, by Claim, one has
\begin{align*}
& \dfrac{x-a}{t}\leq f'(u(x,t))\leq\dfrac{x-(a-p)}{t}\\
\Longrightarrow \quad & \dfrac{a-p+f'(\overline{u}_l)t-a}{t}\leq f'(u(x,t))\leq\dfrac{f'(\overline{u}_l)t-(a-p)}{t}\\
\Longrightarrow \quad & -\dfrac{p}{t} \leq f'(u(x,t))-f'(\overline{u}_l)\leq \dfrac{p}{t}\\
\Longrightarrow \quad & |u(x,t)-\overline{u}_l| \leq \dfrac{C}{t}
\end{align*}
here $a\in[0,p)$ is used.
Therefore, $|u(x,t)-u^R(x,t)|=|u(x,t)-\overline{u}_l|\leq \dfrac{C}{t}$.
\item[3)] If $f'(\overline{u}_l)t\leq x< f'(\overline{u}_r)t$, then $ u^R(x,t)=(f')^{-1}(\dfrac{x}{t}) $, and similarly to 2), by Claim, one still has
\begin{align*}
& \dfrac{x-a}{t}\leq f'(u(x,t))\leq\dfrac{x-(a-p)}{t}\\
\Longrightarrow \quad & | f'(u)-f'(u^R) | \leq \dfrac{p}{t}\\
\Longrightarrow \quad & |u(x,t)-u^R(x,t)| \leq \dfrac{C}{t}
\end{align*}
\item[4)] The other cases are similar.
\end{enumerate} Therefore, the proof is finished. \end{proof}
\begin{proof}[Proof of Theorem \ref{thmrare2}]
Since $\int_0^x w_0(y)~dy \geq 0$ for all $x\in \mathbb{R}$, one can choose $a=0$ in Corollary \ref{corgamma} for this case. Hence, $\Gamma_l^0(t)=f'(\overline{u}_l)t\leq X_{l+}(t)$ and $X_{r-}(t)\leq \Gamma_r^0(t)=f'(\overline{u}_r)t$, for all $t\geq 0$. So if $x<f'(\overline{u}_l)t$, then $u(x,t)=u_l(x,t)$; if $x>f'(\overline{u}_r)t$, then $u(x,t)=u_r(x,t)$; and if $f'(\overline{u}_l)t\leq x\leq f'(\overline{u}_r)t$, then by arguments similar to the Claim in the proof of Theorem \ref{thmrare}, the extremal backward characteristics emanating from $(x,t)$ cannot cross $\Gamma_l^0$ or $\Gamma_r^0$ at positive time; thus both of them have to intersect the $x$-axis at the origin, and hence $f'(u(x,t))=\dfrac{x}{t}=f'(u^R(x,t))$. The proof is finished. \end{proof}
\section{Alternative proof of Theorem~\ref{thmshock} for the Burgers equation}
In this section we present an alternative proof of \eqref{glue} in Theorem~\ref{thmshock} for the Burgers equation with initial data $u_0(x)$ in \eqref{ic1}, where $ \overline{u}_l > \overline{u}_r $ and $ w_0 $ satisfies \eqref{ave0}. This method depends on Hopf's explicit solution given in \cite{hopf}.
Denote by $u^\varepsilon(x,t)$ the solution to the viscous equation \begin{equation}\label{equ2}
\partial_t u^\varepsilon+\partial_x \Big( \frac{(u^\varepsilon)^2}{2} \Big)= \varepsilon \partial_x^2 u^\varepsilon, \quad x \in (-\infty, +\infty),\quad t>0, \end{equation} with initial data \eqref{ic1}, i.e. $ u^\varepsilon(x,0)=u_0(x)$. \\ By the Hopf--Cole transformation, $u^\varepsilon$ can be computed by the explicit formula \begin{equation}\label{hop}
u^\varepsilon(x,t)=\int_{-\infty}^{\infty} \frac{x-y}{t} e^{-F(t,x,y)/2\varepsilon} dy \Big{/} \int_{-\infty}^{\infty} e^{-F(t,x,y)/2\varepsilon} dy, \end{equation} where $F(t,x,y)\triangleq\frac{(x-y)^2}{2t}+\int_0^y u_0(z)dz.$ Letting $\varepsilon \rightarrow 0$, it is well known that $ u^\varepsilon(x,t) $ converges almost everywhere to the unique entropy solution $u(x,t)$ to \eqref{equ1}, \eqref{ic1}.\\
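Heuristically (by the Laplace method; this observation is not used below, but it motivates the definitions that follow), as $\varepsilon \rightarrow 0$ the two integrals in \eqref{hop} concentrate near the minimizers of $y \mapsto F(t,x,y)$, so that
\begin{equation*}
u^\varepsilon(x,t)\ \longrightarrow\ \frac{x-y^*(t,x)}{t} \qquad (\varepsilon \rightarrow 0)
\end{equation*}
at every point $(x,t)$, $t>0$, where the minimizer $y^*(t,x)$ of $F(t,x,\cdot)$ is unique; this is made precise in Proposition \ref{prophopf} below.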
Before giving the proof, we need some notations.\\ Denote \begin{align}
& F_l(t,x,y) \triangleq \frac{(x-y)^2}{2t}+\int_0^y \overline{u}_l+w_0(z) \ dz;\label{fl}\\
& F_r(t,x,y) \triangleq \frac{(x-y)^2}{2t}+\int_0^y \overline{u}_r+w_0(z) \ dz;\label{fr}\\
& F(t,x,y) \triangleq \begin{cases}
F_l(t,x,y), \quad \text{~if } y \leq 0;\\
F_r(t,x,y), \quad \text{~if~ } y \geq 0.
\end{cases}\label{f}\\
& Y_{l*}(t,x) \triangleq \min_{z \in \mathbb{R}} \left\{ z: F_l(t,x,z)=\min_{y \in \R} F_l(t,x,y) \right\};\label{ylb}\\
& Y_l^*(t,x) \triangleq \max_{z \in \mathbb{R}} \left\{ z: F_l(t,x,z)=\min_{y \in \R} F_l(t,x,y) \right\};\label{ylt}\\
& Y_{r*}(t,x) \triangleq \min_{z \in \mathbb{R}} \left\{ z: F_r(t,x,z)=\min_{y \in \R} F_r(t,x,y) \right\};\label{yrb}\\
& Y_r^*(t,x) \triangleq \max_{z \in \mathbb{R}} \left\{ z: F_r(t,x,z)=\min_{y \in \R} F_r(t,x,y) \right\};\label{yrt}\\
& Y_*(t,x) \triangleq \min_{z \in \mathbb{R}} \left\{ z: F(t,x,z)=\min_{y \in \R} F(t,x,y) \right\};\label{yb}\\
& Y^*(t,x) \triangleq \max_{z \in \mathbb{R}} \left\{ z: F(t,x,z)=\min_{y \in \R} F(t,x,y) \right\};\label{yt}\\
& m_-(t,x) \triangleq \min_{y\leq 0} F_l(t,x,y) \quad \text{and} \quad m_+(t,x) \triangleq \min_{y\geq 0} F_r(t,x,y). \label{defm} \end{align} As in \cite{hopf}, one has the following lemma \begin{Lem}\label{lem1} The following properties hold:
\begin{enumerate}
\item[(i).] $ m_-(t,x) $ and $ m_+(t,x) $ are both continuous in $ t>0, x \in \mathbb{R}. $
\item[(ii).] $ Y_{l*}(t,x),~ Y_{r*}(t,x),~ Y_*(t,x)$ are increasing and continuous to the left with respect to $ x, $ for any $ t>0. $
\item[(iii).] $ Y_l^*(t,x),~ Y_r^*(t,x),~ Y^*(t,x) $ are increasing and continuous to the right with respect to $ x, $ for any $ t>0. $
\item[(iv).] If $ x_1<x_2, $ then
\begin{align*}
Y_l^*(t,x_1) \leq Y_{l*}(t,x_2),\quad
Y_r^*(t,x_1) \leq Y_{r*}(t,x_2),\quad
Y^*(t,x_1) \leq Y_*(t,x_2).
\end{align*}
\end{enumerate}
\end{Lem} \begin{proof}
(i) can be proved easily by the fact that $F_r(t,x,y), F_l(t,x,y) $ are both continuous in $t,x,y.$ And (ii), (iii), (iv) are derived from Lemma 1 in \cite{hopf}. \end{proof}
\begin{Prop}[Theorem 3 in \cite{hopf}]\label{prophopf}
Under the assumptions of Theorem \ref{thmshock}, it holds that for almost all $x \in \mathbb{R}$ and $t>0, $
\begin{align}
& u_l(x+,t) = \frac{x- Y_l^*(t,x)}{t}, ~ u_l(x-,t) = \frac{x- Y_{l*}(t,x)}{t},\\
& u_r(x+,t) = \frac{x- Y_r^*(t,x)}{t}, ~ u_r(x-,t) = \frac{x- Y_{r*}(t,x)}{t},\\
& u(x+,t) = \frac{x- Y^*(t,x)}{t}, ~ u(x-,t) = \frac{x- Y_*(t,x)}{t}.
\end{align} \end{Prop}
\begin{proof}[Proof of Theorem \ref{thmshock}]
Since $ w_0(x) $ is bounded in $ L^{\infty} $, there exist two constants $ \alpha < \beta $ such that
$$\alpha < w_0(x) <\beta, \quad \quad a.e.~ x.$$
Now we compare $$ m_-(t,x)=\min_{y\leq 0} F_l(t,x,y) \text{~~ with ~~} m_+(t,x)= \min_{y\geq 0} F_r(t,x,y). $$
\begin{enumerate}
\item[\textbf{Case1.}]
If $~\dfrac{x}{t} < s+\alpha, $ where $s=\dfrac{\overline{u}_l+\overline{u}_r}{2}. $
Then
\begin{align}
m_-(t,x)& \leq \min_{y\leq 0} \Big( \frac{(y-x)^2}{2t}+(\overline{u}_l+\alpha)y \Big) \label{fyl} \\
& = \min_{y\leq 0} \Big(\frac{1}{2t} \{y-[x-(\overline{u}_l+\alpha)t]\}^2+(\overline{u}_l+\alpha)x-\frac{(\overline{u}_l+\alpha)^2}{2}t \Big) \notag \\
& = (\overline{u}_l+\alpha)x-\frac{(\overline{u}_l+\alpha)^2}{2}t ; \notag\\
m_+(t,x) & \geq \min_{y\geq 0} \Big( \frac{(y-x)^2}{2t}+(\overline{u}_r+\alpha)y \Big) \label{fyr}\\
& = \min_{y\geq 0} \Big(\frac{1}{2t} \{y-[x-(\overline{u}_r+\alpha)t]\}^2+(\overline{u}_r+\alpha)x-\frac{(\overline{u}_r+\alpha)^2}{2}t \Big) \notag \\
& \geq (\overline{u}_r+\alpha)x-\frac{(\overline{u}_r+\alpha)^2}{2}t. \notag
\end{align}
Note that in this case the unconstrained minimizer $y=x-(\overline{u}_l+\alpha)t$ in \eqref{fyl} is non-positive, since $x<(s+\alpha)t$ and $s<\overline{u}_l$, which justifies the last equality there. Using \eqref{fyl} and \eqref{fyr}, one has
\begin{align*}
m_+(t,x) - m_-(t,x) & \geq (\overline{u}_r+\alpha)x-\frac{(\overline{u}_r+\alpha)^2}{2}t - (\overline{u}_l+\alpha)x + \frac{(\overline{u}_l+\alpha)^2}{2}t \\
& = (\overline{u}_l - \overline{u}_r) \big((s+\alpha)t - x\big) >0,
\end{align*}
so in this case,
\begin{equation}\label{neg}
m_+(t,x) > m_-(t,x).
\end{equation}
\item[\textbf{Case2.}]
If $~\dfrac{x}{t} >s+\beta,$ then by an argument similar to Case 1, one can prove that
\begin{equation}\label{pos}
m_+(t,x) < m_-(t,x).
\end{equation}\\
\end{enumerate}
It then follows from \eqref{neg}, \eqref{pos}, and the continuity of $ m_{\pm}(t,x) $ that for each $t>0$ the set
\begin{equation}\label{set}
\mathfrak{X}(t)\triangleq\{ x: m_-(t,x)=m_+(t,x) \} \subset [~(s+\alpha) t, (s+\beta) t~]
\end{equation}
is nonempty and closed.
Define the minimum value and the maximum value in $\mathfrak{X}(t) $ as:
\begin{equation}\label{Xmm}
X_-(t) \triangleq \min\{x: x\in \mathfrak{X}(t)\}, \quad X_+(t) \triangleq \max\{x: x\in \mathfrak{X}(t)\}
\end{equation}
Since $\mathfrak{X}(t)$ is closed and bounded, $X_{\pm}(t) \in \mathfrak{X}(t).$\\
Next, we prove some properties about $ \mathfrak{X}(t), X_-(t)$ and $ X_+(t). $
\begin{Lem}\label{lem2}
For any $ x \in \mathfrak{X}(t), $ it holds that
\begin{align}
& Y_*(t,x)=Y_{l*}(t,x) \leq 0,~ u(x-,t)=u_l(x-,t), \label{Xbn}\\
& Y^*(t,x)=Y_r^{*}(t,x) \geq 0,~ u(x+,t)=u_r(x+,t),\label{Xtp}
\end{align}
\end{Lem}
\begin{proof}[Proof of Lemma \ref{lem2}]
It follows from the definition of $\mathfrak{X}(t), $ that $$m_-(t,x)=m_+(t,x), $$
\begin{equation}\label{lre}
i.e.\quad \quad \min_{y\leq 0} F_l(t,x,y)=\min_{y\geq 0} F_r(t,x,y).
\end{equation}
This together with the definitions of $F(t,x,y) $ in \eqref{f} and $Y_*(t,x) $ in \eqref{yb}, implies that $$ Y_*(t,x) \leq 0.$$
Due to \eqref{fl}, \eqref{fr}, and $\overline{u}_r < \overline{u}_l, $ it's easy to verify that
$$ \min_{y\geq 0} F_l(t,x,y) \geq \min_{y\geq 0} F_r(t,x,y) $$
Then it holds that
\begin{equation}\label{inel}
\min_{y\geq 0} F_l(t,x,y) \geq \min_{y\leq 0} F_l(t,x,y)
\end{equation}
which implies that $$Y_{l*}(t,x) \leq 0. $$
From above, we have proved that the minimum values $$\min_{y \in \mathbb{R}} F(t,x,y) \quad \text{and} \quad \min_{y \in \mathbb{R}} F_l(t,x,y)$$ can be achieved in $\{y\leq 0\},$ where $ F(t,x,y) = F_l(t,x,y). $ So it follows that $ Y_*(t,x)=Y_{l*}(t,x), $ and by Proposition \ref{prophopf}, \eqref{Xbn} can be verified easily.\\
The proof of \eqref{Xtp} is similar.
\end{proof}
\begin{Lem}\label{lem3} The following properties hold:
\begin{enumerate}
\item[(i).] If $ x<X_-(t), $ then
\begin{align*}
& Y_*(t,x)=Y_{l*}(t,x), Y^*(t,x)=Y_l^*(t,x)<0,\\
\text{ and hence, } & u(x,t)=u_l(x,t).
\end{align*}
\item[(ii).] If $ x>X_+(t), $ then
\begin{align*}
& Y^*(t,x)=Y_r^*(t,x), Y_*(t,x)=Y_{r*}(t,x)>0,\\
\text{ and hence, } & u(x,t)=u_r(x,t).
\end{align*}
\item[(iii).] If $ X_-(t) <X_+(t), $ then $ \forall~ x \in (X_-(t), X_+(t)), $
\begin{align*}
& Y_{l*}(t,x)=Y_l^*(t,x) =Y_{r*}(t,x)=Y_r^*(t,x)=Y_*(t,x)=Y^*(t,x)=0,\\
\text{ and hence, }& u(x,t)=u_l (x,t)=u_r(x,t)=\frac{x}{t}.
\end{align*}
\end{enumerate}
\end{Lem}
\begin{proof}[Proof of Lemma \ref{lem3}] $ $\\
\begin{enumerate}
\item[(i).] It follows from Lemma \ref{lem2} that
$ Y_*(t,X_-(t))=Y_{l*}(t,X_-(t)) \leq 0. $
Thus by Lemma \ref{lem1}.(iv), if $x<X_-(t), $ then $ Y^*(t,x)\leq Y_*(t,X_-(t)) \leq 0. $ \\
If $ Y^*(t,x)=0, $ by the definition of $ Y^* $ in \eqref{yt}, it holds that
\begin{align*}
& \min_{y \in \mathbb{R} } F(t,x,y)= \min_{y \leq 0} F_l(t,x,y) = F_l(t,x,0), \\
\text{and~ }& \min_{y \in \mathbb{R}} F(t,x,y)<F_r(t,x,z), ~\forall~ z>0.
\end{align*}
Thus $$ \min_{y \leq 0} F_l(t,x,y) = F_l(t,x,0) = F_r(t,x,0) = \min_{y \geq 0} F_r(t,x,y),$$
which implies that $x \in \mathfrak{X}(t) $ defined by \eqref{set}. But $x<X_-(t) $ contradicts with the definition of $X_-(t)$ in \eqref{Xmm}, so it holds that
\begin{equation}\label{ineq1}
Y^*(t,x)<0.
\end{equation}
By \eqref{ineq1}, there must hold
$$ \min_{y \leq 0} F_l(t,x,y) < \min_{y \geq 0} F_r(t,x,y), $$
then by $$ \min_{y \geq 0} F_r(t,x,y) \leq \min_{y \geq 0} F_l(t,x,y), $$
one has
$$ \min_{y \leq 0} F_l(t,x,y) < \min_{y \geq 0} F_l(t,x,y), $$
so it holds that
\begin{equation}\label{ineq2}
Y_l^*(t,x)<0.
\end{equation}\\
By \eqref{ineq1} and \eqref{ineq2}, the minimum values of $ F(t,x,y) $ and $ F_l(t,x,y) $ can only be achieved in $ \{y<0\}, $ where $ F(t,x,y) = F_l(t,x,y), $ thus it holds that $$ Y_l^*(t,x)=Y^*(t,x),~ Y_{l*}(t,x)=Y_*(t,x). $$\\
\item[(ii).] The proof is similar to (i).\\
\item[(iii).] By Lemma \ref{lem2}, one has
\begin{align*}
& Y_*(t,X_+(t))=Y_{l*}(t,X_+(t)) \leq 0,\\
& Y^*(t,X_-(t))=Y_r^{*}(t,X_-(t)) \geq 0.
\end{align*}
Thus if $X_-(t)<x<X_+(t), $ then by Lemma \ref{lem1}.(iv),
\begin{equation*}
0\leq Y^*(t,X_-(t)) \leq Y_*(t,x) \leq Y^*(t,x) \leq Y_*(t,X_+(t)) \leq 0
\end{equation*}
which implies that
\begin{equation}\label{eq2}
Y_*(t,x)=Y^*(t,x)=0, \quad \forall~ x \in (X_-(t), X_+(t)).
\end{equation}
It implies that
$$\min_{y\in \mathbb{R}} F(t,x,y)=F(t,x,0), $$
and
\begin{equation*}
\begin{cases}
F_l(t,x,y)>F(t,x,0), \quad \forall~ y<0,\\
F_r(t,x,y)>F(t,x,0), \quad \forall~ y>0,
\end{cases}
\end{equation*}
which implies that
\begin{equation*}
\min_{y\leq 0} F_l(t,x,y) =F_l(t,x,0)=F_r(t,x,0)=\min_{y \geq 0} F_r(t,x,y) =\frac{x^2}{2t}.
\end{equation*}
Thus, $x \in \mathfrak{X}(t) $. \\
Then by Lemma \ref{lem2} again and by \eqref{eq2}, one has that $Y_{l*}(t,x) =Y_*(t,x)=0.$\\
Then one has
$$ 0=Y_{l*}(t,x) \leq Y_l^*(t,x) \leq Y_{l*}(t,X_+(t)) \leq 0, $$
so $ Y_l^*(t,x)=0.$\\
Similarly, one can also prove that $$ Y_r^*(t,x)=Y_{r*}(t,x)=0, \text{~for ~} X_-(t)<x<X_+(t). $$
Due to Proposition \ref{prophopf}, the rest of the Lemma follows easily, which completes the proof.
\end{enumerate}
\end{proof}
Using Lemma \ref{lem3}.(iii) and \eqref{lb2}, it is easy to prove that after a finite time $ T_S $ one has $ X_-(t) = X_+(t) $: if $X_-(t)<X_+(t)$, then $u_l(\cdot,t)$ and $u_r(\cdot,t)$ would coincide on $(X_-(t),X_+(t))$, which is impossible for large $t$ since \eqref{lb2} gives $u_l-u_r\geq \overline{u}_l-\overline{u}_r-\frac{2C}{t}>0$.
Next, we prove that when $t>T_S,$ the unique point $X(t) $ in $ \mathfrak{X}(t) $ is Lipschitz with respect to $ t. $\\
In fact, it is well-known that $u(x,t) \in Lip~((0,+\infty),L^1_{loc}). $ Thus there exists a positive constant $ C $ such that for any $t>\tau>T_S, $ it holds that
\begin{equation}\label{intlip}
\int_{X(t)}^{X(\tau)} |u(x,t)-u(x,\tau)| ~dx \leq C|t-\tau|,
\end{equation}
here one has assumed that $X(\tau)>X(t) $ without loss of generality.
When $X(t)<x<X(\tau), $ by Lemma \ref{lem3},
$$ u(x,t)=u_r(x,t),\quad u(x,\tau) = u_l(x,\tau). $$
Then \eqref{intlip} yields
$$ C|t-\tau| \geq \int_{X(t)}^{X(\tau)} (u_l(x,\tau)-u_r(x,t))~dx \geq \Big(\overline{u}_l-\overline{u}_r-\dfrac{2C}{T_S}\Big)~|X(t)-X(\tau)|, $$ i.e.
\begin{equation}\label{Xlip}
|X(t)-X(\tau)|\leq C(p,\overline{u}_l,\overline{u}_r)|t-\tau|, \quad \forall~ t>\tau>T_S.
\end{equation}
For $t>T_S$, $X(t)$ is Lipschitz in $t$ and $x=X(t)$ is a curve of discontinuity of the entropy solution $ u $, so $x=X(t)$ is actually a shock when $t>T_S. $ \\
Then by Lemma \ref{lem2} and Lemma \ref{lem3}.(i), (ii), one can finish the proof of \eqref{glue}. \end{proof}
\appendix \section{} Consider any $L^{\infty}$ periodic initial data $u_0$ with average $\overline{u}$ defined as in \eqref{avg}. Before stating the theorem on the optimal decay of the corresponding entropy solution $u$, we define two functions $g$ and $z(t)$ associated with $f$ and $\overline{u}$ as follows.
By changing variables if necessary, we can assume without loss of generality that $$f(\overline{u})= f'(\overline{u})=0.$$ Since $f$ is strictly convex, $f'$ is monotonically increasing, so one can define \begin{equation}\label{dg}
g(v)\triangleq \int_0^v [(f')^{-1}(s)-\overline{u}]~ds, \end{equation} here $(f')^{-1}$ represents the inverse function of $f'$. Therefore, it follows that $g\in C^2 $ and satisfies \begin{align*}
& g'(v)=(f')^{-1}(v)-\overline{u}, \quad g(0)=g'(0)=0,\\
& g''(v)=1/f''\Big((f')^{-1}(v)\Big)>0 \end{align*} which implies that \begin{equation}\label{ineq3}
g(0)-g(-\frac{p}{t})<0, \quad g(\frac{p}{t})-g(0)>0. \end{equation}
Moreover, for any fixed $t>0$, $g(\dfrac{z}{t})-g(\dfrac{z-p}{t})$ is strictly increasing with respect to $z$; hence by \eqref{ineq3}, there exists a unique point $z(t) \in (0,p)$ such that \begin{equation}\label{dz} g(\frac{z(t)}{t})=g(\frac{z(t)-p}{t}). \end{equation} By the implicit function theorem, $z(t) \in C^2((0,+\infty))$.
\begin{Thm} \label{thmper}
For any periodic initial data $u_0(x)\in L^{\infty}(\mathbb{R})$ with period $p$, the entropy solution $u(x,t)$ to \eqref{equ1} is also space-periodic of period $p$ for any $t>0$, and it satisfies
\begin{equation}\label{lb1}
(f')^{-1}\Big(\frac{z(t)-p}{t}\Big)\leq u(x,t) \leq (f')^{-1}\Big(\frac{z(t)}{t}\Big),~~\forall~ t>0, ~a.e.~x,
\end{equation}
where $\overline{u}$ is defined as \eqref{avg}, $(f')^{-1}$ is the inverse function of $f'$ and $z(t) \in (0,p)$ is defined as \eqref{dz}.
Moreover, by the definition of $z(t)$, \eqref{lb1} implies that
\begin{equation}\label{lb11}
\begin{aligned}
& z(t)=\frac{p}{2}+o(1), \quad \text{as~} t \rightarrow +\infty,\\
& |u(x,t)-\overline{u}|\leq \frac{p}{2f''(\overline{u})t}+o(\frac{1}{t}), \quad \text{as~} t \rightarrow +\infty.
\end{aligned}
\end{equation}
Furthermore, there exist periodic initial data such that for any $t$ larger than a constant $T_P>0$,
\begin{equation}\label{lb12}
\begin{aligned}
&\inf_{x\in \mathbb{R}} u(x,t) = (f')^{-1}(\frac{z(t)-p}{t}),\\
&\sup_{x\in \mathbb{R}} u(x,t) = (f')^{-1}(\frac{z(t)}{t}).
\end{aligned}
\end{equation} \end{Thm}
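For instance (a simple illustration), for the Burgers flux $f(u)=\frac{u^2}{2}$ the normalization $f(\overline{u})=f'(\overline{u})=0$ forces $\overline{u}=0$, and $(f')^{-1}(s)=s$, so that
\begin{equation*}
g(v)=\frac{v^2}{2},\qquad g\Big(\frac{z}{t}\Big)=g\Big(\frac{z-p}{t}\Big)\ \Longleftrightarrow\ z=\frac{p}{2},\qquad \text{hence}\quad z(t)\equiv \frac{p}{2},
\end{equation*}
and the bounds \eqref{lb1} read $|u(x,t)-\overline{u}|\leq \frac{p}{2t}$ for all $t>0$ and a.e. $x$, so that the asymptotics \eqref{lb11} hold with no error terms in this case.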
By a translation of the $x$-axis, one can also assume that the minimal point $\overline{x}$ in \eqref{minpoint} equals $0$, since equation \eqref{equ1} is invariant under such a translation. Thus by Proposition \ref{propdiper} and $f'(\overline{u})=0 $, the straight lines $x=Np$, $N \in \mathbb{Z}$, are all divides of the periodic entropy solution $u(x,t)$.
Then since $u(x,t)$ is space-periodic of period $p$ at any time, we can just focus on the region between two divides $x=0$ and $x=p$.
\begin{proof}[Proof of Theorem \ref{thmper}]
\textbf{Step1. } We first prove that the bounds for $u(x,t)$ in \eqref{lb1} can be attained, i.e. there exist initial data $u_0(x)$ with average $\overline{u}$ such that \eqref{lb12} holds.
Let $m_1, m_2>0$ be any constants and define $u_0(x)$ in one period $(0,p)$ as \begin{equation}\label{2constants}
u_0(x)=\begin{cases}
m_1+\overline{u} & \text{if}~0<x<\frac{m_2}{m_1+m_2}p,\\
-m_2+\overline{u} & \text{if}~ \frac{m_2}{m_1+m_2}p<x<p.
\end{cases}
\end{equation}
This function is piecewise constant with average $\overline{u}$. By the assumptions $f'(\overline{u})=0$ and $f''>0$, we have $f'(-m_2+\overline{u})<0<f'(m_1+\overline{u})$. Hence it is easy to verify that the forward generalized characteristic issuing from the point $(\frac{m_2}{m_1+m_2}p,0)$ of the $x$-axis is unique; denote it by $x=\zeta(t)$, which is a shock for short time.
\textbf{Claim}: After a finite time
$$T_P\triangleq \max\{\frac{p}{f'(m_1+\overline{u})},\frac{p}{-f'(-m_2+\overline{u})}\},$$
the minimal (resp.~ maximal) backward characteristic emanating from $(\zeta(t),t)$ intersects with $x$-axis at the origin (resp.~ $(p,0)$).
See Figure \ref{2rarefig}.
\begin{figure}\label{2rarefig}
\end{figure}
Indeed, for fixed $\overline{t}>T_P$, emanating from $(\zeta(\overline{t}), \overline{t}),$ the minimal backward characteristic $\xi_-(t)$ of $u$
$$\xi_-(t)\triangleq f'\Big(u(\zeta(\overline{t})-, \overline{t})\Big)(t-\overline{t})+\zeta(\overline{t}) $$
cannot intersect the divide $x=0$ at $t>0$; moreover, $ \xi_-(t) \leq \zeta(t) $ since $ \xi_-(t) $ is the minimal generalized characteristic. Thus $0 \leq \xi_-(0)\leq \zeta(0)=\dfrac{m_2 p}{m_1+m_2} $, and since $\zeta(\overline{t})$ lies between the divides $x=0$ and $x=p$, also $f'(u(\zeta(\overline{t})-, \overline{t}))=\dfrac{\zeta(\overline{t})-\xi_-(0)}{\overline{t}}\leq p/\overline{t}. $ Then by \eqref{genpro}, it holds that
\begin{equation}\label{exine}
u_0(\xi_-(0)-)\leq u(\zeta(\overline{t})-,\overline{t}) \leq (f')^{-1}(\frac{p}{\overline{t}})<(f')^{-1}(\frac{p}{T_P})\leq m_1+\overline{u}.
\end{equation}
If $0<\xi_-(0)\leq \dfrac{m_2 p}{m_1+m_2}$, then by the definition of $u_0$ in \eqref{2constants}, $u_0(\xi_-(0)-)=m_1+\overline{u}$, which contradicts \eqref{exine}. So $\xi_-(0)=0 $ must hold.
The argument for the maximal backward characteristic is similar, which proves the Claim.
For $t>T_P$, similarly to the proof of \eqref{exine}, one can also verify that
\begin{equation}\label{2rare}
\begin{aligned}
& \text{~if~ } 0<x<\zeta(t),\quad u(x-,t)<m_1+\overline{u}\\
& \text{~if~ } \zeta(t)<x<p,\quad u(x+,t)>-m_2+\overline{u}.
\end{aligned}
\end{equation}
Thus \eqref{2rare} implies that for $t>T_P$: if $0<x<\zeta(t)$, the backward generalized characteristic emanating from $(x,t)$ can only intersect the $x$-axis at the origin, which means that the backward generalized characteristic (a genuine characteristic) is unique, so $u(x,t)=(f')^{-1}(\frac{x}{t})$; respectively, if $\zeta(t)<x<p$, the characteristic can only intersect the $x$-axis at $(p,0)$, so $u(x,t)=(f')^{-1}(\frac{x-p}{t}) $. See Figure \ref{2rarefig}.
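In other words (this merely restates the two cases just obtained), for $t>T_P$ the solution in one period is
\begin{equation*}
u(x,t)=
\begin{cases}
(f')^{-1}\big(\frac{x}{t}\big), & 0<x<\zeta(t),\\[2pt]
(f')^{-1}\big(\frac{x-p}{t}\big), & \zeta(t)<x<p,
\end{cases}
\end{equation*}
i.e. two rarefaction fans centered at $(0,0)$ and $(p,0)$, joined by the shock $x=\zeta(t)$.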
Then it follows
\begin{equation*}
\begin{aligned}
0&=\int_0^p [u(y,t)-\overline{u}] \ dy\\
&=\int_0^{\zeta(t)} [(f')^{-1}(\frac{y}{t})-\overline{u}] \ dy+\int_{\zeta(t)}^p [(f')^{-1}(\frac{y-p}{t})-\overline{u}] \ dy\\
&=\Big[g(\frac{\zeta(t)}{t})-g(\frac{\zeta(t)-p}{t})\Big]t.
\end{aligned}
\end{equation*}
Therefore, for $t>T_P$, $\zeta(t)=z(t) \in (0,p)$, with $ u(z(t)-,t)=(f')^{-1}(\frac{z(t)}{t})$ and $ u(z(t)+,t)=(f')^{-1}(\frac{z(t)-p}{t})$, which, combined with \eqref{lb1}, gives \eqref{lb12}.
\textbf{Step2. } We prove \eqref{lb1} by a contradiction argument.
\begin{enumerate}
\item[1)] Suppose that there exist $x\in(0,p)$ and $t>0$ such that $u(x-,t)>(f')^{-1}(\frac{z(t)}{t})$ and the minimal backward characteristic emanating from $(x,t)$ is $\xi(\tau), \tau\in[0,t]$.
Denote $\lambda\triangleq \xi(0)$ and $\mu\triangleq x-\lambda-z(t)$, and thus $u(x-,t)=(f')^{-1}(\frac{x-\lambda}{t})$. $u(x-,t)>(f')^{-1}(\frac{z(t)}{t})$ implies $\mu>0$, while $x\in(0,p)$ implies $0\leq \lambda < p-z(t)-\mu$.
Note that when $0<y<x$, the maximal backward characteristics emanating from $(y,t)$ cannot cross $\xi(\tau)$, thus $u(y+,t)\geq (f')^{-1}(\frac{y-\lambda}{t})$;
when $x<y<p$, the maximal backward characteristics emanating from $(y,t)$ cannot cross $x=p$, thus $u(y+,t) \geq (f')^{-1}(\frac{y-p}{t})$.
Therefore, one has
\begin{equation}\label{0geq}
\begin{aligned}
0&=\int_0^x [u(y,t)-\overline{u}] \ dy+\int_x^p [u(y,t)-\overline{u}] \ dy\\
&\geq\int_0^x [(f')^{-1}(\frac{y-\lambda}{t})-\overline{u}] \ dy+\int_x^p [(f')^{-1}(\frac{y-p}{t})-\overline{u}] \ dy\\
&=\Big[g(\frac{x-\lambda}{t})-g(-\frac{\lambda}{t})-g(\frac{x-p}{t})\Big]t\\
&=\Big[g(\frac{z(t)+\mu}{t})-g(-\frac{\lambda}{t})-g(\frac{z(t)+\mu-p}{t}+\frac{\lambda}{t})\Big]t.
\end{aligned}
\end{equation}
As $g$ is convex and $p-z(t)-\mu>\lambda \geq 0$, one has
\begin{equation}\label{2ineq}
\begin{aligned}
g(-\frac{\lambda}{t})
&\leq \frac{\lambda}{p-z(t)-\mu}~g(\frac{z(t)+\mu-p}{t})+\Big(1-\frac{\lambda}{p-z(t)-\mu}\Big)~g(0)\\
&=\frac{\lambda}{p-z(t)-\mu}~g(\frac{z(t)+\mu-p}{t})\\
g(\frac{z(t)+\mu-p}{t}+\frac{\lambda}{t})
&\leq \Big(1-\frac{\lambda}{p-z(t)-\mu}\Big)~g(\frac{z(t)+\mu-p}{t})+\frac{\lambda}{p-z(t)-\mu}~g(0)\\
&=\Big(1-\frac{\lambda}{p-z(t)-\mu}\Big)~g(\frac{z(t)+\mu-p}{t})
\end{aligned}
\end{equation}
Therefore, substituting \eqref{2ineq} into \eqref{0geq} and noting that the two right-hand sides in \eqref{2ineq} add up to $g(\frac{z(t)+\mu-p}{t})$, one obtains
\begin{equation*}
g(\frac{z(t)+\mu}{t})-g(\frac{z(t)+\mu-p}{t}) \leq 0.
\end{equation*}
But since $g(\frac{z}{t})-g(\frac{z-p}{t})$ is strictly increasing with respect to $z$ and vanishes at $z=z(t)$ by the definition \eqref{dz}, this contradicts $\mu>0$. \\
Hence for any $ x \in (0,p),~ t>0, $ it holds that
$$ u(x-,t) \leq (f')^{-1}(\frac{z(t)}{t}). $$
\item[2)] Suppose that there exist $x\in(0,p)$ and $t>0$ such that $u(x+,t)<(f')^{-1}(\frac{z(t)-p}{t})$ and the maximal backward characteristic emanating from $(x,t)$ is $\xi(\tau), \tau\in[0,t]$.
Denote $\lambda\triangleq \xi(0)$ and $-\mu\triangleq x-\lambda-z(t)+p$, and thus $u(x+,t)=(f')^{-1}(\frac{x-\lambda}{t})$. $u(x+,t)<(f')^{-1}(\frac{z(t)-p}{t})$ implies $\mu>0$, while $x\in(0,p)$ implies $0\leq p- \lambda < z(t)-\mu$.
Note that when $0<y<x$, the maximal backward characteristics emanating from $(y,t)$ cannot cross $x=0$, thus $u(y+,t)\leq (f')^{-1}(\frac{y}{t})$;
when $x<y<p$, the maximal backward characteristics emanating from $(y,t)$ cannot cross $\xi(\tau)$, thus $u(y+,t)\leq (f')^{-1}(\frac{y-\lambda}{t})$.
Therefore, in a similar way as in 1), one can obtain
\begin{equation*}
\begin{aligned}
0&=\int_0^x [u(y,t)-\overline{u}] \ dy+\int_x^p [u(y,t)-\overline{u}] \ dy\\
&\leq\int_0^x [(f')^{-1}(\frac{y}{t})-\overline{u}] \ dy+\int_x^p [(f')^{-1}(\frac{y-\lambda}{t})-\overline{u}] \ dy\\
&=\Big[g(\frac{x}{t})+g(\frac{p-\lambda}{t})-g(\frac{x-\lambda}{t})\Big]t\\
&=\Big[g(\frac{z(t)-\mu}{t}-\frac{p-\lambda}{t})+g(\frac{p-\lambda}{t})-g(\frac{x-\lambda}{t})\Big]t\\
&\leq \Big[g(\frac{z(t)-\mu}{t})+g(0)-g(\frac{x-\lambda}{t})\Big]t \quad \quad \quad (\text{similar to proof of \eqref{2ineq}}) \\
&= \Big[g(\frac{z(t)-\mu}{t})-g(\frac{z(t)-p-\mu}{t})\Big]t <0, \quad \quad \quad (\mu>0)
\end{aligned}
\end{equation*}
which is also a contradiction.\\
Hence for any $ x \in (0,p),~ t>0, $ it holds that
$$ u(x+,t) \geq (f')^{-1}(\frac{z(t)-p}{t}). $$
\end{enumerate}
Combining 1), 2) and using the entropy condition $ u(x-,t) \geq u(x+,t), $ one can prove \eqref{lb1}.
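Explicitly, 1), 2) and the entropy condition give, for every $x\in(0,p)$ and $t>0$,
\begin{equation*}
(f')^{-1}\Big(\frac{z(t)-p}{t}\Big)\ \leq\ u(x+,t)\ \leq\ u(x-,t)\ \leq\ (f')^{-1}\Big(\frac{z(t)}{t}\Big),
\end{equation*}
and since $u(x,t)$ coincides with its one-sided limits for a.e. $x$ and is space-periodic of period $p$, \eqref{lb1} follows.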
\textbf{Step3. } By \eqref{dz} and Taylor expansion, one has
\begin{align*}
& \frac{1}{2}g''(0)\Big[\frac{z(t)}{t}\Big]^2 = \frac{1}{2}g''(0)\Big[\frac{z(t)-p}{t}\Big]^2+o(\frac{1}{t^2}) \qquad \text{as}\ t\rightarrow +\infty,\\
\Rightarrow~~ & z(t)=\frac{p}{2}+o(1) \qquad \text{as}\ t\rightarrow +\infty.
\end{align*}
Thus
\begin{equation*}
\begin{aligned}
(f')^{-1}(\frac{z(t)}{t})&=(f')^{-1}(0)+\frac{1}{f''\Big((f')^{-1}(0)\Big)}\frac{z(t)}{t}+o(\frac{1}{t})\\
&=\overline{u}+\frac{p}{2f''(\overline{u})t}+o(\frac{1}{t})
\qquad \text{as}\ t\rightarrow +\infty.
\end{aligned}
\end{equation*}
The estimate for $(f')^{-1}(\frac{z(t)-p}{t})$ is similar. So \eqref{lb11} is proved.
Combining Steps 1--3, one can finish the proof of Theorem \ref{thmper}. \end{proof}
\end{document} | arXiv |
\begin{definition}[Definition:Singular Cardinal]
Let $\kappa$ be an infinite cardinal.
Then $\kappa$ is a '''singular cardinal''' {{iff}} $\operatorname{cf} \left({\kappa}\right) < \kappa$.
That is, the cofinality of $\kappa$ is less than itself.
\end{definition} | ProofWiki |
\begin{document}
\begin{abstract} This article deals with nonwandering (e.g.\ area-preserving) homeomorphisms of the torus $\T^2$ which are homotopic to the identity and strictly toral, in the sense that they exhibit dynamical properties that are not present in homeomorphisms of the annulus or the plane. This includes all homeomorphisms which have a rotation set with nonempty interior. We define two types of points: inessential and essential. The set of inessential points $\operatorname{Ine}(f)$ is shown to be a disjoint union of periodic topological disks (``elliptic islands''), while the set of essential points $\operatorname{Ess}(f)$ is an essential continuum, with typically rich dynamics (the ``chaotic region''). This generalizes and improves a similar description by J\"ager. The key result is boundedness of these ``elliptic islands'', which allows, among other things, to obtain sharp (uniform) bounds of the diffusion rates. We also show that the dynamics in $\operatorname{Ess}(f)$ is as rich as in $\T^2$ from the rotational viewpoint, and we obtain results relating the existence of large invariant topological disks to the abundance of fixed points. \end{abstract}
\title{Strictly toral dynamics}
\setcounter{tocdepth}{1} \tableofcontents \section*{Introduction} The purpose of this article is to study homeomorphisms of the torus $\T^2$ homotopic to the identity which exhibit dynamical properties that are intrinsic to the torus, in the sense that they cannot be present in a homeomorphism of the annulus or the plane. We call such homeomorphisms \emph{strictly toral} (a precise definition is given after the statement of Theorem \ref{th:essine}), and they include the homeomorphisms which have a rotation set with nonempty interior (in the sense of Misiurewicz and Ziemian \cite{m-z}). We will give a description of the dynamics of such maps in terms of ``elliptic islands'' and a ``chaotic region'' which generalizes the one given by J\"ager \cite{jager-elliptic}, and most importantly, we prove the boundedness of elliptic islands. This allows to obtain sharp bounds of the diffusion rates in the chaotic region and has a number of applications.
To be precise with our terminology, let us make a definition. Let $\pi\colon \R^2\to \T^2= \R^2/\Z^2$ be the universal covering. The homeomorphism $f\colon \T^2\to \T^2$ is \emph{annular} if there is some lift $\widehat{f}\colon \R^2\to \R^2$ of $f$ such that the deviations in the direction of some nonzero $v\in \Z^2$ are uniformly bounded: $$-M\leq \langle \widehat{f}^n(x)-x, v\rangle \leq M\quad \text{for all $x\in \R^2$ and $n\in \Z$}.$$
If $f$ is annular, it is easy to see that there is a finite covering of $\T^2$ such that the lift of $f$ to this covering has an invariant annular set (see for example \cite[Remark 3.10]{jager-linearization}), so that in some sense the dynamics of $f$ in a finite covering is embedded in an annulus. Therefore, in order to be strictly toral, a map $f$ must not be annular, and it seems reasonable to require that no positive power of $f$ be annular as well. However, this is not sufficient to qualify as strictly toral: in \cite{kt-irrotational}, an example is given of a homeomorphism $f$ isotopic to the identity such that no power of $f$ is annular, but $\fix(f)$ is \emph{fully essential}. This means that $\fix(f)$ contains the complement of some disjoint union of open topological disks (in the case of our example, just one disk). Such dynamics does not deserve to be called strictly toral: after removing the fixed points, what remains is dynamics that takes place on the plane. We mention however that a lift to $\R^2$ of such example has a trivial rotation set $\{(0,0)\}$ but has unbounded orbits in all directions.
The boundedness properties of the dynamics of lifts to the universal covering has been the subject of many recent works \cite{jager-bmm, jager-elliptic, jager-linearization, kk-spreading, davalos}, especially in the context of pseudo-rotations and in the area-preserving setting, or under aperiodicity conditions. In particular, using the notion of annular homeomorphism introduced here, saying that some power of $f$ is annular is equivalent to saying that $f$ is \emph{rationally bounded} in the sense of \cite{jager-linearization}.
We also need to introduce the notion of \emph{essential} points, which plays a central role in this article. A point $x\in \T^2$ is essential for $f$ if the orbit of every neighborhood of $x$ is an essential subset of $\T^2$ (see \S\ref{sec:essential-set}). Roughly speaking, this says that $x$ exhibits a weak form of ``rotational recurrence''. The set of essential points of $f$ is denoted by $\operatorname{Ess}(f)$, and the set of inessential points is $\operatorname{Ine}(f)=\T^2\setminus \operatorname{Ess}(f)$. Both sets are invariant, and $\operatorname{Ine}(f)$ is open. We restrict our attention to nonwandering homeomorphisms (this includes, for instance, the area-preserving homeomorphisms). Recall that $f$ is nonwandering if any open set intersects some forward iterate of itself. In that case, it is easy to see that inessential points are precisely the points that belong to some periodic open topological disk in $\T^2$ (see \S\ref{sec:essential}). Note that this does not necessarily mean that $\operatorname{Ine}(f)$ is a disjoint union of periodic topological disks, since there may be overlapping (for instance, $\operatorname{Ine}(f)$ could be the whole torus, as is the case with the identity map). Our main theorem implies that in the strictly toral case, $\operatorname{Ine}(f)$ is indeed an inessential set, in fact a union of periodic ``bounded'' disks.
In order to state our first theorem, let us give some additional definitions. If $U\subset \T^2$ is an open topological disk, then $\mathcal{D}(U)$ denotes the diameter of any connected component of $\pi^{-1}(U)$, and if $\mathcal{D}(U)<\infty$ we say that $U$ is bounded (see \S\ref{sec:essential-set} for more details). We say that $f$ is irrotational if some lift of $f$ to $\R^2$ has a rotation set equal to $\{(0,0)\}$.
\begin{theoremain}\label{th:essine} Let $f\colon \T^2\to \T^2$ be a nonwandering homeomorphism homotopic to the identity. Then one of the following holds: \begin{itemize} \item[(1)] There is $k\in \N$ such that $\fix(f^k)$ is fully essential, and $f^k$ is irrotational; \item[(2)] There is $k\in \N$ such that $f^k$ is annular; or \item[(3)] $\operatorname{Ess}(f)$ is nonempty, connected, and fully essential, and $\operatorname{Ine}(f)$ is the union of a family $\mathcal{U}$ of pairwise disjoint open disks such that for each $U\in \mathcal{U}$, $\mathcal{D}(U)$ is bounded by a constant that depends only on the period of $U$. \end{itemize} \end{theoremain}
Note that case (1) is very restrictive, as it means that the complement of $\fix(f^k)$ is inessential. One can think of this as a \emph{planar} case; i.e.\ the dynamics of $f^k$ can be seen entirely by looking at a homeomorphism of the plane (since $\T^2\setminus \fix(f)$ can be embedded in the plane). We emphasize that case (1) does not always imply case (2), as shown by the example in \cite{kt-irrotational}. Henceforth, by \emph{strictly toral} nonwandering homeomorphism we will mean one in which neither case (1) or (2) above holds.
Thus, if $f$ is nonwandering and strictly toral, there is a decomposition of the dynamics into a union $\operatorname{Ine}(f)$ (possibly empty) of periodic bounded discs which can be regarded as ``elliptic islands'', and a fully essential set $\operatorname{Ess}(f)$ which carries the ``rotational'' part of the dynamics. Figure \ref{fig:zaslavsky} shows an example where both sets are nonempty.
It is worth mentioning that the nonwandering hypothesis in Theorem \ref{th:essine} (and also in Theorem \ref{th:bdfix} below) is essential. Indeed, if $f$ is a homeomorphism of $\T^2$ obtained as the time-one map of the suspension flow of a Denjoy example in the circle, then $f^k$ is non-annular and has no fixed points, for any $k\in \N$, but there is an unbounded invariant disk (corresponding to the suspension of the wandering interval).
The main difficulty to prove Theorem \ref{th:essine} is to show that if $f$ is strictly toral, there are no unbounded periodic disks. This is possible thanks to the following theorem, which is a key result of this article.
\begin{theoremain}\label{th:bdfix} If $f\colon \T^2\to \T^2$ is a nonwandering non-annular homeomorphism homotopic to the identity then one and only one of the following properties hold: \begin{itemize} \item[(1)] There exists a constant $M$ such that each $f$-invariant open topological disk $U$ satisfies $\mathcal{D}(U)<M$; or \item[(2)] $\fix(f)$ is fully essential and $f$ is irrotational. \end{itemize} \end{theoremain}
\subsection*{Applications}
If $f\colon \T^2\to \T^2$ is a homeomorphism homotopic to the identity and $\widehat{f}\colon \R^2\to \R^2$ is a lift of $f$, then given an open set $U\subset \T^2$ we may define (as in \cite{jager-elliptic}) the local rotation set on $U$ as the set $\rho(\widehat{f}, U)\subset \R^2$ consisting of all possible limits of sequences of the form $(\widehat{f}^{n_i}(z_i)-z_i)/n_i$, where $\pi(z_i)\in U$ and $n_i\to \infty$ as $i\to \infty$.
Observe that in particular $\rho(\widehat{f})=\rho(\widehat{f},\T^2)$ is the classic rotation set of $\widehat{f}$ as defined in \cite{m-z}. If $\rho(\widehat{f})$ has nonempty interior, this provides a great deal of global information about $f$; for instance there is positive entropy \cite{llibre-mackay}, abundance of periodic orbits \cite{franks-reali} and ergodic measures with all kinds of rotation vectors \cite{m-z2}.
Assume that $\rho(\widehat{f})$ has nonempty interior (and therefore $f$ is strictly toral). We may define the \emph{diffusion rate} $\eta(\widehat{f},U)$ on an open disk $U$ as the inner radius of the convex hull of $\rho(\widehat{f},U)$ (which does not depend on the lift). Roughly speaking, this measures the minimum linear rate of growth of $U$ in all homological directions. In \cite{jager-elliptic}, a set $\mathcal{C}(f)$ is defined consisting of all points $x\in \T^2$ such that every neighborhood of $x$ has positive diffusion rate. This implies that $\mathcal{C}(f)$ has (external) sensitive dependence on initial condition, which is why it is regarded as the ``chaotic'' region. It is also shown in \cite{jager-linearization} that every point of the set $\mathcal{E}(f)=\T^2\setminus\mathcal{C}(f)$ belongs to some periodic topological disk $U$, and $\rho(\widehat{f},U)$ is a single point (however, we will not use these facts, as they are a consequence of Theorem \ref{th:chaotic}).
In smooth area-preserving systems, KAM theory implies that periodic disks frequently appear, even in a persistent way, near elliptic periodic points \cite{moser-kam}. Maximal periodic disks are hence commonly referred to as \emph{elliptic islands}; these are the components of $\mathcal{E}(f)$. There are well documented examples in the physics literature which exhibit a pattern of elliptic islands surrounded by a complementary ``chaotic'' region with rich dynamical properties, often called the \emph{instability zone}. The most well known example is probably the Chirikov-Taylor standard map \cite{x_1}. Another example, which falls under our hypotheses, is the \emph{Zaslavsky web map} given by $f(x,y)=M^4(x,y)$, where $M(x,y) = (y, -x-K\sin(2\pi y-c))$ (see Figure \ref{fig:zaslavsky} for its phase portrait when $K=0.19,\, c=1.69$). For such map, properties of the instability region like width and rates of diffusion are better understood (\cite{x_3, pekarsky}), but at present no general theory is known. Similar dynamics also appear in other models of physical relevance (see \cite{harper}).
\begin{figure}
\caption{Phase portrait for the Zaslavsky web map}
\label{fig:zaslavsky}
\end{figure}
The next result provides a more precise description of the chaotic region in the general setting: it says that $\mathcal{C}(f)$ concentrates the interesting (from the rotational viewpoint) dynamics, it gives topological information about $\mathcal{C}(f)$ (namely, it is a fully essential continuum), and it shows that there is \emph{uniform diffusion rate} in the chaotic region. The latter means that is there is a constant $\eta_0>0$ depending only on $f$ such that whenever $x\in \mathcal{C}(f)$ and $U$ is a neighborhood of $x$, one has $\eta(f,U)=\eta_0$.
Let us say that an invariant set $\Lambda\subset \T^2$ is \emph{externally transitive} if for any pair of open sets $U$, $V$ intersecting $\Lambda$ there is $n\in \Z$ such that $f^n(U)\cap V\neq \emptyset$, and $\Lambda$ is \emph{externally sensitive on initial conditions} if there is $c>0$ such that for any $x\in \Lambda$ and any neighborhood $U$ of $x$ there is $n\in\N$ such that $\diam(f^n(U))>c$.
\begin{theoremain}\label{th:chaotic} Let $f\colon \T^2\to \T^2$ be a nonwandering homeomorphism homotopic to the identity and $\widehat{f}$ a lift of $f$ to $\R^2$. Suppose that $\rho(\widehat{f})$ has nonempty interior. Then, \begin{enumerate} \item[(1)] $\mathcal{C}(f)=\operatorname{Ess}(f)$, which is a fully essential continuum and $\mathcal{E}(f)=\operatorname{Ine}(f)$ is a disjoint union of periodic bounded disks; \item[(2)] $\operatorname{Ess}(f)$ is externally transitive and sensitive on initial conditions; \item[(3)] For any $x\in \operatorname{Ess}(f)$ and any neighborhood $U$ of $x$, $\conv(\rho(\widehat{f},U))=\rho(\widehat{f})$; \end{enumerate} \end{theoremain}
Another result that reflects how $\operatorname{Ess}(f)$ carries rich dynamics is related to the realization of rotation vectors by measures or periodic orbits (see \S\ref{sec:rotation} for definitions). It is known that every extremal or interior point $v$ of $\rho(\widehat{f})$ is realized by an ergodic measure \cite{m-z,m-z2}, and if $v\in \Q^2$ then it is realized by a periodic point \cite{franks-reali, franks-reali2, m-z} (see also \cite{franks-reali3}). The next theorem guarantees that one can obtain this type of realization \emph{in the set of essential points}.
\begin{theoremain}\label{th:reali} Suppose $f\colon \T^2\to \T^2$ is nonwandering, homotopic to the identity and strictly toral, and let $\widehat{f}$ be a lift of $f$ to $\R^2$. Then, \begin{enumerate} \item[(1)] Any rational vector of $\rho(\widehat{f})$ that is realized by some periodic point is also realized by a periodic point in $\operatorname{Ess}(f)$. \item[(2)] If $\mu$ is an ergodic Borel probability measure with associated rotation vector $\rho_\mu(\widehat{f})\notin \Q^2$, then $\mu$ is supported on $\operatorname{Ess}(f)$. \end{enumerate} \end{theoremain}
Finally, a simple application of Theorem \ref{th:essine} gives a characterization of the obstruction to transitivity for strictly toral maps:
\begin{corollarymain}\label{coro:trans} Let $f\colon \T^2\to \T^2$ be a nonwandering homeomorphism homotopic to the identity, and assume that $f$ is strictly toral. Then $f$ is transitive if and only if there are no bounded periodic disks. \end{corollarymain}
\subsection*{Questions}
If $\rho(\widehat{f})$ has nonempty interior or is a single totally irrational vector, it is easy to conclude that $f$ is strictly toral. If $\rho(\widehat{f})$ is a single vector that is neither rational nor totally irrational (for example $\{(a,0)\}$ with $a$ irrational) one can find examples which are annular (e.g. a rigid rotation) and others which are strictly toral (e.g. Furstenberg's example \cite{furstenberg}). We conjecture that strictly toral behavior is not possible when the rotation set is a single rational vector.
\begin{questionmain} Can a rational pseudo-rotation be strictly toral? \end{questionmain}
In \cite{kt-pseudo}, a partial result is obtained, answering the above question negatively under some mild additional hypotheses.
Finally, we do not know if the bound on the size of inessential periodic disks in Theorem \ref{th:essine} is uniform (independent of the period):
\begin{questionmain} Is there a strictly toral nonwandering homeomorphism $f$ homotopic to the identity such that $\sup\{\mathcal{D}(U): U \text{ is a connected component of } \operatorname{Ine}(f) \} = \infty$? \end{questionmain}
Let us say a few words about the proofs. In \S\ref{sec:prelim} we introduce most of the notation and terminology, and we prove some basic results. Most of the burden of this article lies in the proof of Theorem \ref{th:bdfix}. For ease of exposition, we prove Theorems \ref{th:essine}, \ref{th:chaotic}, \ref{th:reali} and Corollary \ref{coro:trans} assuming Theorem \ref{th:bdfix}. This is done in \S\ref{sec:essential}. The proof of Theorem \ref{th:bdfix} relies strongly on the equivariant version of Brouwer's plane translation theorem due to P. Le Calvez \cite{lecalvez-equivariant} and a recent result of O. Jaulent \cite{jaulent} which allows one to apply the theorem of Le Calvez in more general contexts. We state these results in \S\ref{sec:brouwer}. Many ideas present here were used in \cite{lecalvez-equivariant} and \cite{lecalvez-hamiltonian} to study Hamiltonian homeomorphisms; in particular what we call \emph{gradient-like Brouwer foliations} (see \S\ref{sec:gradient}). The novelty is that we do not assume that the maps are Hamiltonian, or even symplectic; and we use the Brouwer foliations in combination with the non-annularity to bound invariant open sets. A key concept that allows us to do this is a linking number associated to a simply connected invariant set and a fixed point, which we introduce in \S\ref{sec:linking} together with some applications regarding open invariant sets of maps which have a gradient-like Brouwer foliation. We think these results may be useful by themselves in other contexts. To use these results in the proof of Theorem \ref{th:bdfix}, which is given in \S\ref{sec:bdfix}, we first assume that there exist arbitrarily large open connected inessential sets in $\T^2\setminus \fix(f)$ and that $\fix(f)$ is not fully essential. These two facts allow us to obtain a gradient-like Brouwer foliation, and then we use the results from \S\ref{sec:linking} and some geometric arguments to arrive at a contradiction.
\section{Preliminaries}\label{sec:prelim}
\subsection{Basic notation}
As usual we identify the torus $\T^2$ with the quotient $\R^2/\Z^2$, with quotient projection $\pi\colon \R^2\to \T^2$, which is the universal covering of $\T^2$.
If $f$ is a homeomorphism of $\T^2$, a lift of $f$ to the universal covering will usually be denoted by $\widehat{f}\colon \R^2\to \R^2$. If $f$ is isotopic to the identity, then $\widehat{f}$ commutes with the translations $z\mapsto z+v$, $v\in \Z^2$, and so $\widehat{f}-\mathrm{Id}$ is uniformly bounded.
We write $\Z^2_* = \Z^2\setminus \{(0,0)\}$. Given $u,v\in \R^2$, their inner product is denoted by $\langle u, v \rangle$, and $\proj_v\colon \R^2\to \R$ denotes the projection $\proj_v(u) = \langle u, \frac{v}{\norm{v}} \rangle$. If $v=(a,b)$, we use the notation $v^\perp = (-b,a)$.
By a \emph{topological disk} we mean an open set homeomorphic to a disk, and similarly a \emph{topological annulus} is an open set homeomorphic to an annulus.
An \emph{arc} on a surface $S$ is a continuous map $\gamma \colon [0,1]\to S$. If the endpoints $\gamma(0)$ and $\gamma(1)$ coincide, we say that $\gamma$ is a loop. We denote by $[\gamma]$ the set $\gamma([0,1])$, and if $\alpha, \beta$ are two arcs such that $\alpha(1)=\beta(0)$, we write $\alpha*\beta$ for the concatenation, i.e.\ the arc defined by $(\alpha*\beta)(t) = \alpha(2t)$ if $t\in [0,1/2]$ and $\beta(2t-1)$ if $t\in (1/2,1]$. We also use the notation $(\gamma(t))_{t\in [0,1]}$ to describe the arc $\gamma$.
If $\gamma\subset \T^2$ is a loop, we denote by $\gamma^*\in H_1(\T^2,\Z)$ its homology class, which in the case of $\T^2$ coincides with its free homotopy class. The first homology group $H_1(\T^2, \Z)$ can be identified with $\Z^2$ by the isomorphism that maps $v\in \Z^2$ to $\gamma^*$, where $\gamma$ is the loop obtained by projecting to $\T^2$ any arc in $\R^2$ joining $(0,0)$ to $v$.
If $f\colon \T^2\to \T^2$ is a homeomorphism isotopic\footnote{By a theorem of Epstein \cite{epstein}, this is equivalent to saying that $f$ is homotopic to the identity} to the identity, we denote by $\mathcal{I}=(f_t)_{t\in [0,1]}$ an isotopy from $f_0 = \mathrm{Id}_{\T^2}$ to $f_1=f$ (i.e.\ $t\mapsto f_t$ is an arc in the space of self-homeomorphisms of $\T^2$ joining the identity to $f$). Any such isotopy can be lifted to an isotopy $\widehat{\mathcal{I}}=(\widehat{f}_t)_{t\in [0,1]}$ where $\widehat{f}_t\colon \R^2\to \R^2$ is a lift of $f_t$ for each $t\in [0,1]$ and $\widehat{f}_0=\mathrm{Id}_{\R^2}$.
\subsection{Rotation set, irrotational homeomorphisms} \label{sec:rotation} The rotation set of a lift $\widehat{f}$ of a homeomorphism $f\colon \T^2\to \T^2$ homotopic to the identity is denoted by $\rho(\widehat{f})$ and defined as the set of all limit points of sequences of the form $$\left(\frac{\widehat{f}^{n_k}(x_k)-x_k}{n_k}\right)_{k\in \N},$$ where $x_k\in \R^2$, $n_k\in \N$ and $n_k\to \infty$ as $k\to \infty$.
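As a simple illustration of the definition (not needed for the arguments below): if $\widehat{f}$ is the translation $\widehat{f}(\widehat{z})=\widehat{z}+v$ for some fixed $v\in \R^2$, which is a lift of the rigid rotation $z\mapsto z+v \pmod{\Z^2}$, then $\widehat{f}^{n}(\widehat{z})-\widehat{z}=nv$ for every $\widehat{z}$ and $n$, so every quotient appearing in the definition equals $v$ and therefore $\rho(\widehat{f})=\{v\}$.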
For an $f$-invariant Borel probability measure $\mu$, the rotation vector associated to $\mu$ is defined as $\rho_\mu(\widehat{f}) = \int_{\T^2} \phi d\mu$, where $\phi\colon \T^2\to \R^2$ is the map defined by $\phi(x) = \widehat{f}(\widehat{x})-\widehat{x}$ for some $\widehat{x}\in \pi^{-1}(x)$ (which is independent of the choice). The next proposition collects some basic results about rotation vectors. The results are contained in \cite{m-z}.
\begin{proposition}\label{pro:rotation-set} The following properties hold: \begin{enumerate} \item $\rho(\widehat{f})$ is compact and convex; \item $\rho(\widehat{f}^n+v) = n\rho(\widehat{f})+v$ for any $n\in \Z$ and $v\in \Z^2$, where $\widehat{f}^n+v$ denotes the lift $x\mapsto \widehat{f}^n(x)+v$ of $f^n$. \item If $\mu$ is an $f$-ergodic Borel probability measure such that $\rho_\mu(\widehat{f})=w$, then for $\mu$-almost every point $x\in \T^2$ and any $\widehat{x}\in \pi^{-1}(x)$, $$\lim_{n\to \infty} \frac{\widehat{f}^n(\widehat{x})-\widehat{x}}{n}=w.$$ \item If $w\in \rho(\widehat{f})$ is an extremal point (in the sense of convex sets) then there is an $f$-ergodic Borel probability measure $\mu$ on $\T^2$ such that $\rho_\mu(\widehat{f})=w$. \end{enumerate} \end{proposition}
When the rotation set consists of a single vector, $f$ is said to be a \emph{pseudo-rotation}, and when this vector is an integer, $f$ is said to be \emph{irrotational}. Thus $f$ is irrotational if there is a lift $\widehat{f}$ such that $\rho(\widehat{f})=\{0\}$.
If $\mu$ is an ergodic measure and $\rho_\mu(\widehat{f})=v$, we say that the rotation vector $v$ is \emph{realized} by $\mu$. If $v=(p_1/q,p_2/q)$ is a rational vector in reduced form (i.e.\ with $p_1,p_2,q$ mutually coprime integers, $q>0$), then we say that $v$ is \emph{realized by a periodic orbit} if there is $z\in \T^2$ such that $$\widehat{f}^q(\widehat{z})-\widehat{z} = (p_1,p_2)$$ for any $\widehat{z}\in \pi^{-1}(z)$. Note that this implies that $f^q(z)=z$ and $\lim_{n\to\infty} (\widehat{f}^n(\widehat{z})-\widehat{z})/n=v$.
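To illustrate the terminology with a trivial example (again not used later): for the rigid rotation $f(z)=z+(1/2,0) \pmod{\Z^2}$ with lift $\widehat{f}(\widehat{z})=\widehat{z}+(1/2,0)$, the vector $v=(1/2,0)$ is in reduced form with $p_1=1$, $p_2=0$, $q=2$, and every $z\in \T^2$ satisfies $\widehat{f}^{2}(\widehat{z})-\widehat{z}=(1,0)$ for all $\widehat{z}\in \pi^{-1}(z)$; thus $v$ is realized by a periodic orbit (indeed by every orbit, all of which have period $2$).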
\subsection{Foliations} By an \emph{oriented foliation with singularities} $\mathcal{F}$ on a surface $S$ we mean a closed set $\operatorname{Sing}(\mathcal{F})$, called the set of singularities, together with an oriented topological foliation $\mathcal{F}'$ of $S\setminus \operatorname{Sing}(\mathcal{F})$. Elements of $\mathcal{F}'$ are oriented one-dimensional manifolds, and we call them \emph{regular leaves} of $\mathcal{F}$.
By a theorem of Whitney \cite{whitney, whitney2}, any such $\mathcal{F}$ can be embedded in a flow; i.e.\ $\mathcal{F}$ is the set of (oriented) orbits of some topological flow $\phi\colon S\times \R \to S$ (where the singularities of $\mathcal{F}$ coincide with the set of fixed points of $\phi$). Therefore one may define the $\alpha$-limit and $\omega$-limit of leaves of $\mathcal{F}$ in the usual way: if $\Gamma$ is a leaf of $\mathcal{F}$ and $z_0$ is a point of $\Gamma$, then $$\omega(\Gamma)= \bigcap_{n\geq 0}\overline{\{\phi(z_0, t) : t\geq n\}},\quad \alpha(\Gamma)= \bigcap_{n\leq 0}\overline{\{\phi(z_0, t) : t\leq n\}}.$$
We say that an arc $\gamma$ is \emph{positively transverse} to an oriented foliation with singularities $\mathcal{F}$ if $[\gamma]$ does not contain any singularity, and each intersection of $\gamma$ with a leaf of $\mathcal{F}$ is topologically transverse and ``from left to right''. More precisely: for each $t_0\in [0,1]$ there is a homeomorphism $h$ mapping a neighborhood $U$ of $\gamma(t_0)$ to an open set $V\subset \R^2$ such that $h$ maps the foliation induced by $\mathcal{F}$ in $U$ to a foliation by vertical lines oriented upwards, and such that the first coordinate of $t\mapsto h(\gamma(t))$ is increasing in a neighborhood of $t_0$.
\subsection{Essential, inessential, filled, and bounded sets}\label{sec:essential-set} We say an open subset $U$ of $\T^2$ is \emph{inessential} if every loop in $U$ is homotopically trivial in $\T^2$; otherwise, $U$ is \emph{essential}. An arbitrary set $E$ is called inessential if it has some inessential neighborhood. We also say that $E$ is \emph{fully essential} if $\T^2\setminus E$ is inessential.
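Some elementary examples, meant only to fix the terminology: any open topological disk in $\T^2$ is inessential (every loop in it is contractible within the disk itself); the complement of a single point is fully essential, since a point has an inessential neighborhood; and an open annulus around an essential simple loop, such as the projection of $\R\times(0,1/2)$, is essential but neither inessential nor fully essential, because its complement also contains an essential loop and hence has no inessential neighborhood.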
If $E\subset \T^2$ is open or closed, its \emph{filling} is the union of $E$ with all the inessential connected components of $\T^2\setminus E$, and we denote it by $\operatorname{Fill}(E)$. If $E=\operatorname{Fill}(E)$ we say that $E$ is \emph{filled}.
A connected open set $A\subset \T^2$ is \emph{annular} if $\operatorname{Fill}(A)$ is homeomorphic to an open topological annulus. Note that $\operatorname{Fill}(A)$ is necessarily essential in this case. The following facts are easily verified, and we omit the proofs. \begin{proposition}\label{pro:essential-set} The following properties hold: \begin{enumerate} \item If $E\subset \T^2$ is fully essential and either open or closed, then exactly one connected component of $E$ is essential, and in fact it is fully essential. \item $\operatorname{Fill}(E)$ is inessential if so is $E$, fully essential if so is $E$, and neither one if $E$ is neither. \item An open set $U\subset \T^2$ has an annular component if and only if $U$ is neither inessential nor fully essential. \item An open connected set $U\subset \T^2$ is fully essential if and only if the map $\iota_U^*\colon H_1(U, \Z) \to H_1(\T^2,\Z)$ induced by the inclusion $\iota_U\colon U\to \T^2$ is surjective. \item If $E\subset \T^2$ is an open or closed set invariant by a homeomorphism $f\colon \T^2\to\T^2$, then $\operatorname{Fill}(E)$ is also $f$-invariant. \item Suppose $U$ is open and connected and $\widehat{U}$ is a connected component of $\pi^{-1}(U)$. Then \begin{itemize} \item $U$ is inessential if and only if $\widehat{U}\cap (\widehat{U}+v)=\emptyset$ for each $v\in \Z^2_*$; \item $U$ is annular if and only if there is $v\in \Z^2_*$ such that $\widehat{U}=\widehat{U}+v$ and $\widehat{U}\cap (\widehat{U}+kv^\perp)=\emptyset$ for all $k\neq 0$ \item $U$ is fully essential if and only if $\widehat{U}=\widehat{U}+v$ for all $v\in \Z^2$. \end{itemize} \end{enumerate} \end{proposition}
Given an arcwise connected set $E\subset \T^2$, let $\widehat{E}$ be a connected component of $\pi^{-1}(E)$. We denote by $\mathcal{D}(E)$ the diameter of $\widehat{E}$; this number is independent of the choice of the component $\widehat{E}$, so it is well defined. If $\mathcal{D}(E)<\infty$, we say that $E$ is bounded, and we say that $E$ is unbounded otherwise. If $v\in \R^2_* = \R^2\setminus\{(0,0)\}$, we denote by $\mathcal{D}_v(E)$ the diameter of $\proj_v(\widehat{E})$, which is also independent of the choice of $\widehat{E}$. Let us state a fact for future reference. Its proof is also straightforward.
\begin{proposition}\label{pro:inessential-bound} If $K\subset \T^2$ is closed and inessential, then there is $M>0$ such that $\mathcal{D}(C)\leq M$ for each connected component $C$ of $K$. \end{proposition}
\subsection{Annular homeomorphisms}
Let $f\colon \T^2\to \T^2$ be a homeomorphism isotopic to the identity. We say that $f$ is \emph{annular} (with direction $v$) if there are a lift $\widehat{f}$ of $f$ to $\R^2$, a vector $v\in \Z^2_*$ and a constant $M>0$ such that $$-M\leq \proj_{v^\perp}(\widehat{f}^n(x) - x) \leq M \quad \forall x\in \R^2,\, \forall n\in \Z.$$
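A simple example, included only for illustration: take a continuous $\Z$-periodic function $\psi\colon \R\to \R$ and let $f(x,y)=(x,\,y+\psi(x)) \pmod{\Z^2}$, with lift $\widehat{f}(x,y)=(x,\,y+\psi(x))$. Then $\widehat{f}^{\,n}(x,y)=(x,\,y+n\psi(x))$, so for $v=(0,1)$ one has $\proj_{v^\perp}(\widehat{f}^{\,n}(z)-z)=0$ for every $z\in \R^2$ and $n\in \Z$; hence $f$ is annular with direction $(0,1)$ (any constant $M>0$ works in the definition).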
The following facts will be frequently used. Their proofs are elementary and will be omitted for the sake of brevity. \begin{proposition} \label{pro:annular} The following properties hold: \begin{enumerate} \item \label{pro:annular0} If an open set $A\subset \T^2$ is annular, then $\mathcal{D}_v(A)<\infty$ for some $v\in\Z^2_*$. \item \label{pro:annular2} If there is an $f$-invariant annular set, then $f$ is annular. \item \label{pro:annular3} If $f$ is annular with direction $v$, then $\rho(\widehat{f}) \subset \R v$ for some lift $\widehat{f}$ of $f$. \item \label{pro:annular4} If $f$ is nonwandering and $f^n$ is non-annular for all $n\in \N$, then any essential $f$-invariant open set is fully essential. \item \label{pro:annular5} If $f^n$ is annular for some $n\in \N$ and $f$ has a fixed point, then $f$ is annular. \end{enumerate} \end{proposition}
\begin{proposition}\label{pro:wall-annular} Suppose there is a lift $\widehat{f}$ of $f$ to $\R^2$ and an open $\widehat{f}$-invariant set $V\subset \R^2$ such that $$\proj_{v}^{-1}((-\infty, a))\subset V \subset \proj_v^{-1}((-\infty, b))$$ for some $v\in \Z^2_*$ and $a<b$. Then $f$ is annular. \end{proposition}
\subsection{Collapsing a filled inessential set}
The following proposition says that one can collapse the connected components of a filled compact inessential invariant set to points, while preserving the dynamics outside the given set. It will be convenient later on to simplify the sets of fixed points.
\begin{proposition}\label{pro:collapse} Let $K\subset \T^2$ be a compact inessential filled set, and $f\colon \T^2\to \T^2$ a homeomorphism such that $f(K)=K$. Then there is a continuous surjection $h\colon \T^2\to \T^2$ and a homeomorphism $f'\colon \T^2 \to \T^2$ such that \begin{itemize} \item $h$ is homotopic to the identity; \item $hf = f'h$; \item $K' = h(K)$ is totally disconnected;
\item $h|_{\T^2\setminus K}\colon \T^2\setminus K \to \T^2\setminus K'$ is a homeomorphism. \end{itemize} \end{proposition}
\begin{proof} Each connected component of $K$ is filled and inessential, so it is a \emph{cellular} continuum (i.e.\ it is an intersection of a nested sequence of closed topological disks). Let $\mathcal{P}$ be the partition of $\T^2$ into compact sets consisting of all connected components of $K$ together with all sets of the form $\{x\}$ with $x\in \T^2\setminus K$. Then $\mathcal{P}$ is an \emph{upper semicontinuous decomposition}: if $P\in \mathcal{P}$ and $U$ is a neighborhood of $P$, then there is a smaller neighborhood $V\subset U$ of $P$ such that every element of $\mathcal{P}$ that intersects $V$ is contained in $U$. This is a direct consequence of the fact that the Hausdorff limit of any sequence of connected components of $K$ must be contained in a connected component of $K$.
An improved version of a theorem of Moore, found in \cite{daverman} (Theorems 13.4 and 25.1), says that for such a decomposition (an upper semicontinuous decomposition of a manifold into cellular sets) one can find a homotopy $(p_t)_{t\in [0,1]}$ from $\mathrm{Id}_{\T^2}$ to a closed surjection $p_1\colon \T^2\to \T^2$ such that $\mathcal{P} = \{p_1^{-1}(x) : x\in \T^2\}$. This implies that $h=p_1$ is homotopic to the identity, $h(K)$ is totally disconnected and $h|_{\T^2\setminus K}$ is a homeomorphism onto $\T^2\setminus h(K)$. The map $f'$ is well-defined by the equation $f'h=hf$ because $f$ permutes components of $K$, and it follows easily that $f'$ is a homeomorphism, completing the proof. \end{proof}
\subsection{Other results}
Let us state for future reference two well-known results. The first one is a version of the classical Brouwer's Lemma; see for example Corollary 2.4 of \cite{fathi}. \begin{proposition}\label{pro:brouwer-trivial} If an orientation-preserving homeomorphism $f\colon \R^2\to \R^2$ has a nonwandering point, then $f$ has a fixed point. \end{proposition}
The second result is due to Brown and Kister: \begin{theorem}[\cite{brown-invariant}]\label{th:brown-kister} Suppose $S$ is a (not necessarily compact) oriented surface and $f\colon S\to S$ an orientation-preserving homeomorphism. Then each connected component of $S\setminus \fix(f)$ is invariant. \end{theorem}
\section{Theorem \ref{th:essine} and applications} \label{sec:essential}
As usual, in this section $f$ denotes a homeomorphism of $\T^2$ homotopic to the identity.
We say that $x\in \T^2$ is an \emph{essential} point if $\bigcup_{k\in \Z} f^k(U)$ is essential for each neighborhood $U$ of $x$. If $x$ is not essential, we say that $x$ is \emph{inessential}. It follows from the definition that: \begin{itemize} \item The set $\operatorname{Ine}(f)$ of all inessential points is open; \item The set $\operatorname{Ess}(f)$ of all essential points is therefore closed; \item Both sets are $f$-invariant. \end{itemize}
\begin{remark} Note that $\operatorname{Ine}(f)$ coincides with the union of all inessential open invariant sets. This does not necessarily mean that $\operatorname{Ine}(f)$ is inessential: a trivial example would be the identity. One can think of less trivial examples where $\operatorname{Ine}(f)$ is essential, but they all seem to have some power with a very large fixed point set (namely, a fully essential set of periodic points). Theorem \ref{th:essine} says that this is the only possibility, under the assumption that $f$ is non-wandering and $f^n$ is non-annular for all $n\in \N$. \end{remark}
\subsection{Proof of Theorem \ref{th:essine} (assuming Theorem \ref{th:bdfix})}
We will use Theorem \ref{th:bdfix}, the proof of which is postponed to the next sections.
First note that if $\fix(f^k)$ is essential for some $k$, then Theorem \ref{th:bdfix} applied to $f^k$ implies that either $f^k$ is annular, or $\fix(f^k)$ is fully essential and $f^k$ is irrotational. Thus to prove the theorem it suffices to consider $f$ such that \begin{itemize} \item $f^k$ is non-annular, and \item $\fix(f^k)$ is inessential \end{itemize} for all $k\in \N$. We will show under these hypotheses that case $(3)$ holds.
\setcounter{claim}{0} \begin{claim} Each $x\in \operatorname{Ine}(f)$ is contained in a bounded periodic topological disk. \end{claim} \begin{proof} If $\epsilon>0$ is small enough, $U_\epsilon = \bigcup_{k\in \Z} f^k(B_\epsilon(x))$ is inessential and $f$-invariant. Let $D_\epsilon$ be the connected component of $U_\epsilon$ containing $x$. Since $f$ is nonwandering and the components of $U_\epsilon$ are permuted by $f$, there is $k\geq 1$ such that $f^k(D_\epsilon)=D_\epsilon$ and $f^n(D_\epsilon)\cap D_\epsilon=\emptyset$ if $1\leq n<k$. Then $U=\operatorname{Fill}(D_\epsilon)$ is a periodic open disk. The fact that $U$ is bounded follows from Theorem \ref{th:bdfix} applied to $f^k$ (using the assumption that $\fix(f^k)$ is inessential). \end{proof}
\begin{claim} $\operatorname{Ess}(f)$ is fully essential. \end{claim} \begin{proof} Suppose not. Then $\operatorname{Ine}(f)$ is essential and open, and in particular $\operatorname{Ine}(f)$ contains some essential loop $\gamma$. By the previous claim and by compactness, there exist finitely many simply connected periodic bounded sets $U_1,\dots, U_j$ such that $[\gamma]\subset U_1\cup \cdots \cup U_j$ (and we may assume that each $U_i$ intersects $[\gamma]$). Thus we may find $M>0$ and $m\in \N$ such that $\mathcal{D}(U_i)\leq M$ and $f^m(U_i)=U_i$ for $1\leq i\leq j$. Let $g=f^m$, and choose a lift $\widehat{g}\colon \R^2\to \R^2$ of $g$. For each $i$, choose a connected component $\widehat{U}_i$ of $\pi^{-1}(U_i)$. Then there is $v_i\in \Z^2$ such that $\widehat{g}(\widehat{U}_i)=\widehat{U}_i+v_i$, and so $\widehat{g}^n(\widehat{U}_i) = \widehat{U}_i+nv_i$ for $n\in \Z$. Since $\diam(\widehat{U}_i)\leq M$, this implies that if $x\in U_i$ and $\widehat{x}\in \pi^{-1}(x)$, then $\norm{\widehat{g}^n(\widehat{x})-\widehat{x}-nv_i}\leq M$ (note that this does not depend on the choice of $\widehat{x}$, so we may use $\widehat{x}\in \widehat{U}_i$). If we define $\rho_x \doteq \lim_{n\to \infty} (\widehat{g}^n(\widehat{x})-\widehat{x})/n$, it follows that $\rho_x = v_i$ and this vector depends only on $x$ and the choice of the lift $\widehat{g}$. Since this works for any $x\in U_i$, it follows that the map $U_1\cup\cdots \cup U_j\to \Z^2$ defined by $x\mapsto \rho_x$ is locally constant. Since $U_1\cup \cdots \cup U_j$ is connected (because it contains $[\gamma]$ and every $U_i$ intersects $[\gamma]$) it follows that $\rho_x$ is constant on that set. Therefore $v_1=v_2=\cdots = v_j$, i.e.\ there is $v\in \Z^2$ such that $\widehat{g}(\widehat{U}_i) = \widehat{U}_i+v$ for $1\leq i\leq j$. Moreover, replacing $\widehat{g}$ by a suitable lift of $g$ we may assume that $v=0$.
Therefore we may assume that $\widehat{g}(\widehat{U}_i) = \widehat{U}_i$ for $1\leq i\leq j$. Thus, if $x\in [\gamma]$ and $\widehat{x}\in \pi^{-1}(x)$, then $\norm{\widehat{g}^n(\widehat{x})-\widehat{x}} \leq \max\{\mathcal{D}(U_i) : 1\leq i\leq j\} \leq M$ for each $n\in \Z$. Let us show that this implies that $g=f^m$ is annular, contradicting our hypothesis: Since $\gamma$ is an essential loop, it lifts to $\R^2$ to a simple arc $\widehat{\gamma}$ joining a point $x\in \R^2$ to $x+w$, for some $w\in \Z^2_*$. Let $\Gamma = \bigcup_{k\in \Z} [\widehat{\gamma}]+kw$. Then $\proj_{w^\perp}(\Gamma)\subset [a,b]$ for some $a,b\in \R$, and since $\Gamma\subset \pi^{-1}([\gamma])$ we also have that $\norm{\smash{\widehat{g}^n(x)-x}}\leq M$ for each $x\in \Gamma$. If $V_0$ is the connected component of $\R^2\setminus \Gamma$ such that $\proj_{w^\perp}^{-1}((-\infty, a))\subset V_0$, then $V_0\subset \proj_{w^\perp}^{-1}((-\infty, b))$, and so $$\proj_{w^\perp}^{-1}((-\infty,a-M))\subset \widehat{g}^k(V_0)\subset \proj_{w^\perp}^{-1}((-\infty, b+M))$$ for each $k\in \Z$. Thus, letting $V=\bigcup_{k\in \Z} \widehat{g}^k(V_0)$, we have $$\proj_{w^\perp}^{-1}((-\infty,a-M))\subset V \subset \proj_{w^\perp}^{-1}((-\infty,b+M)),$$ and $V$ is $\widehat{g}$-invariant. By Proposition \ref{pro:wall-annular}, we conclude that $g$ is annular, which is the sought contradiction. \end{proof}
\begin{claim} Each component of $\operatorname{Ine}(f)$ is a periodic topological open disk. \end{claim} \begin{proof} Since $\operatorname{Ine}(f)$ is inessential, if $U$ is a connected component of $\operatorname{Ine}(f)$ then $U$ is inessential and $f^k$-invariant for some $k$ (because $f$ is nonwandering and $\operatorname{Ine}(f)$ is invariant). It follows that $\operatorname{Fill}(U)$ is open, inessential, filled (thus a topological disk) and $f^k$-invariant. Thus $\operatorname{Fill}(U)\subset \operatorname{Ine}(f)$, and since it is connected and intersects $U$, it follows that $\operatorname{Fill}(U)=U$, proving the claim. \end{proof}
\begin{claim} For each $k\in \N$ there is $M_k$ such that every connected component $U$ of $\operatorname{Ine}(f)$ such that $f^k(U)=U$ satisfies $\mathcal{D}(U)<M_k$. \end{claim} \begin{proof} This is a direct application of Theorem \ref{th:bdfix} to $f^k$, since we are under the assumption that $f^k$ is non-annular and $\fix(f^k)$ is inessential. \end{proof}
This last claim concludes the proof of Theorem \ref{th:essine}. \qed
\begin{corollary} \label{coro:fully} If $f\colon \T^2\to \T^2$ is homotopic to the identity, nonwandering and strictly toral (i.e.\ cases (1) and (2) of Theorem \ref{th:essine} do not hold), then \begin{itemize} \item for any essential point $x$, if $U$ is a neighborhood of $x$ then the set $U'=\bigcup_{n\in \Z} f^n(U)$ is connected and fully essential; \item $\operatorname{Ess}(f^k) = \operatorname{Ess}(f)$ for all $k\in \N$. \end{itemize} \end{corollary}
\begin{proof} It follows from Proposition \ref{pro:annular}(\ref{pro:annular4}) that $U'$ is fully essential (note that $U'$ is essential because $x$ is an essential point, and it is clearly open and invariant). Since the connected components of $U'$ are permuted by $f$, they are all homeomorphic to each other, and since one of them is fully essential, all of them must be. But two fully essential open sets cannot be disjoint, so there is only one component, as claimed.
For the second claim note that $\operatorname{Ess}(f^k)\subset \operatorname{Ess}(f)$ follows directly from the definition. On the other hand if $x\notin \operatorname{Ess}(f^k)$ then $x\in \operatorname{Ine}(f^k)$. This means that if $U$ is a small enough neighborhood of $x$, then $U'=\bigcup_{n\in \Z} f^{kn}(U)$ is inessential, so $U'\subset \operatorname{Ine}(f^k)$. On the other hand, if $i\in \Z$ then the $f^k$-orbit of $f^i(U)$ is $f^i(U')$, which is also inessential, so $f^i(U')\subset \operatorname{Ine}(f^k)$. This implies that $U'':=\bigcup_{i\in \Z} f^i(U') = \bigcup_{n\in \Z} f^n(U)\subset \operatorname{Ine}(f^k)$. But Theorem \ref{th:essine} applied to $f^k$ implies that $\operatorname{Ine}(f^k)$ is inessential, so that $U''$ is inessential as well, and we conclude that $x\in \operatorname{Ine}(f)$. Therefore, $\operatorname{Ine}(f^k)\subset \operatorname{Ine}(f)$, and so $\operatorname{Ess}(f^k)\supset \operatorname{Ess}(f)$, completing the proof. \end{proof}
\subsection{Proof of Theorem \ref{th:chaotic}} \label{sec:chaotic}
Assume that $f\colon \T^2\to \T^2$ is a nonwandering homeomorphism homotopic to the identity, $\widehat{f}$ is a lift of $f$ to $\R^2$, and $\rho(\widehat{f})$ has nonempty interior. This implies that $f$ is strictly toral, so that only case (3) in Theorem \ref{th:essine} holds.
Recall that $\mathcal{E}(f)$ is the set of all $x\in \T^2$ such that $\rho(\widehat{f},U)$ is a single vector of $\Q^2$ for some neighborhood $U$ of $x$, and $\mathcal{C}(f) = \T^2\setminus \mathcal{E}(f)$.
We want to show that \begin{enumerate} \item[(1)] $\mathcal{C}(f)=\operatorname{Ess}(f)$, which is a fully essential continuum and $\mathcal{E}(f)=\operatorname{Ine}(f)$ is a disjoint union of periodic bounded disks; \item[(2)] $\operatorname{Ess}(f)$ is externally transitive and sensitive on initial conditions; \item[(3)] For any $x\in \operatorname{Ess}(f)$ and any neighborhood $U$ of $x$, $\conv(\rho(\widehat{f},U))=\rho(\widehat{f})$; \end{enumerate}
Let us begin with the following \begin{claim*} For any $x\in \operatorname{Ess}(f)$ and any neighborhood $U$ of $x$, $\conv(\rho(\widehat{f},U))=\rho(\widehat{f})$. \end{claim*} \begin{proof}
Recall from \cite{m-z} that $\rho(\widehat{f})$ is convex, and if $v\in \rho(\widehat{f})$ is extremal (in the sense of convexity) then there is at least one point $z\in \R^2$ such that $(\widehat{f}^n(z)-z)/n\to v$ as $n\to \infty$. Let $x\in \operatorname{Ess}(f)$, and suppose for contradiction that $\conv(\rho(\widehat{f},U))\neq \rho(\widehat{f})$ for some neighborhood $U$ of $x$. Since the two sets are convex and compact, and $\conv(\rho(\widehat{f},U))\subset \rho(\widehat{f})$, this implies that there is a direction $w\in \R^2_*$ such that $\sup P_w(\rho(\widehat{f},U)) < \sup P_w(\rho(\widehat{f}))$. We will show that this is not possible. Observe that there must be an extremal point $v\in \rho(\widehat{f})$ such that $P_w(v) = \sup P_w(\rho(\widehat{f}))$. Since $v$ is extremal, as we mentioned there exists $z\in \R^2$ such that $(\widehat{f}^n(z)-z)/n\to v$ as $n\to \infty$.
Since $x\in \operatorname{Ess}(f)$, $U'=\bigcup_{n\in \Z} f^n(U)$ is open, invariant, fully essential and connected (by Corollary \ref{coro:fully}). This implies that $\pi(z)$ is contained in some closed topological disk $D$ such that $\bd D\subset U'$. Since $\bd D$ is compact, there is $N\in \N$ such that $\bd D\subset \bigcup_{i=-N}^N f^i(U)$. Let $\widehat{D}$ be the connected component of $\pi^{-1}(D)$ that contains $z$. Since $P_w((\widehat{f}^n(z)-z)/n)\to P_w(v)$ as $n\to \infty$, if $z_n$ is chosen as a point of $\bd \widehat{D}$ such that $P_w(\widehat{f}^n(z_n)-z)$ is maximal then, as $\abs{P_w(z_n-z)}\leq \diam(\widehat{D})$,
$$P_w(\widehat{f}^n(z_n)-z_n)/n \geq P_w(\widehat{f}^n(z)-z)/n - P_w(z_n-z)/n \xrightarrow{n\to \infty} P_w(v).$$ Let $K$ be such that $\norm{\smash{\widehat{f}(y)-y}}\leq K$ for all $y\in \R^2$ (such $K$ exists because $\widehat{f}-\mathrm{Id}$ is $\Z^2$-periodic). Note that $\norm{\smash{\widehat{f}^n(y)-y}}\leq nK$. Thus we may choose a subsequence $(n_i)_{i\in \N}$ such that $n_i\to \infty$ and $(\widehat{f}^{n_i}(z_{n_i})-z_{n_i})/n_i$ converges to some limit $v'$ with $\norm{v'}\leq K$, and from our previous observations $P_w(v')\geq P_w(v)$. But also $P_w(v')\leq P_w(v)$, since we chose $v$ such that $P_w(v)=\sup P_w(\rho(\widehat{f}))$. Therefore $P_w(v')=P_w(v)$.
Observe that since $\pi(z_{n_i})\in \bd D$, we know that there is $k_i\in \Z$ with $-N\leq k_i\leq N$ such that $f^{k_i}(\pi(z_{n_i}))\in U$, so that if we let $x_i= \widehat{f}^{k_i}(z_{n_i})$ then $x_i\in \pi^{-1}(U)$. Thus, letting $m_i=n_i-k_i$, we have $$\frac{\widehat{f}^{m_i}(x_i) - x_i}{m_i} =\frac{n_i}{n_i-k_i}\cdot\frac{\widehat{f}^{n_i}({z_{n_i}})-z_{n_i}}{n_i} - \frac{\widehat{f}^{k_i}(z_{n_i})-z_{n_i}}{m_i}\xrightarrow{i\to \infty} v'$$ because $n_i\to \infty$, while $\abs{k_i}\leq N$ and $\norm{\smash{\widehat{f}^{k_i}(z_{n_i})-z_{n_i}}}\leq \abs{k_i}K\leq NK$ for all $i\in \N$. By definition, this means that $v'\in \rho(\widehat{f},U)$. Since we already saw that $P_w(v')=P_w(v) = \sup P_w(\rho(\widehat{f}))$, this contradicts our assumption that $\sup P_w(\rho(\widehat{f},U)) < \sup P_w(\rho(\widehat{f}))$. This completes the proof of the claim. \end{proof}
To prove (1), observe that from the previous claim it follows immediately that $\operatorname{Ess}(f)\subset \mathcal{C}(f)$. Thus, we need to prove that $\mathcal{C}(f)\subset \operatorname{Ess}(f)$, or, equivalently, that $\operatorname{Ine}(f)\subset \mathcal{E}(f)$. Let $x\in \operatorname{Ine}(f)$; then by Theorem \ref{th:essine} the connected component $U$ of $\operatorname{Ine}(f)$ that contains $x$ is a bounded periodic disk, so that $f^k(U)=U$ for some $k\in \N$. Thus if $\widehat{U}$ is a connected component of $\pi^{-1}(U)$, there is $v\in \Z^2$ such that $\widehat{f}^k(\widehat{U})=\widehat{U}+v$. This implies that $\norm{\smash{\widehat{f}^{nk}(z)-z-nv}} \leq \diam(\widehat{U})$ for all $z\in \widehat{U}$, so we easily conclude that $\rho(\widehat{f}^k, U)= \{v\}$, and then $\rho(\widehat{f},U)=\{v/k\}\subset \Q^2$. This shows that $x\in \mathcal{E}(f)$, as we wanted.
Therefore, we have proved (1) (since the claims about $\operatorname{Ess}(f)$ hold by Theorem \ref{th:essine}). Further, the previous claim together with (1) implies (3).
To prove (2), observe that the external sensitivity on initial conditions follows easily from (3), since it implies that if $U$ is a small ball around $x\in \mathcal{C}(f)$, and if $\widehat{U}$ is a connected component of $\pi^{-1}(U)$, then $\diam(\widehat{f}^n(\widehat{U}))\to \infty$ as $n\to \infty$. To prove the external transitivity, let $U_1,U_2$ be open sets in $\T^2$ intersecting $\mathcal{C}(f)=\operatorname{Ess}(f)$. Then from Corollary \ref{coro:fully} $U_i'= \bigcup_{n\in \Z} f^n(U_i)$ is fully essential and invariant, for $i\in \{1,2\}$. But two fully essential sets must intersect, so there are $n_1, n_2\in \Z$ such that $f^{n_1}(U_1)\cap f^{n_2}(U_2)\neq \emptyset$, so that $f^{m}(U_2)\cap U_1\neq \emptyset$, for $m=n_2-n_1\in \Z$. This completes the proof. \qed
\subsection{Proof of Theorem \ref{th:reali}} Let $f$ be homotopic to the identity, nonwandering and strictly toral, and let $\widehat{f}$ be a lift of $f$ to $\R^2$. We want to prove \begin{enumerate} \item[(1)] Any rational vector of $\rho(\widehat{f})$ that is realized by some periodic point is also realized by a periodic point in $\operatorname{Ess}(f)$. \item[(2)] If $\mu$ is an ergodic Borel probability measure with associated rotation vector $\rho_\mu(\widehat{f})\notin \Q^2$, then $\mu$ is supported on $\operatorname{Ess}(f)$. \end{enumerate}
We begin with (2): let $\mu$ be an $f$-ergodic Borel probability measure and $v=\rho_\mu(\widehat{f})\notin \Q^2$. For $\mu$-almost every point $x\in \T^2$, we have that if $\widehat{x}\in \pi^{-1}(x)$ then $(\widehat{f}^n(\widehat{x})-\widehat{x})/n\to v$ as $n\to \infty$ (see \S\ref{sec:rotation}). Since $v\notin \Q^2$, this implies that $x\in \operatorname{Ess}(f)$, since otherwise by Theorem \ref{th:essine} it would belong to some periodic bounded disk $U$, and that would imply that $v\in \Q^2$ (as in the proof of (1) in the previous section). Thus we conclude that $\mu$-almost every point is essential. Since $\operatorname{Ess}(f)$ is closed and invariant, it follows that the support of $\mu$ is in $\operatorname{Ess}(f)$, proving (2).
To prove (1) we will use a Lefschetz-Nielsen type index argument. Let $v=(p_1/q,p_2/q)\in \Q^2\cap \rho(\widehat{f})$ with $p_1,p_2,q$ coprime. Let $g=f^q$ and $\widehat{g}=\widehat{f}^q-(p_1,p_2)$ (which is a lift of $g$). Recall that $z\in \T^2$ is a periodic point realizing the rotation vector $v$ (for $\widehat{f}$) if for any $\widehat{z}\in \pi^{-1}(z)$ one has $\widehat{f}^q(\widehat{z})-\widehat{z} = (p_1,p_2)$. This is equivalent to saying that $\widehat{g}(\widehat{z}) = \widehat{z}$. Therefore, to prove (1) we need to show that if $\fix(\widehat{g})$ is nonempty, then $\pi(\fix(\widehat{g}))$ intersects $\operatorname{Ess}(f)$. By Corollary \ref{coro:fully} we have that $\operatorname{Ess}(g) = \operatorname{Ess}(f^q)=\operatorname{Ess}(f)$. Thus we want to show that if $\fix(\widehat{g})$ is nonempty then its projection to $\T^2$ contains a point of $\operatorname{Ess}(g)$.
Suppose on the contrary that $K:=\pi(\fix(\widehat{g}))\subset \operatorname{Ine}(g)$. Since $K$ is compact, there are finitely many connected components $U_1,\dots U_k$ of $\operatorname{Ine}(g)$ such that $K\subset U_1\cup \cdots \cup U_k$. Note that each $U_i$ is an open topological disk, and we may assume that each $U_i$ intersects $K\subset \fix(g)$, so $g(U_i)=U_i$ for each $i$.
We claim that $\fix(g)\cap \overline{U}_i\subset K$ for each $i\in \{1,\dots, k\}$. Indeed, suppose $x\in \fix(g)\cap \overline{U}_i$, and choose a connected component $\widehat{U}_i$ of $\pi^{-1}(U_i)$ containing some point of $\fix(\widehat{g})$ (such a component exists because $U_i$ intersects $K=\pi(\fix(\widehat{g}))$); in particular $\widehat{g}(\widehat{U}_i)=\widehat{U}_i$. If $(x_n)_{n\in \N}$ is a sequence in $U_i$ such that $x_n\to x$ as $n\to \infty$, and if $\widehat{x}_n\in \pi^{-1}(x_n)\cap \widehat{U}_i$, then the fact that $\diam(\widehat{U}_i)<\infty$ implies that we may find a subsequence $(\widehat{x}_{n_j})_{j\in \N}$ with $n_j\to \infty$ such that $\widehat{x}_{n_j}$ converges to some limit $\widehat{x}\in \cl(\widehat{U}_i)$ as $j\to \infty$. Thus $\pi(\widehat{x}) = \lim_{j\to \infty} \pi(\widehat{x}_{n_j})= x$, and since $x\in \fix(g)$ it follows that $\widehat{g}(\widehat{x}) = \widehat{x}+w$ for some $w\in \Z^2$. But $\cl(\widehat{U}_i)$ is bounded and $\widehat{g}$-invariant, and since $\widehat{g}^n(\widehat{x}) = \widehat{x}+nw$ we conclude that $w=0$. Hence $\widehat{x}\in \fix(\widehat{g})$, and $x\in K$, proving our claim.
In particular, since $K\subset U_1\cup\cdots \cup U_k$, we have that $\bd U_i$ contains no fixed points of $g$. Since $g$ is nonwandering, using a classic argument of Cartwright and Littlewood and the prime ends compactification of $U_i$, one may find a closed topological disk $D_i\subset U_i$ such that $\fix(g)\cap U_i\subset D_i$ and the fixed point index of $g$ in $D_i$ is $1$ (this is contained in Proposition 4.2 of \cite{koro}). Thus we can cover $K = \fix(g)\cap (U_1 \cup \cdots \cup U_k)$ with finitely many disjoint disks $D_1,\dots, D_k$ such that the fixed point index of $g$ on each $D_i$ is $1$. Note that $K$ is a Nielsen class of fixed points (that is, it consists of all points which are lifted to fixed points of the same lift $\widehat{g}$ of $g$). We have just shown that the fixed point index of the Nielsen class $K$ is exactly $k\geq 1$ (one for each disk $D_i$, and there is at least one such disk). On the other hand, it is known (see, for instance, \cite{brown-nielsen}) that the fixed point index of a Nielsen class is invariant by homotopy, and since $g$ is homotopic to a map with no fixed points, the index should be $0$. Thus we have arrived at a contradiction, completing the proof of (1). \qed
\subsection{Proof of Corollary \ref{coro:trans}} Let $f\colon \T^2\to \T^2$ be a strictly toral nonwandering homeomorphism. Suppose first that $f$ is not transitive. Note that the proof of external transitivity of $\operatorname{Ess}(f)$ given in the proof of Theorem \ref{th:chaotic} works in the general case where $f$ is strictly toral (without assuming anything about the rotation set). Hence $\operatorname{Ess}(f)$ is externally transitive. If $\operatorname{Ess}(f)=\T^2$, this would imply that $f$ is transitive, contradicting our hypothesis. Thus $\operatorname{Ine}(f)$ is nonempty, and the existence of a bounded periodic disk follows from Theorem \ref{th:essine}.
Now suppose that $f$ is transitive and assume for a contradiction that there is a periodic bounded disk of period $k$. Let $U_1,\dots, U_k$ be the components of the orbit of the disk, so that $f^k(U_i)=U_i$ for each $i$, and let $M$ be such that $\max_i \mathcal{D}(U_i)\leq M$. For each $i\in \{1,\dots, k\}$, let $\widehat{U}_i$ be a connected component of $\pi^{-1}(U_i)$, and let $\widehat{g}$ be a lift of $f^k$ such that $\widehat{g}(\widehat{U}_1)=\widehat{U}_1$ (and therefore $\widehat{g}(\widehat{U}_i)=\widehat{U}_i$ for each $i$). Note that $\bigcup_{i=1}^k\cl(U_i) = \T^2$: indeed, $\bigcup_{i=1}^k U_i$ is a nonempty open invariant set, hence dense because $f$ is transitive. Thus, given $z\in \T^2$ we may choose $i$ such that $z\in \cl(U_i)$, and the fact that $U_i$ is bounded implies easily that some $\widehat{z}\in \pi^{-1}(z)$ belongs to $\cl(\widehat{U}_i)$. Hence, the $\widehat{g}$-orbit of $\widehat{z}$ has diameter bounded by $M$. This also holds for $\widehat{z}+v$ for any $v\in \Z^2$, and since $z\in \T^2$ was arbitrary we conclude that all $\widehat{g}$-orbits have diameter bounded by $M$. Since $\widehat{g}$ lifts $f^k$, this implies that $f^k$ is annular, contradicting the fact that $f$ is strictly toral.\qed
\section{Brouwer theory and gradient-like foliations}\label{sec:brouwer}
Let $S$ be an orientable surface (not necessarily compact), and let $\mathcal{I}=(f_t)_{t\in [0,1]}$ be an isotopy from $f_0=\mathrm{Id}_S$ to some homeomorphism $f_1=f$. If $\pi\colon \widehat{S}\to S$ is the universal covering of $S$, there is a natural choice of a lift $\widehat{f}\colon \widehat{S}\to \widehat{S}$ of $f$: Letting $\widehat{\mathcal{I}}=(\widehat{f}_t)_{t\in [0,1]}$ be the lift of the isotopy $\mathcal{I}$ such that $\widehat{f}_0=\mathrm{Id}_{\widehat{S}}$, one defines $\widehat{f}=\widehat{f}_1$. The lift $\widehat{f}$ has the particularity that it commutes with every Deck transformation of the covering.
A fixed point $p$ of $f$ is said to be \emph{contractible} with respect to the lift $\widehat{f}$ if the loop $(f_t(p))_{t\in [0,1]}$ is homotopically trivial in $S$. This definition does not depend on the isotopy, but only on the lift $\widehat{f}$. In fact, it is easy to see that the set of contractible fixed points of $f$ with respect to $\mathcal{I}$ coincides with $\pi(\fix(\widehat{f}))$.
Given an oriented topological foliation $\mathcal{F}$ of $S$, one says that the isotopy $\mathcal{I}$ is transverse to $\mathcal{F}$ if for each $x\in S$, the arc $(f_t(x))_{t\in [0,1]}$ is homotopic, with fixed endpoints, to an arc that is positively transverse to $\mathcal{F}$ in the usual sense. In this case, it is also said that $\mathcal{F}$ is dynamically transverse to $\mathcal{I}$.
The following is one statement of the equivariant version of Brouwer's Plane Translation Theorem: \begin{theorem}[Le Calvez \cite{lecalvez-equivariant}]\label{th:lecalvez} If there are no contractible fixed points, then there is a foliation without singularities $\mathcal{F}$ which is dynamically transverse to $\mathcal{I}$. \end{theorem}
Since the set of contractible fixed points is usually nonempty, one needs some additional modifications before using the previous theorem. This is done using a recent result of O. Jaulent.
\begin{theorem}[Jaulent, \cite{jaulent}] \label{th:jaulent} Given an isotopy $\mathcal{I}=(f_t)_{t\in [0,1]}$ from the identity to a homeomorphism $f\colon S\to S$, there exists a closed set $X\subset \fix(f)$ and an isotopy $\mathcal{I}' = (f_t')_{t\in [0,1]}$ from $\mathrm{Id}_{S\setminus X}$ to $f|_{S\setminus X}\colon S\setminus X\to S\setminus X$ such that \begin{enumerate} \item[(1)] for each $z\in S\setminus X$, the arc $(f_t'(z))_{t\in [0,1]}$ is homotopic with fixed endpoints (in $S$) to $(f_t(z))_{t\in [0,1]}$;
\item[(2)] there are no contractible fixed points for $f|_{S\setminus X}$ with respect to $\mathcal{I}'$. \end{enumerate} \end{theorem}
\begin{remark}\label{rem:jaulent-1} Due to the latter property, Theorem \ref{th:lecalvez} implies that there is a foliation $\mathcal{F}_X$ on $S\setminus X$ that is dynamically transverse to $\mathcal{I}'$. \end{remark}
\begin{remark}\label{rem:jaulent-2} If $X$ is totally disconnected, one can extend the isotopy $\mathcal{I}'$ to an isotopy on $S$ that fixes every element of $X$; that is, $f_t'(x)=x$ for each $x\in X$ and $t\in [0,1]$. Similarly, the foliation $\mathcal{F}_X$ can be extended to an oriented foliation with singularities $\mathcal{F}$ of $S$, where the set of singularities $\operatorname{Sing}(\mathcal{F})$ coincides with $X$. Moreover, after these extensions, if we consider the respective lifts $\widehat{\mathcal{I}}=(\widehat{f}_t)_{t\in [0,1]}$ and $\widehat{\mathcal{I}}'=(\widehat{f}_t')_{t\in [0,1]}$ of $\mathcal{I}$ and $\mathcal{I}'$ such that $\widehat{f}_0=\widehat{f}_0'=\mathrm{Id}_{\widehat{S}}$, then $\widehat{f}_1'=\widehat{f}_1$. This follows from the fact that if $z\in S\setminus X$, then $(f_t'(z))_{t\in [0,1]}$ is homotopic with fixed endpoints in $S$ to $(f_t(z))_{t\in [0,1]}$, so that the lifts of these paths with a common base point $\widehat{z}$ must have the same endpoint as well. \end{remark}
\begin{remark}\label{rem:jaulent-3} In the previous remark, if $\widehat{\mathcal{F}}$ is the lift of the extended foliation $\mathcal{F}$ (with singularities in $\widehat{X}=\pi^{-1}(X)$), then $\widehat{\mathcal{F}}|_{\widehat{S}\setminus \pi^{-1}(X)}$ is dynamically transverse to $\widehat{\mathcal{I}}'$; i.e.\ for any $\widehat{z}\in \widehat{S}\setminus \widehat{X}$ the path $(\widehat{f}'_t(\widehat{z}))_{t\in [0,1]}$ is homotopic with fixed endpoints in $\widehat{S}\setminus \widehat{X}$ to an arc $\widehat{\gamma}$ positively transverse to $\widehat{\mathcal{F}}$. In fact we know that, if $z=\pi(\widehat{z})$, then $(f'_t(z))_{t\in [0,1]}$ is homotopic with fixed endpoints in $S\setminus X$ to an arc $\gamma$ positively transverse to $\mathcal{F}$. The homotopy between $(f'_t(z))_{t\in [0,1]}$ and $\gamma$ can be lifted to a homotopy (with fixed endpoints, in $\widehat{S}\setminus \widehat{X}$) between $(\widehat{f}'_t(\widehat{z}))_{t\in [0,1]}$ and the lift $\widehat{\gamma}$ of $\gamma$ with base point $\widehat{z}$. One easily verifies that $\widehat{\gamma}$ is positively transverse to $\widehat{\mathcal{F}}$. \end{remark}
\subsection{Positively transverse arcs}
Let us state some general properties of dynamically transverse foliations that will be used in the next sections. This proposition is analogous to part of Proposition 8.2 of \cite{lecalvez-equivariant}, with small modifications. \begin{proposition}\label{pro:pta} Suppose $S$ is an orientable surface, $\mathcal{I}=(f_t)_{t\in [0,1]}$ an isotopy from the identity to a homeomorphism $f=f_1$ without contractible fixed points, and $\mathcal{F}$ a dynamically transverse foliation as given by Theorem \ref{th:lecalvez}. The following properties hold: \begin{enumerate} \item [(1)] For any $n\in \N$ and $x\in S$, there is a positively transverse arc joining $x$ to $f^n(x)$. \item [(2)] If $x$ and $y$ can be joined by a positively transverse arc, then there are neighborhoods $V$ of $x$ and $V'$ of $y$ such that every point of $V$ can be joined to every point of $V'$ by a positively transverse arc; \item [(3)] If $x$ is nonwandering, then there is a neighborhood $V$ of $x$ such that every point of $V$ can be joined to every other point of $V$ by a positively transverse arc. \item [(4)] If $K\subset S$ is a connected set of nonwandering points, then any point of $K$ can be joined to each point of $K$ by a positively transverse arc. \end{enumerate} \end{proposition}
\begin{remark} Note that this proposition remains true in the context of Remarks \ref{rem:jaulent-2} and \ref{rem:jaulent-3} if one works in $S\setminus X$ or $\widehat{S}\setminus \widehat{X}$ with the corresponding foliations. \end{remark}
\begin{proof} The first claim is a consequence of the transversality of the foliation: we know that any $z\in S$ can be joined to $f(z)$ by some positively transverse arc $\gamma_z$, and so $\gamma_x^n = \gamma_x*\gamma_{f(x)}*\cdots*\gamma_{f^{n-1}(x)}$ is a positively transverse arc joining $x$ to $f^n(x)$.
\begin{figure}
\caption{Proofs of $(2)$ and $(3)$}
\label{fig:pta3}
\end{figure}
To prove (2) it suffices to consider flow boxes of $\mathcal{F}$ near $x$ and $y$ (see the left side of Figure \ref{fig:pta3}). Similarly, to prove (3) it suffices to show that if $x$ is nonwandering then there is a positively transverse arc joining $x$ to itself (see the right side of Figure \ref{fig:pta3}). Observe that due to (1) and (2), we can find neighborhoods $V$ of ${f}^{-1}(x)$ and $V'$ of $x$ such that every point of $V$ can be joined to every point of $V'$ by a positively transverse arc, and reducing $V'$ if necessary we may also find a neighborhood $V''$ of $f(x)$ such that every point of $V'$ can be joined to every point of $V''$ by a positively transverse arc. Since $x$ is nonwandering, we can find $y\in V'$ so close to $x$ that $f(y)\in V''$ and such that $f^n(y)\in V'\cap f(V)$ for some $n>0$ which we may assume large. Thus we can find a positively transverse arc from $x\in V'$ to $f(y)\in V''$, another from $f(y)$ to $f^{n-1}(y)$ (namely, $\gamma^{n-2}_{f(y)}$), and a third one from $f^{n-1}(y)\in V$ to $x\in V'$. Concatenation of these arcs gives a positively transverse arc from $x$ to itself, as we wanted.
Finally, to prove $(4)$, fix $x\in K$ and let $K'$ be the set of all points of $K$ which are endpoints of positively transverse arcs starting at $x$. From (2) it follows that $K'$ is open in $K$, and from (3) we know that $x\in K'$. Moreover, $(3)$ also implies that $K\setminus K'$ is open in $K$. Thus $K'$ is both open and closed in $K$, and the connectedness implies $K=K'$. \end{proof}
\subsection{Gradient-like foliations}\label{sec:gradient}
Let $\mathcal{F}$ be an oriented foliation with singularities of $\T^2$ such that $\operatorname{Sing}(\mathcal{F})$ is totally disconnected. A leaf $\Gamma$ of $\mathcal{F}$ is a \emph{connection} if both its $\omega$-limit and its $\alpha$-limit are one-element subsets of $\operatorname{Sing}(\mathcal{F})$. By a \emph{generalized cycle of connections} of $\mathcal{F}$ we mean a loop $\gamma$ such that $[\gamma]\setminus \operatorname{Sing}(\mathcal{F})$ is a disjoint (not necessarily finite) union of regular leaves of $\mathcal{F}$, with their orientation matching the orientation of $\gamma$.
Using the terminology of Le Calvez \cite{lecalvez-equivariant}, we say that a loop $\Sigma$ in $\T^2$ is a \emph{fundamental loop} for $\mathcal{F}$ if $\Sigma$ can be written as a concatenation of finitely many loops $\alpha_1,\dots,\alpha_n$ with a common base point such that, denoting by $\alpha_i^*\in H_1(\T^2,\Z)\simeq \Z^2$ the homology class of $\alpha_i$, \begin{itemize} \item each loop $\alpha_i$ is positively transverse to $\mathcal{F}$, \item $\sum_{i=1}^n \alpha_i^*=0$, and \item for each $\kappa \in H_1(\T^2,\Z)$ there are positive integers $k_1,\dots,k_n$ such that $\sum_{i=1}^n k_i \alpha_i^* = \kappa$. \end{itemize}
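To illustrate the homological conditions with an elementary example (not tied to any particular foliation): if the loops $\alpha_1,\alpha_2,\alpha_3$ have classes $\alpha_1^*=(1,0)$, $\alpha_2^*=(0,1)$ and $\alpha_3^*=(-1,-1)$, then $\sum_{i=1}^3\alpha_i^*=0$, and for a given $\kappa=(a,b)\in \Z^2$ one may choose an integer $k_3>\max\{-a,-b,0\}$ and set $k_1=a+k_3$, $k_2=b+k_3$; these are positive integers with $k_1\alpha_1^*+k_2\alpha_2^*+k_3\alpha_3^*=(a,b)$. Of course, for such a concatenation to be a fundamental loop one also needs each $\alpha_i$ to be positively transverse to $\mathcal{F}$.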
It is easy to see that if $\Sigma$ is a fundamental loop, then $\T^2\setminus [\Sigma]$ is a disjoint union of open topological disks.
If there exists a fundamental loop $\Sigma$ for $\mathcal{F}$, we say that $\mathcal{F}$ is \emph{gradient-like}. The key properties about gradient-like foliations that we will use are contained in the following
\begin{lemma} \label{lem:gradient} If $\mathcal{F}$ is a gradient-like foliation, then \begin{itemize} \item[(1)] every regular leaf of $\mathcal{F}$ is a connection, \item[(2)] there are no generalized cycles, and \item[(3)] there is a constant $M$ such that $\mathcal{D}(\Gamma)<M$ for each regular leaf $\Gamma$. \end{itemize} \end{lemma}
\begin{proof} We outline the proof, since the main ideas are contained in \S 10 of \cite{lecalvez-equivariant}. The difference here is that we do not assume that the set of singularities is finite. Let $\Sigma$ be a fundamental loop with base point $x_0$ as defined above, so $\T^2\setminus [\Sigma]$ is a disjoint union of simply connected open sets. After a perturbation of the arcs $\alpha_i$ to put them in general position, one may assume that $\T^2\setminus [\Sigma]$ has finitely many connected components, so it is a disjoint union of finitely many topological disks $\{D_i\}_{1\leq i\leq k}$.
A function $\Lambda$ is then defined on $\T^2 \setminus [\Sigma]$ by fixing a point $z_0\in \T^2\setminus [\Sigma]$ and letting $\Lambda(z)$ be the algebraic intersection number $\sigma\wedge \Sigma$ of any arc $\sigma$ joining $z_0$ to $z$ with $\Sigma$. This is independent of the choice of $\sigma$, because $\Sigma^*=0$. The function $\Lambda$ is constant on each disk $D_i$, and it has the property that if $\sigma$ is an arc joining $z\in \T^2\setminus [\Sigma]$ to $z'\in \T^2\setminus [\Sigma]$, then $\sigma\wedge \Sigma =\Lambda(z')-\Lambda(z)$. Note that $\Lambda$ attains at most $k$ different values (one for each disk $D_i$). The fact that $\Sigma$ is positively transverse to $\mathcal{F}$ implies that if $\Gamma\colon \R\to \T^2$ is any leaf of $\mathcal{F}$, then the map $t\mapsto \Lambda(\Gamma(t))$ (defined for all $t$ such that $\Gamma(t)\notin [\Sigma]$) is non-increasing, and it decreases after each $t$ such that $\Gamma(t)\in [\Sigma]$.
The proof of part (i) in Proposition 10.4 of \cite{lecalvez-equivariant} shows that $\mathcal{F}$ has no closed leaves: Suppose $\Gamma$ is a closed leaf of $\mathcal{F}$, and let $z$ be a point of $\Gamma$. Since there are no wandering points, by Proposition \ref{pro:pta} there is a positively transverse loop $\gamma$ based in $z$. This implies that $\Gamma\wedge \gamma<0$. On the other hand, there exist positive integers $a_1,\dots, a_n$ such that $-\gamma^*=a_1\alpha_1^*+\dots+a_n\alpha_n^*$, so that letting $\gamma'=\alpha_1^{a_1}*\cdots*\alpha_n^{a_n}$ one has $\Gamma\wedge\gamma'=\Gamma\wedge (-\gamma)=-\Gamma\wedge\gamma$. Thus $$0>\Gamma\wedge \gamma = -\Gamma\wedge \gamma' = -(a_1\Gamma\wedge \alpha_1+\cdots+a_n\Gamma\wedge\alpha_n) \geq 0,$$ where the latter inequality holds because $\alpha_i$ is a positively transverse arc and $a_i$ is a positive integer, for each $i$. This contradiction proves that $\mathcal{F}$ has no closed leaves.
To show that there is no generalized cycle of connections, first observe that by definition if there is one, then it contains a \emph{simple} cycle of connections; that is, a simple loop $\Gamma$ such that $[\Gamma]\setminus \operatorname{Sing}(\mathcal{F})$ consists of leaves of $\mathcal{F}$ with their orientation matching the orientation of $\Gamma$. But then, choosing $z\in [\Gamma]\setminus \operatorname{Sing}(\mathcal{F})$ we can repeat the previous argument by finding a positively transverse loop $\gamma$ based in $z$, and obtaining the same contradiction as before. This proves (2).
Recall that if $\Gamma\colon \R\to \T^2$ is a leaf of $\mathcal{F}$, the map $t\mapsto \Lambda(\Gamma(t))$ defined on $\R\setminus \Gamma^{-1}([\Sigma])$ is non-increasing, and it decreases after each $t$ such that $\Gamma(t)\in [\Sigma]$. Since $\Lambda$ attains at most $k$ different values (one for each disk $D_i$), it follows that $\Gamma$ intersects $\Sigma$ at most at $k$ points.
Let $\widehat{\mathcal{F}}$ be the lift of $\mathcal{F}$ to $\R^2$, and for each $i$, let $\widehat{D}_i$ be a connected component of $\pi^{-1}(D_i)$, and let $\mathcal{B} = \{\widehat{D}_i+v: v\in \Z^2, 1\leq i\leq k\}$. Then it follows from the previous paragraph that any regular leaf $\widehat{\Gamma}$ of $\widehat{\mathcal{F}}$ intersects at most $k+1$ elements of $\mathcal{B}$. Let $d=\max\{\diam(\widehat{D}_i):1\leq i\leq k\}$. Note that $\diam(D)\leq d$ for each $D\in \mathcal{B}$. We conclude from these facts that $\diam(\widehat{\Gamma})\leq M \doteq (k+1)d$, proving part (3).
Finally, part (1) follows from the following version of the Poincar\'e-Bendixson Theorem, which is a particular case of a theorem of Solntzev \cite{solntzev} (see also \cite[\S 1.78]{stepanov}) and can be stated in terms of continuous flows due to a theorem of Gutierrez \cite{gutierrez}. \begin{theorem} Let $\phi=\{\phi_t\}_{t\in \R}$ be a continuous flow on $\R^2$ with a totally disconnected set of singularities. If the forward orbit of a point $\{\phi_t(z)\}_{t\geq 0}$ is bounded, then its $\omega$-limit $\omega_{\phi}(z)$ is one of the following: \begin{itemize} \item A singularity; \item a closed orbit; \item a generalized cycle of connections. \end{itemize} \end{theorem}
Since, being an oriented foliation with singularities, $\widehat{\mathcal{F}}$ can be embedded in a flow (see \cite{whitney, whitney2}), we may apply the above theorem to $\widehat{\mathcal{F}}$. Since $\mathcal{F}$ has no generalized cycle of connections or closed leaves, neither does $\widehat{\mathcal{F}}$, and we conclude that the $\omega$-limit of every bounded leaf of $\widehat{\mathcal{F}}$ is a singularity (and similarly for the $\alpha$-limit). Since we already showed that every leaf is bounded, this proves (1), completing the proof of Lemma \ref{lem:gradient}. \end{proof}
\subsection{Existence of gradient-like Brouwer foliations} \label{sec:gradient-brouwer}
Throughout this section we assume that $f$ is a homeomorphism of $\T^2$ isotopic to the identity and $\widehat{f}$ is a lift of $f$ to $\R^2$ such that $\fix(\widehat{f})$ is totally disconnected, hence so is $\pi(\fix(\widehat{f}))$.
We observe that there exists an isotopy from the identity to $f$ that lifts to an isotopy from $\mathrm{Id}_{\R^2}$ to $\widehat{f}$: indeed, it suffices to choose any isotopy $(f_t)_{t\in [0,1]}$ from the identity to $f$ and its lift $(\widehat{f}_t)_{t\in [0,1]}$ such that $\widehat{f}_0=\mathrm{Id}_{\R^2}$. Noting that there is some $v\in \Z^2$ such that $\widehat{f}-\widehat{f}_1=v$, the isotopy from $\mathrm{Id}_{\T^2}$ to $f$ obtained by concatenating $(f_t)_{t\in [0,1]}$ with the projection to $\T^2$ of $(\widehat{f}_1+tv)_{t\in [0,1]}$ has the required property.
The next proposition is a direct consequence of Theorem \ref{th:jaulent} and the remarks that follow it.
\begin{proposition} \label{pro:brouwer} There exists an oriented foliation with singularities $\mathcal{F}$ of $\T^2$ and an isotopy $\mathcal{I}=(f_t)_{t\in [0,1]}$ from the identity to $f$ such that \begin{itemize} \item $\operatorname{Sing}(\mathcal{F})\subset \pi(\fix(\widehat{f}))$, \item $\mathcal{I}$ lifts to an isotopy from $\mathrm{Id}_{\R^2}$ to $\widehat{f}$, \item $\mathcal{F}$ is dynamically transverse to $\mathcal{I}$, and \item $\mathcal{I}$ fixes the singularities of $\mathcal{F}$. \end{itemize} \end{proposition}
Let $\mathcal{F}$ be the foliation from Proposition \ref{pro:brouwer}. Recall that for a loop $\gamma$ in $\T^2$, $\gamma^*$ denotes its homology class in $H_1(\T^2, \Z)\simeq \Z^2$. Fix $z\in \T^2\setminus X$, and consider the set $\mathcal{C}(z)$ of all homology classes $\kappa\in H_1(\T^2,\Z)$ such that there is a positively transverse loop $\gamma$ based at $z$ with $\gamma^* = \kappa$. Identifying $H_1(\T^2,\Z)$ with $\Z^2$ naturally and choosing $\widehat{z}\in \pi^{-1}(z)$, we see that $\mathcal{C}(z)$ coincides with the set of all $v\in \Z^2$ such that there is an arc in $\R^2$ positively transverse to the lifted foliation $\widehat{\mathcal{F}}$ joining $\widehat{z}$ to $\widehat{z}+v$. Note that $\mathcal{C}(z)$ is closed under addition: if $v,w\in \mathcal{C}(z)$, concatenating a positively transverse arc from $\widehat{z}$ to $\widehat{z}+v$ with the translate by $v$ of a positively transverse arc from $\widehat{z}$ to $\widehat{z}+w$ (which is still positively transverse, since $\widehat{\mathcal{F}}$ is invariant under integer translations) yields a positively transverse arc from $\widehat{z}$ to $\widehat{z}+v+w$, so $v+w\in \mathcal{C}(z)$.
The next proposition is contained in Lemma 10.3 and the first paragraph after its proof in \cite{lecalvez-equivariant}. The proof given there works without modifications in our context.
\begin{proposition}\label{pro:zerohull} If $f$ is nonwandering and the convex hull of $\mathcal{C}(z)$ is $\R^2$ for some $z\in \T^2$, then there is a fundamental loop. \end{proposition}
\begin{remark} \label{rem:zerohull} Note that $\R^2$ is the convex hull of $\mathcal{C}(z)$ if $0$ is in the interior of the convex hull of $\mathcal{C}(z)$, due to the fact that if $v\in \mathcal{C}(z)$ then $nv\in \mathcal{C}(z)$ for any $n\in \N$. Moreover, to show that the convex hull of $\mathcal{C}(z)$ contains $0$ in its interior it suffices to find $n$ positively transverse loops $\gamma_1,\dots,\gamma_n$ (not necessarily with base point $z$) such that $0$ is in the interior of the convex hull of $\{\gamma_1^*, \dots, \gamma_n^*\}$. In fact, note that if $z\notin X$, then using the fact that $f$ is nonwandering and $\T^2\setminus X$ is connected, Proposition \ref{pro:pta} implies that for each $i$ we may find positively transverse arcs $\sigma_i$ from $z$ to $\gamma_i(0)$ and $\sigma_i'$ from $\gamma_i(0)$ to $z$. For $m\in \N$, define $\eta_{i,m} = \sigma_i*\gamma_i^m*\sigma_i'$ for $1\leq i\leq n$. Then $\eta_{i,m}$ is a positively transverse loop with base point $z$, and $\eta_{i,m}^* = (\sigma_i*\sigma_i')^* + m\gamma_i^* = w_i+m\gamma_i^*$ where $w_i$ is independent of $m$. Since $0$ is in the interior of the convex hull of $\{\gamma_i^* : 1\leq i\leq n\}$, choosing $m$ large enough it follows easily that $0$ is in the interior of the convex hull of $\{\eta_{i,m}^*: 1\leq i\leq n\}\subset \mathcal{C}(z)$, as claimed. \end{remark}
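Purely as an illustration of Remark \ref{rem:zerohull} (an example added for concreteness, not used elsewhere): if one can exhibit three positively transverse loops whose homology classes are $(1,0)$, $(0,1)$ and $(-1,-1)$, then the remark applies, since these three vectors are not collinear and their average is $0$, so that $0$ lies in the interior of their convex hull.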
\section{Linking number of simply connected open sets}\label{sec:linking}
In this section we assume that $\widehat{\mathcal{I}}=(\widehat{f}_t)_{t\in [0,1]}$ is an isotopy from $\mathrm{Id}_{\R^2}$ to a homeomorphism $\widehat{f}\colon \R^2\to \R^2$, and $\widehat{X}$ is a closed set of fixed points of the isotopy $\widehat{\mathcal{I}}$, i.e.\ $\widehat{f}_t(p)=p$ for all $t\in [0,1]$ and $p\in \widehat{X}$.
\subsection{Winding number} Given $z\in \R^2$ and an arc $\gamma\colon [0,1]\to \R^2$ such that $z\notin [\gamma]$, we define a partial index as follows: consider the map $$\xi\colon [0,1]\to \mathbb{S}^1, \quad \xi(t) = \frac{\gamma(t)-z}{\norm{\gamma(t)-z}}$$ and let $\widetilde{\xi}\colon [0,1]\to \R$ be a lift to the universal covering, so that $e^{2\pi i\widetilde{\xi}(t)} = \xi(t)$. Then we define $$I(\gamma,z) = \widetilde{\xi}(1)-\widetilde{\xi}(0).$$ This number does not depend on the choice of the lift $\widetilde{\xi}$ or on the (orientation-preserving) parametrization of $\gamma$. If $\gamma$ is a loop, then $I(\gamma,z)$ is an integer and coincides with the winding number of $\gamma$ around $z$. If $\gamma$ and $\gamma'$ are arcs with $\gamma(1)=\gamma'(0)$ and $z\notin [\gamma]\cup[\gamma']$, then $$I(\gamma*\gamma',z) = I(\gamma,z)+I(\gamma',z).$$ Additionally, $I(\gamma,z)$ is invariant under homotopies in $\R^2\setminus \{z\}$ fixing the endpoints of $\gamma$. A simple consequence of this fact is that if $\gamma$ is a loop and $I(\gamma,z)\neq 0$, then $z$ must be in a bounded connected component of $\R^2\setminus [\gamma]$.
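As an elementary illustration (added only as an example; it plays no role in the arguments), suppose $\gamma(t) = z + (\cos 2\pi t, \sin 2\pi t)$ is the unit circle around $z$ traversed once counterclockwise. Then $\xi(t) = (\cos 2\pi t, \sin 2\pi t)$, one may take $\widetilde{\xi}(t)=t$, and hence $$I(\gamma,z) = \widetilde{\xi}(1)-\widetilde{\xi}(0) = 1.$$ Traversing the same circle clockwise gives $I(\gamma,z)=-1$, while $I(\gamma,z')=0$ for any point $z'$ outside the closed disk bounded by $[\gamma]$.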
\subsection{Linking number of periodic points}
\begin{notation} Given $z\in \R^2$, we denote by $\widehat{\gamma}_z$ the arc $(\widehat{f}_t(z))_{t\in [0,1]}$, and for $n\in \N$ we define $$\widehat{\gamma}_z^n = \widehat{\gamma}_z*\widehat{\gamma}_{f(z)}*\cdots*\widehat{\gamma}_{f^{n-1}(z)}.$$ \end{notation}
If $p\in \widehat{X}$ (so $p$ is fixed by $\widehat{\mathcal{I}}$) and $q$ is a periodic point of $\widehat{f}$, then we define the linking number $I_{\widehat{\mathcal{I}}}(q,p)\in \Z$ as follows. Let $k$ be the smallest positive integer such that $\widehat{f}^k(q)=q$. Observing that $\widehat{\gamma}_q^k$ is a loop, we let $$I_{\widehat{\mathcal{I}}}(q,p) = I(\widehat{\gamma}_q^k, p).$$
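For instance (purely as an illustration, with the isotopy chosen ad hoc for this example), let $\widehat{f}_t$ be the rotation of angle $2\pi t$ around a point $p$, so that $\widehat{f}=\widehat{f}_1=\mathrm{Id}_{\R^2}$ and $\widehat{X}=\{p\}$ is fixed by the whole isotopy. Every $q\neq p$ is then a fixed point of $\widehat{f}$ (so $k=1$), its trajectory $\widehat{\gamma}_q$ is the circle of radius $\norm{q-p}$ around $p$ traversed once, and therefore $I_{\widehat{\mathcal{I}}}(q,p)=1$, while the winding number $I(\widehat{\gamma}_q,p')$ vanishes for any point $p'$ outside the closed disk bounded by that circle.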
We will extend this definition, considering a periodic (possibly unbounded) simply connected set instead of the periodic point $q$.
\subsection{Linking number of open periodic simply connected sets}
\begin{definition}[and Claim]\label{def:index-U} Suppose $U\subset \R^2$ is a simply connected $\widehat{f}$-periodic open set and $p\in \widehat{X}\setminus U$ is given. Let $k$ be the smallest positive integer such that $\widehat{f}^k(U)=U$. Fix $z\in U$, and let $\sigma_z$ be an arc contained in $U$ and joining $\widehat{f}^k(z)$ to $z$. The \emph{linking number} of $U$ and $p$ is defined as $I_{\widehat{\mathcal{I}}}(U,p) = I(\widehat{\gamma}^k_z*\sigma_z, p)$. This number does not depend on the choice of $z$ or the arc $\sigma_z$ in $U$. \end{definition}
\begin{proof}[Proof of the claim] First observe that $I_{\mathcal{\widehat{I}}}(U,p)$ does not depend on the choice of $\sigma_z$ because if $\sigma'_z$ is any other arc in $U$ joining $\widehat{f}^k(z)$ to $z$, then $I(\sigma_z*(-\sigma'_z),p)=0$ because $p\notin U$, and $U$ is simply connected. Thus $I(\widehat{\gamma}^k_z*\sigma_z, p) =I(\widehat{\gamma}^k_z, p)+ I(\sigma_z,p) = I(\widehat{\gamma}^k_z, p)+I(\sigma_z',p) = I(\widehat{\gamma}^k_z*\sigma_z', p)$ as required.
Now let $z'$ be another point in $U$, and fix an arc $\eta$ in $U$ joining $z$ to $z'$.
We use the notation $\eta^s(t) = \eta|_{[0,s]}(st)$ (so $\eta^s$ is the sub-arc of $\eta$ from $\eta(0)$ to $\eta(s)$). Letting $\sigma_{z'}=(-\widehat{f}^k\circ\eta)*\sigma_z*\eta$, which is an arc in $U$ joining $\widehat{f}^k(z')$ to $z'$, we have a homotopy $$\left(\widehat{\gamma}^k_{\eta(s)}*(-\widehat{f}^k\circ\eta^s)*\sigma_z*\eta^s\right)_{s\in [0,1]}$$ from $\widehat{\gamma}^k_z*\sigma_z$ to $\widehat{\gamma}^k_{z'}*\sigma_{z'}$ in $\R^2\setminus \{p\}$, and therefore $$I(\widehat{\gamma}^k_z*\sigma_z,p)= I(\widehat{\gamma}^k_{z'}*\sigma_{z'},p),$$ proving the independence on the choice of $z$. \end{proof}
As a consequence of the independence on the choice of $z$ or $\sigma_z$ in the previous definition, we obtain the following
\begin{proposition}\label{pro:link-periodic} Let $U$ be the set from Definition \ref{def:index-U}, and suppose that there is $q\in U$ such that $\widehat{f}^k(q)=q$ (where $k$ is the smallest positive integer such that $\widehat{f}^k(U)=U$). Then $I_{\widehat{\mathcal{I}}}(U,p) = I_{\widehat{\mathcal{I}}}(q,p) = I(\widehat{\gamma}^k_q,p)$ for any $p\in \widehat{X}\setminus U$. \end{proposition}
The proof is immediate by using $z=q$ and the constant arc $\sigma_z(t)=z$ in Definition \ref{def:index-U}.
\subsection{A linking lemma} The following lemma is key in the proof of Theorem \ref{th:bdfix}; it is particularly useful when working with a gradient-like Brouwer foliation. Note that we are not assuming in this section that $\widehat{f}$ is a lift of a torus homeomorphism.
\begin{lemma}\label{lem:inter-link} Let $U\subset \R^2$ be an open simply connected $\widehat{f}$-periodic set, and assume that there are no wandering points of $\widehat{f}$ in $U$. Let $\widehat{\mathcal{F}}$ be an oriented foliation with singularities of $\R^2$ such that $\widehat{X}=\operatorname{Sing}(\widehat{\mathcal{F}})$ and for each $z\in \R^2\setminus \widehat{X}$, the arc $\widehat{\gamma}_z$ is homotopic with fixed endpoints in $\R^2\setminus \widehat{X}$ to an arc positively transverse to the foliation.
Suppose $\Gamma$ is a leaf of $\widehat{\mathcal{F}}$ joining $p\in \widehat{X}\setminus U$ to $q\in \widehat{X}\setminus U$ and intersecting $\overline{U}$. Then either $I_{\mathcal{\widehat{I}}}(U,p)\neq 0$ or $I_{\mathcal{\widehat{I}}}(U,q)\neq 0$. \end{lemma} \begin{proof} Let $A$ be the annulus obtained by removing the points $p,q$ from the one-point compactification $\R^2\cup\{\infty\}$ of
$\R^2$; that is, $A = \R^2\cup\{\infty\}\setminus \{p,q\}$, and let $\tau\colon\widetilde{A}\to A$ be the universal covering. Note that the isotopy $\widehat{\mathcal{I}}|_{\R^2\setminus\{p,q\}}$ extends to $A$ by fixing the point at $\infty$, and this extension lifts to an isotopy $\widetilde{\mathcal{I}}=(\widetilde{f}_t)_{t\in [0,1]}$ from $\mathrm{Id}_{\widetilde{A}}$ to some map $\widetilde{f}=\widetilde{f}_1$, which commutes with the group of covering transformations $\operatorname{Deck}(\tau)$. The foliation $\widehat{\mathcal{F}}|_{\R^2\setminus \{p,q\}}$ also extends to $A$ by adding a singularity at $\infty$, and this extension lifts to a foliation $\widetilde{\mathcal{F}}$ of $\widetilde{A}$ with singularities in $\widetilde{X} = \tau^{-1}((\widehat{X}\cup\{\infty\})\setminus \{p,q\})$.
Because $\widehat{\mathcal{F}}$ is dynamically transverse to $\widehat{\mathcal{I}}$, one easily sees that $\widetilde{\mathcal{F}}$ is also dynamically transverse to $\widetilde{\mathcal{I}}$; i.e.\ if $z\in \widetilde{A}$ is not fixed by $\widetilde{\mathcal{I}}$, then the arc $(\widetilde{f}_t(z))_{t\in[0,1]}$ is homotopic, with fixed endpoints in $\widetilde{A}\setminus \widetilde{X}$, to an arc positively transverse to $\widetilde{\mathcal{F}}$.
Let $\widetilde{U}$ be a connected component of $\tau^{-1}(U)$. Then $\widetilde{U}$ is simply connected, and $\tau|_{\widetilde{U}}$ is injective. Moreover, $\widetilde{f}^k(\widetilde{U})=T\widetilde{U}$ for some covering transformation $T\in \operatorname{Deck}(\tau)$, where $k$ is the least positive integer such that $\widehat{f}^k(U)=U$.
We will show that $T\neq \mathrm{Id}$. Suppose for contradiction that $\widetilde{f}^k(\widetilde{U})=\widetilde{U}$. Let $z\in [\Gamma]\cap \cl(U)$, choose $\widetilde{z}\in \tau^{-1}(z)$, and let $\widetilde{\Gamma}$ be the leaf of $\widetilde{\mathcal{F}}$ through $\widetilde{z}$ (so that $\tau(\widetilde{\Gamma})=\Gamma$).
From the fact that the $\omega$-limit and $\alpha$-limit of $\Gamma$ are $q$ and $p$, respectively, it follows that $\widetilde{\Gamma}$ is a proper embedding of $\R$ in $\widetilde{A}\simeq \R^2$. Thus $\widetilde{A}\setminus [\widetilde{\Gamma}]$ has exactly two connected components, and the fact that $\widetilde{\mathcal{F}}$ is dynamically transverse implies that $\widetilde{\Gamma}$ is a Brouwer line; i.e.\ $\widetilde{f}(\widetilde{\Gamma})$ and $\widetilde{f}^{-1}(\widetilde{\Gamma})$ belong to different connected components of $\widetilde{A}\setminus [\widetilde{\Gamma}]$. This implies that one of the connected components $V$ of $\widetilde{A}\setminus [\widetilde{\Gamma}]$ satisfies $\widetilde{f}(\cl{V})\subset V$. It follows from this fact that every point of $[\widetilde{\Gamma}]$ is wandering for $\widetilde{f}$; in particular $\widetilde{z}$ is wandering for $\widetilde{f}$, so there is a neighborhood $W$ of $\widetilde{z}$ such that $\widetilde{f}^{n}(W)\cap W=\emptyset$ for all $n\in \N$. But $\tau(W)\cap U\neq \emptyset$, because $z= \tau(\widetilde{z})\in \cl(U)$; thus we can find $T'\in \operatorname{Deck}(\tau)$ such that $T'\widetilde{U}\cap W\neq \emptyset$ (see Figure \ref{fig:prelimi3}). Since $\widetilde{f}$ commutes with the Deck transformations, it follows that $\widetilde{f}^k(T'\widetilde{U})=T'\widetilde{U}$, and since $\tau|_{T'\widetilde{U}}$ is injective and $\widehat{f}^k$ has no wandering points in $U$, we conclude that $\widetilde{f}^k$ has no wandering points in $T'\widetilde{U}$, contradicting the fact that $T'\widetilde{U}\cap W\neq \emptyset$.
\begin{figure}
\caption{Proof of Lemma \ref{lem:inter-link}}
\label{fig:prelimi3}
\end{figure}
Thus $T\neq \mathrm{Id}$. Fix $\widetilde{z}\in \widetilde{U}$, and as in the previous section, let $\widetilde{\gamma}_{\widetilde{z}}$ denote the arc $(\widetilde{f}_t(\widetilde{z}))_{t\in [0,1]}$ and $\widetilde{\gamma}_{\widetilde{z}}^k = \widetilde{\gamma}_{\widetilde{z}}*\widetilde{\gamma}_{\widetilde{f}(\widetilde{z})}*\cdots *\widetilde{\gamma}_{\widetilde{f}^{k-1}(\widetilde{z})}$. Choose any arc $\widetilde{\sigma}_{\widetilde{z}}$ in $T\widetilde{U}$ joining $\widetilde{f}^k(\widetilde{z})$ to $T\widetilde{z}$. Then letting $z=\tau(\widetilde{z})$ and $\sigma_z = \tau\circ \widetilde{\sigma}_{\widetilde{z}}$, it follows that $\tau\circ(\widetilde{\gamma}^k_{\widetilde{z}}*\widetilde{\sigma}_{\widetilde{z}}) = \widehat{\gamma}^k_z*\sigma_z$ is a homotopically nontrivial loop in $A$, since it lifts to a path joining $\widetilde{z}$ to $T\widetilde{z}$. Of course it is still homotopically nontrivial in $A\setminus \{\infty\} = \R^2\setminus\{p,q\}$. This means that $I(\widehat{\gamma}^k_z*\sigma_z, p)\neq 0$ or $I(\widehat{\gamma}^k_z*\sigma_z,q)\neq 0$. Since $\sigma_z$ is an arc in $U$ joining $\widehat{f}^k(z)$ to $z$, it follows from the definition that $I_{\mathcal{\widehat{I}}}(U,p)\neq 0$ or $I_{\mathcal{\widehat{I}}}(U,q)\neq 0$, as claimed. \end{proof}
\begin{remark} Looking at the above proof in more detail, one may conclude the following more precise statement: there is $m>0$ such that $I_{\widehat{\mathcal{I}}}(U,p) + I_{\widehat{\mathcal{I}}}(U,q) = m$. To see this, we may choose a simple loop $\alpha$ in $\R^2$ that bounds a disk containing $p$ but not $q$, with $\alpha$ oriented clockwise, as a generator of $\operatorname{Deck}(\tau)$. That is, we may assume that $\operatorname{Deck}(\tau) = \{T_0^n : n\in \Z\}$ where $T_0$ is a covering transformation of $\tau$ such that $T_0(\widetilde{\alpha}(0))=\widetilde{\alpha}(1)$, where $\widetilde{\alpha}$ is any lift of $\alpha$ to $\widetilde{A}$. Further, we may choose $\alpha$ so that it is positively transverse to $\Gamma$. In this setting, when we conclude that $T\neq \mathrm{Id}$ in the proof above, the orientation of $\Gamma$ (from $p$ to $q$) implies that $T=T_0^m$ for some $m>0$. Therefore the loop $\widehat{\gamma}^k_z*\sigma_z$ is homotopic to $\alpha^m$ in $A$, so that $I(\widehat{\gamma}^k_z*\sigma_z,p)+I(\widehat{\gamma}^k_z*\sigma_z,q) = I(\alpha^m, p)+I(\alpha^m,q)$. One can conclude easily from this fact that $I_{\widehat{\mathcal{I}}}(U,p)+I_{\widehat{\mathcal{I}}}(U,q) = m$. \end{remark}
\subsection{Application to gradient-like foliations}
Let us assume in this subsection the same hypotheses of \S\ref{sec:gradient-brouwer}, i.e.\ $f\colon \T^2\to \T^2$ is a nonwandering homeomorphism homotopic to the identity with a totally disconnected set of fixed points and $\widehat{f}$ is a lift of $f$. Let $\mathcal{F}$ and $\mathcal{I}$ be the oriented foliation with singularities and the isotopy given by Proposition \ref{pro:brouwer}, so that \begin{itemize} \item $\operatorname{Sing}(\mathcal{F})\subset \pi(\fix(\widehat{f}))$, \item $\mathcal{I}$ lifts to an isotopy $\widehat{\mathcal{I}} = (\widehat{f}_t)_{t\in [0,1]}$ from $\mathrm{Id}_{\R^2}$ to $\widehat{f}$, \item $\mathcal{F}$ is dynamically transverse to $\mathcal{I}$, and \item $\mathcal{I}$ fixes the singularities of $\mathcal{F}$ (and $\widehat{\mathcal{I}}$ fixes the singularities of $\widehat{\mathcal{F}}$) \end{itemize}
We assume additionally that $\mathcal{F}$ is gradient-like. Denote by $\widehat{\mathcal{F}}$ the lift of $\mathcal{F}$ to $\R^2$, and let $\widehat{X}=\operatorname{Sing}(\widehat{\mathcal{F}})$.
\begin{proposition}\label{pro:large-X} For each $k\in \N$, there is a constant $M_k$ such that if $U\subset \R^2$ is an open simply connected $\widehat{f}^k$-invariant set without wandering points and $\diam(U)>M_k$, then $U\cap \widehat{X}\neq \emptyset$. \end{proposition}
\begin{proof} Since $\mathcal{F}$ is gradient-like, there is a constant $M'$ such that every regular leaf $\Gamma$ of $\widehat{\mathcal{F}}$ connects two different elements of $\widehat{X}$ and $\diam(\Gamma)<M'$. Let $$M'' = \sup \left\{\norm{\smash{\widehat{f}_t(x)-\widehat{f}_s(x)}} : s,t\in [0,1], x\in [0,1]^2\right\}.$$ From the fact that each $\widehat{f}_t$ commutes with integer translations, we have that $\diam([\widehat{\gamma}_x])\leq M''$ for any $x\in \R^2$.
Let $k$ be the smallest positive integer such that $\widehat{f}^k(U)=U$.
Since $\widehat{f}^k|_{U}$ is nonwandering (because $\widehat{f}|_U$ is), Proposition \ref{pro:brouwer-trivial} implies that $\widehat{f}^k$ has a fixed point $z$ in $U$. Define $$A=\left\{p\in \widehat{X}\setminus U: I_{\widehat{\mathcal{I}}}(z,p) \neq 0\right\}.$$ Observe that $A$ coincides with the set of all $p\in \widehat{X}\setminus U$ such that $I(\widehat{\gamma}^k_{z},p)\neq 0$, which is contained in the convex hull of $[\widehat{\gamma}^k_{z}]$. Since $\diam([\widehat{\gamma}^k_{z}])\leq \sum_{i=0}^{k-1} \diam([\widehat{\gamma}_{f^i(z)}]) \leq kM''$, we conclude that $A\subset B_{kM''}(z)$ (the ball of center $z$ and radius $kM''$).
On the other hand, by Proposition \ref{pro:link-periodic} it follows that $$A = \left\{p\in \widehat{X}\setminus U : I_{\widehat{\mathcal{I}}}(U,p) \neq 0\right\}.$$ \begin{figure}
\caption{Proof of Proposition \ref{pro:large-X}}
\label{fig:appli1}
\end{figure} Suppose that $\diam(U)>M_k\doteq 2(kM''+M')$. We claim that $U$ intersects $\widehat{X}$. Suppose on the contrary that $U\cap \widehat{X}=\emptyset$. There is some point $x\in U\setminus \overline{B}_{kM''+M'}(z)$, and by our assumption $x\notin \widehat{X}$. See Figure \ref{fig:appli1}. The leaf $\Gamma$ of $\widehat{\mathcal{F}}$ through $x$ is such that $\diam(\Gamma)<M'$, and so its endpoints are two elements $p,q$ of $\widehat{X}\setminus \overline{B}_{kM''}(z)$. Since $U\cap \widehat{X}=\emptyset$, Lemma \ref{lem:inter-link} implies that either $I_{\widehat{\mathcal{I}}}(U,p) \neq 0$ or $I_{\widehat{\mathcal{I}}}(U,q) \neq 0$, so that either $p$ or $q$ is in $A$. This contradicts the fact that $A\subset B_{kM''}(z)$. \end{proof}
As an immediate consequence we have the following \begin{corollary} Any $\widehat{f}$-periodic free topological disk without wandering points is bounded (by a bound that depends only on the period). \end{corollary}
\begin{proposition}\label{pro:inter-endpoint} If $U$ is an $\widehat{f}$-periodic simply connected open set intersecting $\widehat{X}$ and there are no wandering points of $\widehat{f}$ in $U$, then every leaf of $\widehat{\mathcal{F}}$ that intersects $\overline{U}$ has one endpoint in $U$. \end{proposition} \begin{proof} Let $k$ be the smallest positive integer such that $\widehat{f}^k(U)=U$, and let $z\in U\cap \widehat{X}$. Since $\widehat{f}^k(z)=z$, Proposition \ref{pro:link-periodic} implies that $I_{\widehat{\mathcal{I}}}(U,p)=I_{\widehat{\mathcal{I}}}(z,p)$ for any $p\in \widehat{X}\setminus U$. Since $z\in \widehat{X}$ is fixed by the isotopy, it follows that $I_{\widehat{\mathcal{I}}}(z,p)=0$ and therefore $I_{\widehat{\mathcal{I}}}(U,p)=0$ for any $p\in \widehat{X}\setminus U$.
Suppose that a regular leaf $\Gamma$ of $\widehat{\mathcal{F}}$ intersects $\overline{U}$, and let $p_1$ and $p_2$ be the endpoints of $\Gamma$. If neither $p_1$ nor $p_2$ is in $U$, then Lemma \ref{lem:inter-link} implies that $I_{\widehat{\mathcal{I}}}(U,p_i)\neq 0$ for some $i\in \{1,2\}$, contradicting our previous claim. Therefore, one of the two endpoints of $\Gamma$ belongs to $U$. \end{proof}
\section{A bound on invariant inessential open sets: Proof of Theorem \ref{th:bdfix}} \label{sec:bdfix}
This section is devoted to the proof of \begin{theorem*}[\ref{th:bdfix}] If $f\colon \T^2\to \T^2$ is a nonwandering non-annular homeomorphism homotopic to the identity then one and only one of the following properties hold: \begin{itemize} \item[(1)] There exists a constant $M$ such that each $f$-invariant open topological disk $U$ satisfies $\mathcal{D}(U)<M$; or \item[(2)] $\fix(f)$ is fully essential and $f$ is irrotational. \end{itemize} \end{theorem*}
Let us outline the steps of the proof of Theorem \ref{th:bdfix}. First we use the fact that $f$ is non-annular to show that if $\fix(f)$ is essential, then it is fully essential, and case $(2)$ holds. Next, assuming the theorem does not hold, we show that it suffices to consider the case where $\fix(f)$ is totally disconnected, by collapsing the components of the filling of $\fix(f)$. For such $f$, and assuming that there are arbitrarily `large' invariant open topological disks, we show that there is a gradient-like Brouwer foliation associated to a lift $\widehat{f}$ of $f$. Then we show that the invariant topological disks are bounded, as follows: if there is an unbounded invariant topological disk $U$, using the linking number defined in \S\ref{sec:linking}, and more specifically Proposition \ref{pro:inter-endpoint}, we have that every leaf of the foliation that intersects $\overline{U}$ has an endpoint in $U$. Using this fact and a geometric argument relying on the fact that $f$ is non-annular, we are able to conclude that the boundary of $U$ consists of singularities of the foliation (contradicting the fact that the set of singularities is totally disconnected). After this, we are able to obtain a sequence of pairwise disjoint bounded simply connected invariant sets with increasingly large diameter, and a variation of the previous argument again leads to a contradiction.
\subsection{The case where $\fix(f)$ is essential}
\begin{proposition}\label{pro:fix-ess-irrotational} Under the hypotheses of Theorem \ref{th:bdfix}, suppose $\fix(f)$ is essential. Then $\fix(f)$ is fully essential, and there is a lift $\widehat{f}$ of $f$ such that $\pi(\fix{\widehat{f}})=\fix(f)$. Moreover, $\rho(\widehat{f})=\{0\}$ (i.e.\ $f$ is irrotational). \end{proposition} \begin{proof} If $\fix(f)$ is essential but not fully essential, then $\T^2\setminus \fix(f)$ is essential, and so it has some essential connected component $A$. The fact that $\fix(f)$ is essential implies that $A$ is not fully essential, and so it must be annular. Since connected components of $\T^2\setminus \fix(f)$ are permuted by $f$, $A$ is a fixed annular set for $f^k$ for some $k>0$, and so $f^k$ is annular by Proposition \ref{pro:annular}. Moreover, since $f$ has a fixed point, by the same Proposition we conclude that $f$ is annular. This contradicts our hypothesis.
Thus $\fix(f)$ is fully essential, and there is some fully essential connected component $K$ of $\fix(f)$. Fix $z_0\in K$ and let $\widehat{f}$ be a lift of $f$ such that $\widehat{f}(\widehat{z}_0)=\widehat{z}_0$ for any $\widehat{z}_0\in \pi^{-1}(z_0)$. We claim that $\pi^{-1}(K)\subset \fix(\widehat{f})$. Indeed, the map defined by $z\mapsto \widehat{f}(\widehat{z})-\widehat{z}$ for $\widehat{z}\in \pi^{-1}(z)$ is well defined on $\T^2$ and continuous, and it takes integer values on $K$. Since it is null at $z_0$ and $K$ is connected, it must be constantly zero on $K$. Thus $K\subset \pi(\fix(\widehat{f}))$ (so $\pi(\fix(\widehat{f}))$ is fully essential).
Let us prove that $f$ is irrotational. Suppose for contradiction that $\rho(\widehat{f})\neq \{0\}$. Then $\rho(\widehat{f})$ has some nonzero extremal point $w$, and so by Proposition \ref{pro:rotation-set}, there is an $f$-ergodic Borel probability measure $\mu$ on $\T^2$ such that $\mu$-almost every point $x\in \T^2$ is such that, if $\widehat{x}\in \pi^{-1}(x)$, then $$\lim_{n\to \infty} \frac{\widehat{f}^n(\widehat{x})-\widehat{x}}{n}= w.$$
By Poincar\'e recurrence, we may choose a recurrent $x\in \T^2$ such that the above condition holds. Let $\widehat{x}\in \pi^{-1}(x)$ and let $U$ be the connected component of $\R^2\setminus \fix(\widehat{f})$ that contains $\widehat{x}$. From Theorem \ref{th:brown-kister} we know that $\widehat{f}(U)=U$. Moreover, since $\pi(U)$ is disjoint from $K$, which is fully essential, we have that $\pi(U)$ is inessential, so $\pi|_U$ is injective.
Since $x$ is recurrent, there is a sequence $(n_k)_{k\in \N}$ of integers with $\lim_{k\to \infty} n_k=\infty$ such that $f^{n_k}(x)\to x$ as $k\to \infty$. Since $\pi|_U$ is injective, it conjugates $\widehat{f}|_{U}$ to $f|_{\pi(U)}$. In particular, $\widehat{f}^{n_k}(\widehat{x})\to \widehat{x}$ as $k\to \infty$. But then $(\widehat{f}^{n_k}(\widehat{x})-\widehat{x})/n_k\to 0\neq w$ as $k\to \infty$, contradicting our choice of $x$. This shows that $f$ is irrotational.
The claim that $\pi(\fix(\widehat{f}))=\fix(f)$ follows from the fact that $f$ is irrotational. \end{proof}
\begin{proposition} Under the hypotheses of Theorem \ref{th:bdfix}, if $\fix(f)$ is essential, then it is fully essential, and for each $M>0$ and $v\in \Z^2_*$ there is some connected component $U$ of $\T^2\setminus \fix(f)$ such that $\mathcal{D}_v(U)>M$. \end{proposition}
\begin{proof} The previous proposition implies that $\fix(f)$ is fully essential and $\fix(\widehat{f})=\pi^{-1}(\fix(f))$. Each connected component of $\R^2\setminus \fix(\widehat{f})$ is $\widehat{f}$-invariant. If there is a uniform bound on the diameter of such components, then one has a uniform bound on $|\widehat{f}^n(z)-z|$ for $z\in \R^2$, $n\in \Z$, contradicting the fact that $f$ is non-annular. \end{proof}
\subsection{The case where $\fix(f)$ is inessential} \setcounter{claim}{0} To complete the proof of Theorem \ref{th:bdfix} we will assume from now on that the theorem does not hold, and we will seek a contradiction. Thus we assume that there exists $f$ such that the hypotheses of the theorem hold but the thesis does not. The previous two propositions imply that $\fix(f)$ is essential if and only if case (2) of the theorem holds. Therefore, we may assume that $\fix(f)$ is inessential and item (1) does not hold. This means that for any $M$ there exists an open connected $f$-invariant topological disk $U$ such that $\mathcal{D}(U)>M$.
\subsection{Fixing a lift $\widehat{f}$}
\begin{claim} There is a lift $\widehat{f}$ of $f$ and a sequence $(U_n)_{n\in \N}$ of open $\widehat{f}$-invariant topological disks in $\R^2$ such that $\pi(U_n)$ is inessential and $\diam(U_n)\to \infty$ as $n\to \infty$. \end{claim}
\begin{proof} There are $f$-invariant topological disks of arbitrarily large diameter, and each contains a fixed point of $f$ by Proposition \ref{pro:brouwer-trivial}. The claim follows from the fact that only finitely many lifts of ${f}$ may have fixed points. \end{proof}
From now on we will work with the lift $\widehat{f}$ and the sequence $(U_n)_{n\in \N}$ from the previous claim.
\begin{claim}\label{claim:bdfix-nw} $U_n+v\subset \Omega(\widehat{f})$ for all $n\in \N$ and $v\in \Z^2$. \end{claim}
\begin{proof} Since $\pi(U_n+v) = \pi(U_n)$ is inessential, $\pi|_{U_n+v}$ is a homeomorphism onto its image which conjugates $\widehat{f}|_{U_n+v}$ to $f|_{\pi(U_n)}$. Since the latter is nonwandering, so is $\widehat{f}|_{U_n+v}$, implying that $U_n+v\subset \Omega(\widehat{f})$. \end{proof}
\subsection{Simplification of $\fix(f)$}
We will show that it is possible to assume that $\fix(f)$ is totally disconnected, by collapsing the connected components of $\operatorname{Fill}(\fix(f))$ to points, while keeping all the hypotheses. To do so, we need to rule out the possibility that this process leads to a situation where there are no longer arbitrarily large simply connected sets.
\begin{claim} For each $M\in \R$ there is an open connected $\widehat{f}$-invariant set $U\subset \R^2\setminus \fix(\widehat{f})$ such that $\pi(U)$ is inessential and $\diam(U)>M$. \end{claim} \begin{proof} Let $\mathcal{U}$ be the family of all open connected inessential subsets of $\T^2\setminus \pi(\fix(\widehat{f}))$ which are the projection of an $\widehat{f}$-invariant subset of $\R^2$. We want to show that $\sup_{V\in \mathcal{U}}\mathcal{D}(V)=\infty$. Suppose for contradiction that $\mathcal{D}(V)\leq M$ for all $V\in \mathcal{U}$.
Since $\diam(U_n)\to \infty$, after passing to a subsequence we may find $v\in \Z^2_*$ such that $\diam(P_v(U_n))\to \infty$, and since we are assuming that $\fix(f)$ is inessential, there is a simple loop $\gamma\subset \T^2\setminus \fix(f)$ with homology class $v^\perp$, so that $\gamma$ lifts to an arc $\widehat{\gamma}$ joining a point $z_0$ to $z_0+v^\perp$ and disjoint from $\fix(\widehat{f})$. To simplify the notation, we will assume that $v = (1,0)$ and $[\widehat{\gamma}] = \{0\}\times[0,1]$. The general case is analogous (in fact we can reduce the general case to this case by conjugating $f$ by an appropriately chosen homeomorphism).
We will show that for any given $m>0$ we can find $m$ pairwise disjoint subarcs $\gamma_1,\dots,\gamma_m$ of $\gamma$ such that $[\gamma_i]\cap f([\gamma_i])\neq \emptyset$ for each $i\in \{1,\dots, m\}$. This is enough to complete the proof of the claim, because it leads to a contradiction as follows: since $f$ has no fixed point on $[\gamma]$, we may choose $m$ such that $d(f(x),x)>1/m$ for each $x\in [\gamma]$. Since the arcs $\gamma_i$ are pairwise disjoint and $\gamma$ has length $1$ (because we are assuming it is a vertical circle), one of the arcs $\gamma_i$ has diameter at most $1/m$. Since $f([\gamma_i])$ intersects $[\gamma_i]$, it follows that there is a point $x\in [\gamma_i]$ such that $d(f(x),x)\leq 1/m$, contradicting our choice of $m$. This contradiction completes the proof, assuming the existence of the arcs $\gamma_i$. We devote the rest of the proof to proving the existence of such arcs.
Let $N_0\in \N$ be such that $N_0>M$, and denote $\Gamma=\{0\}\times \R$. If $m\in \N$ is fixed and $n$ is chosen large enough, then there is $i_0$ such that $U_n$ intersects $\Gamma+(N_0i,0)$ for each $i\in \{i_0,i_0+1,\dots, i_0+m+1\}$. Fix $i$ with $i_0< i \leq i_0+m$, and let $p_1\in U_n\cap (\Gamma+(N_0(i-1),0))$ and $p_2\in U_n\cap(\Gamma+(N_0(i+1),0))$. Then $p_1$ and $p_2$ are in different connected components of $U_n\setminus (\Gamma+(N_0i,0))$. From this and from the fact that $U_n$ is a topological disk, it is easy to verify that there is a connected component $\widehat{\gamma}_i$ of $U_n\cap (\Gamma+(N_0i,0))$ that separates $p_1$ from $p_2$ in $U_n$; that is, $U_n\setminus [\widehat{\gamma}_i]$ contains $p_1$ and $p_2$ in different connected components $V_1$ and $V_2$, respectively. Since $\widehat{\gamma}_i$ is a cross-cut of $U_n$ (i.e.\ a simple arc in $U_n$ joining two points of its boundary), we have $U_n\setminus [\widehat{\gamma}_i] = V_1\cup V_2$. Since $V_1\subset U_n$ intersects $\Gamma+(N_0(i-1),0)$ and has a point of $\Gamma+(N_0i,0)$ in its boundary, it follows that $\diam(V_1)\geq N_0>M$. Because of this, $V_1$ cannot be contained in $U_n\setminus \fix(\widehat{f})$: otherwise, the connected component of $U_n\setminus \fix(\widehat{f})$ that contains $V_1$ would project to an element of $\mathcal{U}$ (since Theorem \ref{th:brown-kister} implies that it is $\widehat{f}$-invariant), contradicting our assumption that $\mathcal{D}(V)\leq M$ for all $V\in \mathcal{U}$. Hence, $V_1$ contains a fixed point of $\widehat{f}$. Similarly, since $V_2\subset U_n$ intersects $\Gamma+(N_0(i+1),0)$ and its boundary intersects $\Gamma+(N_0i,0)$, we have $\diam(V_2)\geq N_0 > M$ and we conclude in the same way that $V_2$ contains a fixed point of $\widehat{f}$.
Therefore $\widehat{f}(V_1)\cap V_1\neq \emptyset$ and $\widehat{f}(V_2)\cap V_2\neq \emptyset$, and since $V_1\cup V_2 = U_n\setminus [\widehat{\gamma}_i]$ and $\widehat{f}|_{U_n}$ is nonwandering, it follows from these facts that $\widehat{f}([\widehat{\gamma}_i])\cap [\widehat{\gamma}_i]\neq \emptyset$. To complete the proof, observe that the arcs $\widehat{\gamma}_{i_0+1}, \dots, \widehat{\gamma}_{i_0+m}$ thus obtained project to pairwise disjoint subarcs $\gamma_1,\dots, \gamma_m$ of $\gamma$, because they are pairwise disjoint subarcs of $\pi^{-1}([\gamma])\cap U_n$, and $U_n$ projects to $\T^2$ injectively. \end{proof}
\begin{claim}\label{claim:bdfix13} We may assume that $\fix(f)$ is totally disconnected. \end{claim} \begin{proof} The previous claim implies that there exists a sequence $(\widehat{V}_n)_{n\in \N}$ of open connected $\widehat{f}$-invariant subsets of $\R^2\setminus \fix(\widehat{f})$ such that $\diam(\widehat{V}_n)\to \infty$ as $n\to \infty$ and $V_n=\pi(\widehat{V}_n)$ is inessential for each $n\in \N$. This implies that $V_n\subset \T^2\setminus \fix(f)$: since $\widehat{V}_n$ is $\widehat{f}$-invariant and projects injectively to $V_n$, any point of $\fix(f)\cap V_n$ would lift to a point of $\widehat{V}_n$ fixed by $\widehat{f}$, which is impossible because $\widehat{V}_n\cap \fix(\widehat{f})=\emptyset$.
Since $\fix(f)$ is inessential, so is $K=\operatorname{Fill}(\fix(f))$. Moreover, by Proposition \ref{pro:inessential-bound} there is a uniform bound on $\mathcal{D}(C)$ among the connected components $C$ of $K$. Since $\mathcal{D}(V_n)\to \infty$, this implies that there is $n_0$ such that $V_n\subset \T^2\setminus K$ if $n\geq n_0$.
Proposition \ref{pro:collapse} implies that there is a continuous surjection $h\colon \T^2\to \T^2$ homotopic to the identity and a homeomorphism $f'\colon \T^2\to \T^2$ such that $hf = f'h$, and additionally $h(K)$ is totally disconnected ($h$ collapses components of $K$ to points) and $h|_{\T^2\setminus K}$ is a homeomorphism onto $\T^2\setminus h(K)$. Furthermore, since every component of $K$ contains a fixed point of $f$, and there are no fixed points outside $K$, it follows that $h(K)=\fix(f')$. The map $f'$ is clearly nonwandering, and the sets $(h(V_n))_{n\geq n_0}$ provide a sequence of connected open $f'$-invariant subsets of $\T^2\setminus \fix(f')$. Moreover, since $h$ is homotopic to the identity, if $\widehat{h}\colon \R^2\to \R^2$ is a lift of $h$ then there is a constant $M'$ such that $\norm{\smash{\widehat{h}(x)-x}}<M'$ for all $x\in \R^2$. If $\widehat{V}_n$ is a connected component of $\pi^{-1}(V_n)$, then $\widehat{h}(\widehat{V}_n)$ is a connected component of $\pi^{-1}(h(V_n))$ and $$\mathcal{D}(h(V_n))=\diam(\widehat{h}(\widehat{V}_n))\geq \diam(\widehat{V}_n) -2M' = \mathcal{D}(V_n)-2M' \xrightarrow[n\to \infty]{} \infty.$$
Hence $\mathcal{D}(\operatorname{Fill}(h(V_n)))\to \infty$ as $n\to \infty$, and since $\operatorname{Fill}(h(V_n))$ is an $f'$-invariant topological disk, we have that $f'$ satisfies the hypotheses but not the thesis of the theorem. Thus, by working with $f'$ instead of $f$ from the beginning of the proof of the theorem, we may have assumed that $\fix(f)$ is totally disconnected. \end{proof}
\subsection{Unboundedness in every direction of the sets $U_n$}
\begin{claim}\label{claim:bdfix5} $\diam(\proj_v(U_n)) \to \infty$ as $n\to \infty$ for each $v\in \Z^2_*$. \end{claim} \begin{proof} Suppose the claim does not hold. Then, after passing to a subsequence of $(U_n)_{n\in \N}$, we may assume that there is $v\in \Z^2_*$ and a constant $M$ such that $\diam(\proj_v(\overline{U_n}))\leq M$ for all $n\in \N$. Replacing each $U_n$ by a suitable integer translate (which is still $\widehat{f}$-invariant and has the same projection to $\T^2$) and enlarging $M$ if necessary, we may further assume that $\proj_v(U_n)\subset [-M,M]$ for all $n\in \N$. Let $$A=\overline{\bigcup_{k\in \Z}\bigcup_{n\in \N} U_n+kv^\perp}.$$ The fact that $\diam(U_n)\to \infty$ implies that the sets $$V^-=\proj_v^{-1}((-\infty,-M))\text{ and } V^+=\proj_v^{-1}((M, \infty))$$ are contained in different connected components of $\R^2\setminus A$. Let us call these components $\widetilde{V}^-$ and $\widetilde{V}^+$, respectively.
Note that since each $U_n$ is $\widehat{f}$-invariant, we have $\widehat{f}(A)=A$, and so the connected components of $\R^2\setminus A$ are permuted by $\widehat{f}$. The fact that $\widehat{f}(x)-x$ is uniformly bounded implies that $\widehat{f}(\widetilde{V}^+)=\widetilde{V}^+$ and $\widehat{f}(\widetilde{V}^-)=\widetilde{V}^-$. Since $V^-\subset \widetilde{V}^-$ and $\widetilde{V}^-$ is disjoint from $\widetilde{V}^+\supset V^+$, we have $$\proj_v^{-1}((-\infty,-M))\subset \widetilde{V}^-\subset \proj_v^{-1}((-\infty, M]),$$ and we conclude from Proposition \ref{pro:wall-annular} that $f$ is annular. This contradicts the hypothesis of the theorem, proving the claim. \end{proof}
\subsection{Maximality and disjointness of $U_n$}
\begin{claim}\label{claim:bdfix7} We may assume that each $U_n$ is maximal with respect to the property of $\pi(U_n)$ being open, $f$-invariant and simply connected (i.e.\ $U_n$ is not properly contained in a set with the same properties). \end{claim} \begin{proof} By a direct application of Zorn's Lemma, there exists an open simply connected $f$-invariant set $\widetilde{U}_n'\subset \T^2$ such that $\pi(U_n)\subset \widetilde{U}_n'$ and $\widetilde{U}_n'$ is maximal with the property of being open, $f$-invariant, and simply connected. The connected component $\widetilde{U}_n$ of $\pi^{-1}(\widetilde{U}_n')$ that contains $U_n$ satisfies the required properties, so we may replace $U_n$ by $\widetilde{U}_n$ for each $n\in \N$. \end{proof}
\begin{claim} If $U_n$ and $U_m$ are bounded and $\pi(U_n)\cap \pi(U_m) \neq \emptyset$ then $\pi(U_n)=\pi(U_m)$. \end{claim} \begin{proof} If $\pi(U_n)\cap \pi(U_m)\neq \emptyset$, then there exists $w\in \Z^2$ such that $U_n\cap (U_m+w)\neq \emptyset$. Let $U=\operatorname{Fill}(U_n\cup (U_m+w))$, which is bounded and $\widehat{f}$-invariant. Suppose that $\pi(U)$ is essential. Then there is $v\in \Z^2_*$ such that $U\cap (U+v)\neq \emptyset$. Let $V= \bigcup_{k\in \Z} U+kv$. The fact that $U$ is bounded implies that $\diam \proj_{v^\perp}(V)<\infty$. However, $V$ is at the same time connected, unbounded and $\widehat{f}$-invariant. As in previous cases, an application of Proposition \ref{pro:wall-annular} now shows that $f$ is annular, contradicting the hypothesis of the theorem. Thus $\pi(U)$ is inessential, and since $U$ is filled and connected, $\pi(U)$ is an open $f$-invariant topological disk which contains $\pi(U_n)$ and $\pi(U_m)$. It follows from the maximality of $\pi(U_n)$ and $\pi(U_m)$ that $\pi(U_n)=\pi(U)=\pi(U_m)$, as claimed. \end{proof}
\begin{claim}\label{claim:bdfix14} We may assume that the disks $(\pi(U_n))_{n\in \N}$ are either pairwise disjoint or all equal to $\pi(U_0)$. \end{claim} \begin{proof} Assume first that $U_n$ is unbounded for some $n$. Then we may assume that $U_m=U_n$ for each $m\in \N$, and all the required hypotheses hold. Now assume that each $U_n$ is bounded. Since by our hypothesis $\diam(U_n)\to \infty$ as $n\to \infty$, for each $n\in \N$ we may find $m\in \N$ such that $\diam(U_m)>\diam(U_k)$ for all $k\in \{1,\dots,n\}$, so that $\pi(U_m)\neq \pi(U_k)$ for all $k\in \{1,\dots,n\}$. Using this fact, we may extract a subsequence of $(U_n)_{n\in \N}$ which projects to pairwise distinct disks, and these disks must be pairwise disjoint due to the previous claim. \end{proof}
\subsection{Obtaining a gradient-like Brouwer foliation} Since, by Claim \ref{claim:bdfix13}, we are assuming that $\fix(f)$ is totally disconnected, by Proposition \ref{pro:brouwer} we know that there is an oriented foliation $\mathcal{F}$ with singularities $X=\operatorname{Sing}(\mathcal{F})$ and an isotopy $\mathcal{I}=(f_t)_{t\in [0,1]}$ from the identity to $f$ fixing $X$ pointwise, such that $\mathcal{I}$ lifts to the isotopy $\widehat{\mathcal{I}}=(\widehat{f}_t)_{t\in [0,1]}$ from the identity to $\widehat{f}$ and such that for any $z\in \T^2\setminus X$ the arc $(f_t(z))_{t\in [0,1]}$ is homotopic with fixed endpoints in $\T^2\setminus X$ to a positively transverse (to $\mathcal{F}$) arc. The set $\widehat{X} = \pi^{-1}(X)\subset \fix(\widehat{f})$ is the set of singularities of $\widehat{\mathcal{F}}$ and is fixed by $\widehat{\mathcal{I}}$. For each $z\in \R^2\setminus \widehat{X}$, the arc $(\widehat{f}_t(z))_{t\in [0,1]}$ is homotopic with fixed endpoints in $\R^2\setminus \widehat{X}$ to a positively transverse (to $\widehat{\mathcal{F}}$) arc.
Our purpose now is to apply Proposition \ref{pro:zerohull} to show that $\mathcal{F}$ is gradient-like. In view of Remark \ref{rem:zerohull}, it suffices to find $v_1,v_2,v_3,v_4\in \Z^2_*$ such that $0$ is in the interior of the convex hull of $\{v_1,v_2,v_3,v_4\}$ and for each $i$ a positively transverse arc $\gamma_i$ in $\R^2\setminus \widehat{X}$ such that $\gamma_i(1)-\gamma_i(0)=v_i$. Indeed this would imply that $\mathcal{C}(z)$ contains $0$ in the interior of its convex hull for some $z\in \T^2$ and thus that $\mathcal{F}$ is gradient-like.
\begin{claim} For each $v\in \Z^2_*$ there is $x\in \R^2\setminus \widehat{X}$ and $w\in \Z^2_* \setminus \R v$ such that there are positively transverse arcs from $x$ to $x+w$ and from $x$ to $x-w$. \end{claim} \begin{proof} Since $\R^2\setminus \widehat{X}$ is connected, for any $z\in \R^2\setminus \widehat{X}$ we may find an arc $\gamma$ joining $z$ to $z+v$ in $\R^2\setminus \widehat{X}$. Let $\Gamma = \bigcup_{n\in \Z} [\gamma]+n{v}$. By Claim \ref{claim:bdfix5}, $\diam(\proj_{v^\perp}(U_n))\to \infty$. In particular, given $m\in \N$ there is $n=n_m$ such that $\diam(\proj_{v^\perp}(U_n))>\diam(\proj_{v^\perp}(\Gamma))+(m+1)\norm{v^\perp}$. It follows that $U_n+iv^\perp$ intersects $\Gamma$ for at least $m$ consecutive values of $i$. Thus there is a set $\{(i_1, j_1), \dots, (i_m, j_m)\}\subset \Z^2$ such that $U_n+i_kv^\perp+j_kv$ intersects $[\gamma]$ for $1\leq k\leq m$ and $i_k = i_0+k$ for some $i_0\in \Z$. Choosing one point in each intersection $[\gamma]\cap (U_n+i_kv^\perp+j_kv)$, we get $m$ points in $[\gamma]$, and so by a pigeonhole argument two of them must be a distance less than $r_m = \sqrt{2}\diam[\gamma]/\floor{\sqrt{m}}$ apart (where $\floor{y}$ is the largest integer not greater than $y$). Thus one can find $x_m\in [\gamma]$ such that $B_{r_m}(x_m)$ intersects $U_n+i_kv^\perp+j_kv$ for two different values of $k$. Note that this implies that $B_{r_m}(x_m)$ intersects $U_n+u$ and $U_n+u'$ for two different elements $u,u'\in \Z^2$ such that $u'-u\notin\R v$ (because $i_k\neq i_{k'}$ if $k\neq k'$).
Letting $x$ be a limit point of $(x_m)_{m\in \N}$ one sees that for any $r>0$, there are arbitrarily large values of $n$ for which there are at least two different elements $u,u'\in \Z^2$ such that $B_r(x)$ intersects both $U_n+u$ and $U_n+u'$, and $u'-u\notin \R v$.
In particular, since $U_n+u\subset \Omega(\widehat{f})$ for all $u\in \Z^2$ and $n\in \N$, this implies that $x$ is nonwandering for $\widehat{f}$, so by Proposition \ref{pro:pta} there is a neighborhood $V_x$ of $x$ such that every point of $V_x$ can be joined to any other point of $V_x$ with a positively transverse arc. But then we can find $n\in \N$ and $u,u'\in \Z^2$ such that $w=u'-u\notin \R v$ and $V_x$ intersects both $U_n+u$ and $U_n+u'$, so that $U_n+u'$ intersects both $V_x$ and $V_x+w$. If $z_0\in V_x\cap (U_n+u')$ and $z_1\in (V_x+w)\cap (U_n+u')$ then we can find a positively transverse arc $\sigma$ from $x$ to $z_0$ because of our choice of $V_x$, and we may find a positively transverse arc $\alpha$ from $z_0\in U_n+u'$ to $z_1 \in U_n+u'$ because $U_n+u'$ is connected and contained in $\Omega(\widehat{f})$ (see Proposition \ref{pro:pta}). Finally, we may find a positively transverse arc $\eta$ from $z_1-w\in V_x$ to $x$, so that $\eta+w$ is a positively transverse arc from $z_1$ to $x+w$. Therefore $\sigma*\alpha*(\eta+w)$ is a positively transverse arc $\widehat{\alpha}$ from $x$ to $x+w$. The same argument can be repeated in the opposite direction, obtaining a positively transverse arc $\widehat{\beta}$ from $x+w$ back to $x$, which translated by $-w$ provides a positively transverse arc from $x$ to $x-w$. This proves the claim. \end{proof}
As explained before its statement, the previous claim together with Proposition \ref{pro:zerohull} allows us to conclude the following: \begin{claim} The foliation $\mathcal{F}$ is gradient-like. \end{claim}
\subsection{Linking of the sets $U_n$ and points of $\widehat{X}$}
Since $\mathcal{F}$ is gradient-like, every regular leaf $\Gamma$ of $\widehat{\mathcal{F}}$ connects two different elements of $\widehat{X}$ and there is a uniform bound $\diam(\Gamma)<M_0$.
\begin{claim} We may assume that $U_n\cap \widehat{X}\neq \emptyset$ for each $n\in \N$. \end{claim}
\begin{proof} By Claim \ref{claim:bdfix-nw}, $U_n$ has no wandering points. By Proposition \ref{pro:large-X} there is $M_1$ such that if $\diam(U_n)>M_1$, then $U_n\cap \widehat{X}\neq \emptyset$. Since $\diam(U_n)\to \infty$, by extracting a subsequence and re-indexing, we may assume that $U_n\cap \widehat{X}\neq \emptyset$ for all $n$. \end{proof}
\begin{claim}\label{claim:bdfix2} For any $n\in \N$ and $v\in \Z^2$, every regular leaf of $\widehat{\mathcal{F}}$ that intersects $\cl(U_n+v)$ has one endpoint in $U_n+v$. \end{claim} \begin{proof}
Since $U_n$ intersects $\widehat{X}$ and $\widehat{X}$ is $\Z^2$-invariant, it follows that $U_n+v$ intersects $\widehat{X}$, and the claim follows from Proposition \ref{pro:inter-endpoint} (recalling that $\widehat{f}|_{U_n+v}$ is nonwandering by Claim \ref{claim:bdfix-nw}). \end{proof}
\subsection{Boundedness of $U_n$}
\begin{claim}\label{claim:bdfix6} $U_n$ is bounded for each $n\in \N$. \end{claim}
Suppose for contradiction that $U=U_n$ is unbounded for some $n$. Then we may have assumed from the beginning of the proof of the theorem that $U=U_n$ for all $n$, since the hypotheses hold for that case. In particular, Claim \ref{claim:bdfix5} implies that $\diam \proj_v(U)=\infty$ for any $v\in \Z^2_*$. From now until the end of this subsection, we seek a contradiction to prove Claim \ref{claim:bdfix6}.
Let $W$ be the union of $U$ with all leaves of $\widehat{\mathcal{F}}$ that intersect $U$. Observe that $W$ is open.
\begin{claim} $W\cap (W+v)=\emptyset$ for each $v\in \Z^2_*$. \end{claim}
\begin{proof} Suppose for contradiction that this is not the case. Then some leaf $\Gamma$ of $\widehat{\mathcal{F}}$ intersects both $U$ and $U+v$. Thus $\Gamma$ joins a point $p\in U\cap \widehat{X}$ to a point $q\in (U\cap \widehat{X})+v$, where $0\neq v\in \Z^2$. Let $\gamma$ be the subarc of $\Gamma$ joining $p$ to $q$, and let $\sigma$ be any arc in $U+v$ joining $q$ to $p+v$. Then $\gamma*\sigma$ is an arc joining $p$ to $p+v$. Hence $\Theta=\bigcup_{n\in \Z} [\gamma*\sigma]+nv$ is a closed connected set that separates $\R^2$, and $\proj_{v^\perp}(\Theta)$ is bounded. The fact that $\diam \proj_{v^\perp}(U)=\infty$ implies that $U+mv^\perp$ intersects $\Theta$ for some $m\in \Z$, $m\neq 0$, and so $U+mv^\perp$ intersects $[\gamma*\sigma]+nv$ for some $n\in \Z$. But $U+mv^\perp$ is disjoint from $[\sigma]+nv \subset U+(n+1)v$, since otherwise $U+mv^\perp-(n+1)v$ would intersect $U$, contradicting the fact that $\pi(U)$ is inessential (noting that $mv^\perp-(n+1)v\neq 0$, since $m\neq 0$). Therefore $U+mv^\perp$ intersects $[\gamma]+nv$. But since $\Gamma+nv$ is a leaf of $\widehat{\mathcal{F}}$ and it contains $\gamma+nv$, it follows from Claim \ref{claim:bdfix2} that $\Gamma+nv$ has one endpoint in $U+mv^\perp$. On the other hand we know that the endpoints of $\Gamma+nv$ are $p+nv\in U+nv$ and $q+nv \in U+(n+1)v$, neither of which belongs to $U+mv^\perp$ (since $m\neq 0$ and $\pi(U)$ is inessential). This contradiction shows that $W\cap (W+v)=\emptyset$ for each $v\in \Z^2_*$.
\end{proof}
Now let $\mathcal{O}=\bigcup_{n\in \Z} \widehat{f}^n(W)$. Note that $\mathcal{O}$ is open, connected and $\widehat{f}$-invariant.
\begin{claim} $\mathcal{O}\cap (U+v)=\emptyset$ for each $v\in \Z^2_*$ \end{claim} \begin{proof} Indeed, since $W\cap (W+v)=\emptyset$, in particular $W\cap (U+v)=\emptyset$. Since $U+v$ is invariant, it follows that $\widehat{f}^k(W)\cap (U+v)=\emptyset$ for any $k\in \Z$, and the claim follows. \end{proof}
\begin{claim} $\mathcal{O}\cap (\mathcal{O}+v)=\emptyset$ for each $v\in \Z^2_*$ (i.e.\ $\pi(\mathcal{O})$ is inessential). \end{claim} \begin{proof} If $\mathcal{O}\cap (\mathcal{O}+v)\neq \emptyset$ and $v\neq 0$, since $\mathcal{O}$ is connected it contains an arc $\sigma$ joining some point $z\in \mathcal{O}$ to $z+v\in \mathcal{O}$. If $\Theta=\bigcup_{n\in \Z} [\sigma]+nv$, then $\Theta$ is bounded in the $v^\perp$ direction, and the fact that $\diam(P_{v^\perp}(U))=\infty$ implies that $U+mv^\perp$ intersects $\Theta$ for some $m\in \Z$, $m\neq 0$. But then $U+mv^\perp$ intersects $[\sigma]+nv$ for some $n\in \Z$, so that $U+mv^\perp -nv$ intersects $[\sigma]\subset \mathcal{O}$. Since $mv^\perp-nv\neq 0$, this contradicts the previous claim. \end{proof}
\begin{claim}\label{claim:bdfix8} If a regular leaf $\Gamma$ of $\widehat{\mathcal{F}}$ intersects $U$, then it is contained in $U$. \end{claim} \begin{proof} Let $\widetilde{\mathcal{O}} = \operatorname{Fill}(\mathcal{O})$. It follows from the properties of $\mathcal{O}$ that $\widetilde{\mathcal{O}}$ is simply connected, $\widehat{f}$-invariant, and $\pi(\widetilde{\mathcal{O}})$ is still inessential. Thus $\widetilde{\mathcal{O}}$ is a simply connected open $\widehat{f}$-invariant set that projects to an inessential set. Recalling that we are assuming (from Claim \ref{claim:bdfix7}) the maximality of $U$ with respect to these properties, and since $U\subset \widetilde{\mathcal{O}}$, we conclude that $\widetilde{\mathcal{O}}=U$. If a leaf $\Gamma$ of $\widehat{\mathcal{F}}$ intersects $U$, then by the definition of $W$ and $\widetilde{\mathcal{O}}$ it follows that $[\Gamma]\subset W\subset \mathcal{O}\subset \widetilde{\mathcal{O}} = U$, proving our claim. \end{proof}
\begin{claim}\label{claim:bdfix9} $\bd U\subset \widehat{X}$ \end{claim} \begin{proof} If this is not the case, there is some regular leaf $\Gamma$ of $\widehat{\mathcal{F}}$ such that $[\Gamma]\cap \bd U\neq \emptyset$. But Claim \ref{claim:bdfix2} implies that $\Gamma$ has one endpoint in $U$, and thus by our previous claim $\Gamma$ is entirely in $U$, a contradiction. \end{proof}
The last claim is the sought contradiction: since $\widehat{X}$ is totally disconnected, it cannot contain the boundary of a topological disk. This contradiction completes the proof of Claim \ref{claim:bdfix6}, i.e.\ that $U_n$ is bounded for each $n$.
\subsection{End of the proof of Theorem \ref{th:bdfix}}
Now that we know that each $U_n$ is bounded, Claim \ref{claim:bdfix14} implies that the sets $(\pi(U_n))_{n\in \N}$ are pairwise disjoint. To finish the proof, we will repeat the arguments from the proof of Claim \ref{claim:bdfix6}, with the difference that the sets $U_n$ are now known to be bounded, so the proofs of the corresponding claims change (note that we could not have used the arguments below earlier, since they require the sets $\pi(U_n)$ to be pairwise disjoint, which in turn required knowing that each $U_n$ is bounded).
Recall that we are assuming that \begin{itemize} \item The sets $(U_n)_{n\in \N}$ are maximal in the sense of Claim \ref{claim:bdfix7}. \item The sets $(\pi(U_n))_{n\in \N}$ are pairwise disjoint. \item $U_n\cap \widehat{X}\neq \emptyset$ for each $n\in \N$. \end{itemize}
Let $W$ be the union of $U_1$ with all leaves of $\widehat{\mathcal{F}}$ that intersect $U_1$. Observe that $W$ is open.
\begin{claim} $W\cap (W+v)=\emptyset$ for each $v\in \Z^2_*$. \end{claim}
\begin{proof} Suppose for contradiction that this is not the case. Then some leaf $\Gamma$ of $\widehat{\mathcal{F}}$ intersects both $U_1$ and $U_1+v$. Thus $\Gamma$ joins a point $p\in U_1\cap \widehat{X}$ to a point $q\in (U_1\cap \widehat{X})+v$, where $0\neq v\in \Z^2$. Let $\gamma$ be the subarc of $\Gamma$ joining $p$ to $q$, and let $\sigma$ be any arc in $U_1+v$ joining $q$ to $p+v$. Then $\gamma*\sigma$ is an arc joining $p$ to $p+v$. Letting $\Theta=\bigcup_{n\in \Z} [\gamma*\sigma]+nv$, the fact that $\diam \proj_{v^\perp}(U_n)\to \infty$ as $n\to \infty$ implies that for any sufficiently large $n$, there is $w\in \Z^2$ such that $U_n+w$ intersects $\Theta$, and so $U_n+w$ intersects $[\gamma*\sigma]+kv$ for some $k\in \Z$. But $U_n+w$ is disjoint from $[\sigma]+kv \subset U_1+(k+1)v$, since otherwise $U_1+(k+1)v-w$ would intersect $U_n$, contradicting the fact that $\pi(U_1)$ and $\pi(U_n)$ are disjoint. Therefore $U_n+w$ intersects $[\gamma]+kv$. But since $\Gamma+kv$ is a leaf of $\widehat{\mathcal{F}}$ and it contains $\gamma+kv$, it follows from Claim \ref{claim:bdfix2} that $\Gamma+kv$ has one endpoint in $U_n+w$. On the other hand we know that the endpoints of $\Gamma+kv$ are $p+kv\in U_1+kv$ and $q+kv \in U_1+(k+1)v$, neither of which belongs to $U_n+w$ (again, because $\pi(U_1)$ and $\pi(U_n)$ are disjoint). This contradiction shows that $W\cap (W+v)=\emptyset$ for each $v\in \Z^2_*$. \end{proof}
Now let $\mathcal{O}=\bigcup_{n\in \Z} \widehat{f}^n(W)$. Note that $\mathcal{O}$ is open, connected and $\widehat{f}$-invariant.
\begin{claim} $\mathcal{O}\cap (U_1+v)=\emptyset$ for each $v\in \Z^2_*$ \end{claim} \begin{proof} Indeed, since $W\cap (W+v)=\emptyset$, in particular $W\cap (U_1+v)=\emptyset$. Since $U_1+v$ is invariant, it follows that $\widehat{f}^k(W)\cap (U_1+v)=\emptyset$ for any $k\in \Z$, and the claim follows. \end{proof}
\begin{claim} $\mathcal{O}\cap (\mathcal{O}+v)=\emptyset$ for each $v\in \Z^2_*$ (i.e.\ $\pi(\mathcal{O})$ is inessential). \end{claim}
\begin{figure}
\caption{$U_n$ has a translate intersecting $\Theta$ for at most two values of $n$.}
\label{fig:claimO}
\end{figure} \begin{proof} If $\mathcal{O}\cap (\mathcal{O}+v)\neq \emptyset$ and $v\neq 0$, then $W\cap (\mathcal{O}+v)\neq \emptyset$. This means that there are leaves $\Gamma_1$ and $\Gamma_2$ of $\widehat{\mathcal{F}}$ such that $\Gamma_1$ has endpoints $p_1,q_1\in \widehat{X}$ with $p_1\in U_1$, and $\Gamma_2$ has endpoints $p_2,q_2\in \widehat{X}$ with $p_2\in U_1+v$, and there is an integer $k$ such that $\widehat{f}^k(\Gamma_2)\cap \Gamma_1\neq \emptyset$. Let $\gamma_1$ be a subarc of $\Gamma_1$ from $p_1$ to some intersection point $z\in \widehat{f}^k(\Gamma_2)\cap \Gamma_1$, and $\gamma_2$ a subarc of $\widehat{f}^k(\Gamma_2)$ from $z$ to $p_2$. Finally let $\sigma$ be an arc in $U_1+v$ joining $p_2$ to $p_1+v$.
As in the previous arguments, the fact that $\diam(P_{v^\perp}(U_n))\to \infty$ as $n\to \infty$ implies that when $n$ is large enough, there is $w\in \Z^2$ such that $U_n+w$ intersects $[\gamma_1*\gamma_2*\sigma]+mv$ for some $m\in \Z$, and as before, the fact that $\pi(U_n)\neq \pi(U_1)$ implies that $U_n+w$ is disjoint from $[\sigma]+mv$, so $U_n+w$ intersects either $[\gamma_1]+mv$ or $[\gamma_2]+mv$. This means that, if $w'=w-mv$, then $U_n+w'$ intersects either $\Gamma_1$ or $\widehat{f}^k(\Gamma_2)$. But since $U_n+w'$ is invariant, this implies that $U_n+w'$ intersects either $\Gamma_1$ or $\Gamma_2$. Again, Claim \ref{claim:bdfix2} implies that one of the endpoints of $\Gamma_1$ or $\Gamma_2$ is in $U_n+w'$. Since $\pi(p_1)\in \pi(U_1)$, which is disjoint from $\pi(U_n)$, we have $\pi(p_1)\notin \pi(U_n)$, and similarly $\pi(p_2)\notin \pi(U_n)$; it follows that one of the points $q_1$ or $q_2$ is in $U_n+w'$. Without loss of generality, suppose that $q_1\in U_n+w'$.
Repeating the previous argument with another (sufficiently large) integer $n'> n$, we conclude that one of the points $q_1$ or $q_2$ is in $U_{n'}+w''$ for some $w''\in \Z^2$. Since $q_1\in U_n+w'$ which is disjoint from $U_{n'}+w''$ (because $\pi(U_{n'})\neq \pi(U_n)$) we conclude that $q_2\in U_{n'}+w''$. See Figure \ref{fig:claimO}.
But repeating this argument a third time, with some $n''>n'$, we conclude that there is $w'''\in \Z^2$ such that $U_{n''}+w'''$ contains $q_1$ or $q_2$, and this is not possible since $U_{n''}+w'''$ is disjoint from both $U_{n'}+w''$ and $U_n+w'$. This contradiction proves the claim.
\end{proof}
The next two steps are proved identically to Claims \ref{claim:bdfix8} and \ref{claim:bdfix9}. \begin{claim} If a regular leaf $\Gamma$ of $\widehat{\mathcal{F}}$ intersects $U_1$, then it is contained in $U_1$. \end{claim}
\begin{claim} $\bd U_1\subset \widehat{X}$ \end{claim}
Again, the last claim yields a contradiction, since $\widehat{X}$ is totally disconnected and therefore cannot contain the boundary of the topological disk $U_1$. This concludes the proof of Theorem \ref{th:bdfix}.
\end{document}
Grade 6 Narrative
Grade 6 begins with a unit on reasoning about area and understanding and applying concepts of surface area. It is common to begin the year by reviewing the arithmetic learned in previous grades, but starting instead with a mathematical idea that students haven't seen before sets up opportunities for students to surprise the teacher and themselves with the connections they make. Instead of front-loading review and practice from prior grades, these materials incorporate opportunities to practice elementary arithmetic concepts and skills through warm-ups, in the context of instructional tasks, and in practice problems as they are reinforcing the concepts they are learning in the unit.
One of the design principles of these materials is that students should encounter plenty of examples of a mathematical or statistical idea in various contexts before that idea is named and studied as an object in its own right. For example, in the first unit, students will generalize arithmetic by writing simple expressions like \(\frac12 bh\) and \(6s^2\) before they study algebraic expressions as a class of objects in the sixth unit. Sometimes this principle is put into play several units before a concept is developed more fully, and sometimes in the first several lessons of a unit, where students have a chance to explore ideas informally and concretely, building toward a more formal and abstract understanding later in the unit.
Unit 1: Area and Surface Area
Work with area in grade 6 draws on earlier work with geometry and geometric measurement. Students began to learn about two- and three-dimensional shapes in kindergarten, and continued this work in grades 1 and 2, composing, decomposing, and identifying shapes. Students' work with geometric measurement began with length and continued with area. Students learned to "structure two-dimensional space," that is, to see a rectangle with whole-number side lengths as composed of an array of unit squares or composed of iterated rows or iterated columns of unit squares. In grade 3, students distinguished between perimeter and area. They connected rectangle area with multiplication, understanding why (for whole-number side lengths) multiplying the side lengths of a rectangle yields the number of unit squares that tile the rectangle. They used area diagrams to represent instances of the distributive property. In grade 4, students applied area and perimeter formulas for rectangles to solve real-world and mathematical problems, and learned to use protractors. In grade 5, students extended the formula for the area of rectangles to rectangles with fractional side lengths.
In grade 6, students extend their reasoning about area to include shapes that are not composed of rectangles. Doing this draws on abilities developed in earlier grades to compose and decompose shapes, for example, to see a rectangle as composed of two congruent right triangles. Through activities designed and sequenced to allow students to make sense of problems and persevere in solving them (MP1), students build on these abilities and their knowledge of areas of rectangles to find the areas of polygons by decomposing and rearranging them to make figures whose areas they can determine (MP7). They learn strategies for finding areas of parallelograms and triangles, and use regularity in repeated reasoning (MP8) to develop formulas for these areas, using geometric properties to justify the correctness of these formulas. They use these formulas to solve problems. They understand that any polygon can be decomposed into triangles, and use this knowledge to find areas of polygons. Students find the surface areas of polyhedra with triangular and rectangular surfaces. They study, assemble, and draw nets for polyhedra and use nets to determine surface areas. Throughout, they discuss their mathematical ideas and respond to the ideas of others (MP3, MP6).
Because grade 6 students will be writing algebraic expressions and equations involving the letter \(x\) and \(x\) is easily confused with \(\times\), these materials use the "dot" notation, e.g., \(2 \boldcdot 3\), for multiplication instead of the "cross" notation, e.g., \(2 \times 3\). The dot notation will be new for many students, and they will need explicit guidance in using it.
Many of the lessons in this unit ask students to work on geometric figures that are not set in a real-world context. This design choice respects the significant intellectual work of reasoning about area. Tasks set in real-world contexts that involve areas of polygons are often contrived and hinder rather than help understanding. Moreover, mathematical contexts are legitimate contexts that are worthy of study. Students do have an opportunity at the end of the unit to tackle a real-world application (MP2, MP4).
In grade 6, students are likely to need physical tools in order to check that one figure is an identical copy of another where "identical copy" is defined as follows:
One figure is an identical copy of another if one can be placed on top of the other so that they match up exactly.
In grade 8, students will understand "identical copy of" as "congruent to" and understand congruence in terms of rigid motions, that is, motions such as reflection, rotation, and translation. In grade 6, students do not have any way to check for congruence except by inspection, but it is not practical to cut out and stack every pair of figures one sees. Tracing paper is an excellent tool for verifying that figures "match up exactly," and students should have access to this and other tools at all times in this unit. Thus, each lesson plan suggests that each student should have access to a geometry toolkit, which contains tracing paper, graph paper, colored pencils, scissors, and an index card to use as a straightedge or to mark right angles. Providing students with these toolkits gives opportunities for students to develop abilities to select appropriate tools and use them strategically to solve problems (MP5). Note that even students in a digitally enhanced classroom should have access to such tools; apps and simulations should be considered additions to their toolkits, not replacements for physical tools. In this grade, all figures are drawn and labeled so that figures that look congruent actually are congruent; in later grades when students have the tools to reason about geometric figures more precisely, they will need to learn that visual inspection is not sufficient for determining congruence. Also note that all arguments laid out in this unit can (and should) be made more precise in later grades, as students' geometric understanding deepens.
Progression of Disciplinary Language
In this unit, teachers can anticipate students using language for mathematical purposes such as comparing, explaining, and describing. Throughout the unit, students will benefit from routines designed to grow robust disciplinary language, both for their own sense-making and for building shared understanding with peers. Teachers can formatively assess how students are using language in these ways, particularly when students are using language to:
Compare
geometric patterns and shapes (Lesson 1)
strategies for finding areas of shapes (Lesson 3) and polygons (Lesson 11)
the characteristics of prisms and pyramids (Lesson 13)
the measures and units of 1-, 2-, and 3-dimensional attributes (Lesson 16)
representations of area and volume (Lesson 17)
Explain
how to find areas by composing (Lesson 3)
strategies used to find areas of parallelograms (Lesson 4) and triangles (Lesson 8)
how to determine the area of a triangle using its base and height (Lesson 9)
strategies to find surface areas of polyhedra (Lesson 14)
Describe
observations about decomposition of parallelograms (Lesson 7)
information needed to find the surface area of rectangular prisms (Lesson 12)
the features of polyhedra and their nets (Lesson 13)
the features of polyhedra (Lesson 15)
relationships among features of a tent and the amount of fabric needed for the tent (Lesson 19)
In addition, students are expected to justify claims about the base, height, or area of shapes, generalize about the features of parallelograms and polygons, interpret relevant information for finding the surface area of rectangular prisms, and represent the measures and units of 2- and 3-dimensional figures. Over the course of the unit, teachers can support students' mathematical understandings by amplifying (not simplifying) language used for all of these purposes as students demonstrate and develop ideas.
The table shows lessons where new terminology is first introduced, including when students are expected to understand the word or phrase receptively and when students are expected to produce the word or phrase in their own speaking or writing. Terms from the glossary appear bolded. Teachers should continue to support students' use of a new term in the lessons that follow where it was first introduced.
new terminology
6.1.1 area
6.1.2 compose
decompose
rearrange
6.1.3 shaded
6.1.4 parallelogram
opposite (sides or angles) quadrilateral
6.1.5 base (of a parallelogram or triangle)
6.1.6 horizontal
6.1.7 identical parallelogram
6.1.9 opposite vertex
6.1.10 vertex
6.1.11 polygon horizontal
6.1.12 face
surface area area
6.1.13 polyhedron
base (of a prism or pyramid)
three-dimensional polygon
6.1.15 prism
6.1.16 volume
quantity two-dimensional
6.1.17 squared
cubed
edge length
6.1.18 value (of an expression) squared
6.1.19 estimate
description surface area
Unit 2: Introducing Ratios
Work with ratios in grade 6 draws on earlier work with numbers and operations. In elementary school, students worked to understand, represent, and solve arithmetic problems involving quantities with the same units. In grade 4, students began to use two-column tables, e.g., to record conversions between measurements in inches and yards. In grade 5, they began to plot points on the coordinate plane, building on their work with length and area. These early experiences were a brief introduction to two key representations used to study relationships between quantities, a major focus of work that begins in grade 6 with the study of ratios.
Starting in grade 3, students worked with relationships that can be expressed in terms of ratios and rates (e.g., conversions between measurements in inches and in yards), however, they did not use these terms. In grade 4, students studied multiplicative comparison. In grade 5, they began to interpret multiplication as scaling, preparing them to think about simultaneously scaling two quantities by the same factor. They learned what it means to divide one whole number by another, so they are well equipped to consider the quotients \(\frac{a}{b}\) and \(\frac{b}{a}\) associated with a ratio \(a : b\) for non-zero whole numbers \(a\) and \(b\).
In this unit, students learn that a ratio is an association between two quantities, e.g., "1 teaspoon of drink mix to 2 cups of water." Students analyze contexts that are often expressed in terms of ratios, such as recipes, mixtures of different paint colors, constant speed (an association of time measurements with distance measurements), and uniform pricing (an association of item amounts with prices).
One of the principles that guided the development of these materials is that students should encounter examples of a mathematical concept in various contexts before the concept is named and studied as an object in its own right. The development of ratios, equivalent ratios, and unit rates in this unit and the next unit is in accordance with that principle. In this unit, equivalent ratios are first encountered in terms of multiple batches of a recipe and "equivalent" is first used to describe a perceivable sameness of two ratios, for example, two mixtures of drink mix and water taste the same or two mixtures of red and blue paint are the same shade of purple. Building on these experiences, students analyze situations involving both discrete and continuous quantities, and involving ratios of quantities with units that are the same and that are different. Several lessons later, equivalent acquires a more precise meaning (MP6): All ratios that are equivalent to \(a:b\) can be made by multiplying both \(a\) and \(b\) by the same non-zero number (note that students are not yet considering negative numbers).
This unit introduces discrete diagrams and double number line diagrams, representations that students use to support thinking about equivalent ratios before their work with tables of equivalent ratios.
Initially, discrete diagrams are used because they are similar to the kinds of diagrams students might have used to represent multiplication in earlier grades. Next come double number line diagrams. These can be drawn more quickly than discrete diagrams, but are more similar to tables while allowing reasoning based on the lengths of intervals on the number lines. After some work with double number line diagrams, students use tables to represent equivalent ratios. Because equivalent pairs of ratios can be written in any order in a table and there is no need to attend to the distance between values, tables are the most flexible and concise of the three representations for equivalent ratios, but they are also the most abstract. Use of tables to represent equivalent ratios is an important stepping stone toward use of tables to represent linear and other functional relationships in grade 8 and beyond. Because of this, students should learn to use tables to solve all kinds of ratio problems, but they should always have the option of using discrete diagrams and double number line diagrams to support their thinking.
When a ratio involves two quantities with the same units, we can ask and answer questions about ratios of each quantity and the total of the two. Such ratios are sometimes called "part-part-whole" ratios and are often used to introduce ratio work. However, students often struggle with them so, in this unit, the study of part-part-whole ratios occurs at the end. (Note that tape diagrams are reserved for ratios in which all quantities have the same units.) The major use of part-part-whole ratios occurs with certain kinds of percentage problems, which comes in the next unit.
On using the terms ratio, rate, and proportion. In these materials, a quantity is a measurement that is or can be specified by a number and a unit, e.g., 4 oranges, 4 centimeters, "my height in feet," or "my height" (with the understanding that a unit of measurement will need to be chosen). The term ratio is used to mean an association between two or more quantities and the fractions \(\frac{a}{b}\) and \(\frac{b}{a}\) are never called ratios. Ratios of the form \(1 : \frac{b}{a}\) or \(\frac{a}{b} : 1\) (which are equivalent to \(a : b\)) are highlighted as useful but \(\frac{a}{b}\) and \(\frac{b}{a}\) are not identified as unit rates for the ratio \(a : b\) until the next unit. However, the meanings of these fractions in contexts is very carefully developed. The word "per" is used with students in interpreting a unit rate in context, as in "\$3 per ounce," and "at the same rate" is used to signify a situation characterized by equivalent ratios.
In the next unit, students learn the term "unit rate" and that if two ratios \(a : b\) and \(c : d\) are equivalent, then the unit rates \(\frac{a}{b}\) and \(\frac{c}{d}\) are equal.
The terms proportion and proportional relationship are not used anywhere in the grade 6 materials. A proportional relationship is a collection of equivalent ratios, and such collections are objects of study in grade 7. In high school—after their study of ratios, rates, and proportional relationships—students discard the term "unit rate," referring to \(a\) to \(b\), \(a:b\), and \(\frac{a}{b}\) as "ratios."
In this unit, teachers can anticipate students using language for mathematical purposes such as interpreting, explaining, and comparing. Throughout the unit, students will benefit from routines designed to grow robust disciplinary language, both for their own sense-making and for building shared understanding with peers. Teachers can formatively assess how students are using language in these ways, particularly when students are using language to:
Interpret
ratio notation (Lesson 1)
different representations of ratios (Lesson 6)
situations involving equivalent ratios (Lesson 8)
situations with different rates (Lesson 9)
tables of equivalent ratios (Lessons 11 and 12)
questions about situations involving ratios (Lesson 17)
Explain
features of ratio diagrams (Lesson 2)
reasoning about equivalence (Lesson 4)
reasoning about equivalent rates (Lesson 10)
reasoning with reference to tables (Lesson 14)
reasoning with reference to tape diagrams (Lesson 15)
Compare
situations with and without equivalent ratios (Lesson 3)
representations of ratios (Lessons 6 and 13)
situations with different rates (Lessons 9 and 12)
situations with same rates and different rates (Lesson 10)
representations of ratio and rate situations (Lesson 16)
In addition, students are expected to describe and represent ratio associations, represent doubling and tripling of quantities in a ratio, represent equivalent ratios, justify whether ratios are or aren't equivalent and why information is needed to solve a ratio problem, generalize about equivalent ratios and about the usefulness of ratio representations, and critique representations of ratios.
6.2.1 ratio
___ to ___
___ for every ___
6.2.2 diagram
6.2.3 recipe
same taste ratio
6.2.4 mixture
same color
check (an answer) batch
6.2.5 equivalent ratios
6.2.6 double number line diagram
tick marks
representation diagram
6.2.7 per
6.2.8 unit price
how much for 1
at this rate double number line
6.2.9 meters per second
constant speed
6.2.10 same rate equivalent ratios
6.2.11 table
6.2.14 calculation per
6.2.15 tape diagram
Unit 3: Unit Rates and Percentages
In the previous unit, students began to develop an understanding of ratios and rates. They started to describe situations using terms such as "ratio," "rate," "equivalent ratios," "per," "constant speed," and "constant rate" (MP6). They understood specific instances of the idea that \(a : b\) is equivalent to every other ratio of the form \(sa : sb\), where \(s\) is a positive number. They learned that "at this rate" or "at the same rate" signals a situation that is characterized by equivalent ratios. Although the usefulness of ratios of the form \(\frac{a}{b} : 1\) and \(1 : \frac{b}{a}\) was highlighted, the term "unit rate" was not introduced.
In this unit, students find the two values \(\frac{a}{b}\) and \(\frac{b}{a}\) that are associated with the ratio \(a : b\), and interpret them as rates per 1. For example, if a person walks 13 meters in 10 seconds at a constant rate, that means they walked at a speed of \(\frac{13}{10}\) meters per 1 second and a pace of \(\frac{10}{13}\) seconds per 1 meter.
Students learn that one of the two values (\(\frac{a}{b}\) or \(\frac{b}{a}\)) may be more useful than the other in reasoning about a given situation. They find and use rates per 1 to solve problems set in contexts (MP2), attending to units and specifying units in their answers. For example, given item amounts and their costs, which is the better deal? Or given distances and times, which object is moving faster? Measurement conversions provide other opportunities to use rates.
Students observe that if two ratios \(a : b\) and \(c : d\) are equivalent, then \(\frac{a}{b} = \frac{c}{d}\). The values \(\frac{a}{b}\) and \(\frac{c}{d}\) are called unit rates because they can be interpreted in the context from which they arose as rates per unit. Students note that in a table of equivalent ratios, the entries in one column are produced by multiplying a unit rate by the corresponding entries in the other column. Students learn that "percent" means "per 100" and indicates a rate. Just as a unit rate can be interpreted in context as a rate per 1, a percentage can be interpreted in the context from which it arose as a rate per 100. For example, suppose a beverage is made by mixing 1 cup of juice with 9 cups of water. The amount of juice in 20 cups of the beverage is 2 cups, and 10 percent of the beverage is juice. Interpreting the 10 as a rate: "there are 10 cups of juice per 100 cups of beverage" or, more generally, "there are 10 units of juice per 100 units of beverage." The percentage—and the rate—indicate equivalent ratios of juice to beverage, e.g., 2 cups to 20 cups and 10 cups to 100 cups.
In this unit, tables and double number line diagrams are intended to help students connect percentages with equivalent ratios, and reinforce an understanding of percentages as rates per 100. Students should internalize the meaning of important benchmark percentages, for example, they should connect "75% of a number" with "\(\frac{3}{4}\) times a number" and "0.75 times a number." Note that 75% ("seventy-five per hundred") does not represent a fraction or decimal (which are numbers), but that "75% of a number" is calculated as a fraction of the number or as a decimal times the number.
Work done in grades 4 and 5 supports learning about the concept of a percentage. In grade 5, students understand why multiplying a given number by a fraction less than 1 results in a product that is less than the original number, and why multiplying a given number by a fraction greater than 1 results in a product that is greater than the original number. This understanding of multiplication as scaling comes into play as students interpret, for example,
35% of 2 cups of juice as \(\frac{35}{100} \boldcdot 2\) cups of juice.
250% of 2 cups of juice as \(\frac{250}{100} \boldcdot 2\) cups of juice.
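Evaluating these products makes the effect of the scaling factor concrete: \(\frac{35}{100} \boldcdot 2 = 0.7\), so 35% of 2 cups is 0.7 cups (less than 2 cups because the factor is less than 1), while \(\frac{250}{100} \boldcdot 2 = 5\), so 250% of 2 cups is 5 cups (more than 2 cups because the factor is greater than 1).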
In this unit, teachers can anticipate students using language for mathematical purposes such as interpreting, explaining, and justifying. Throughout the unit, students will benefit from routines designed to grow robust disciplinary language, both for their own sense-making and for building shared understanding with peers. Teachers can formatively assess how students are using language in these ways, particularly when students are using language to:
Interpret
a context in which identifying a unit rate is helpful (Lesson 1)
unit rates in different contexts (Lesson 6)
situations involving constant speed (Lesson 8)
tape diagrams used to represent percentages (Lesson 12)
situations involving measurement, rates, and cost (Lesson 17)
Explain
reasoning for estimating and sorting measurements (Lesson 2)
reasoning about relative sizes of units of measurement (Lesson 3)
how to make decisions using rates (Lesson 9)
reasoning about percentages (Lesson 11)
strategies for finding missing information involving percentages (Lesson 14)
Justify
reasoning about equivalent ratios and unit rates (Lesson 7)
reasoning about finding percentages (Lessons 15 and 16)
reasoning about costs and time (Lesson 17)
In addition, students have opportunities to generalize about unit ratios, unit rates, and percentages from multiple contexts and with reference to benchmark percentages, tape diagrams, and other mathematical representations. Students can also be expected to describe measurements and observations, describe and compare situations involving percentages, compare speeds, compare prices, and critique reasoning about costs and time.
6.3.1 at this rate
6.3.3 order
6.3.5 (good / better / best) deal
rate per 1 unit price
same speed
6.3.6 unit rate gallon
6.3.7 result unit rate
6.3.8 pace speed
(good / better / best) deal
6.3.10 percentage
___% of
6.3.11 tick marks
6.3.12 ___% as much tape diagram
6.3.14 ___% of
6.3.15 regular price
sale price percentage
Unit 4: Dividing Fractions
Work with fractions in grade 6 draws on earlier work in operations and algebraic thinking, particularly the knowledge of multiplicative situations developed in grades 3 to 5, and making use of the relationship between multiplication and division. Multiplicative situations include three types: equal groups; comparisons of two quantities; dimensions of arrays or rectangles. In the equal groups and comparison situations, there are two subtypes, sometimes called the partitive and the quotitive (or measurement) interpretations of division. Students are not expected to identify the three types of situations or use the terms "partitive" or "quotitive." However, they should recognize the associated interpretations of division in specific contexts (MP7).
For example, in an equal groups situation when the group size is unknown, division can be used to answer the question, "How many in each group?" If the number of groups is unknown, division answers the question, "How many groups?" For example, if 12 pounds of almonds are equally shared among several bags:
There are 2 bags. How many pounds in each bag? (partitive)
There are 6 pounds in each bag. How many bags? (quotitive)
In a comparison situation that involves division, the size of one object may be unknown or the relative sizes of two objects may be unknown. For example, when comparing two ropes:
A rope is 12 feet long. It is twice as long as another rope. How long is the second rope? (partitive)
One rope is 12 feet long. One rope is 6 feet long. How many times longer than the second rope is the first rope? (quotitive)
In situations that involve arrays or rectangles, division can be used to find an unknown factor. In an array situation, the unknown is the number of entries in a row or a column; in a rectangle, the unknown is a length or a width measurement. For example, "The area of a rectangle is 12 square feet. One side is 6 feet long. How long is the other side?" If the rectangle is viewed as tiled by an array of 12 unit squares with 6 tiles in each row, this question can be seen as asking for the number of entries in each column.
At the beginning of the unit, students consider how the relative sizes of numerator and denominator affect the size of their quotient. Students first compute quotients of whole numbers, then—without computing—consider the relative magnitudes of quotients that include divisors which are whole numbers, fractions, or decimals, e.g., "Is \(800 \div \frac{1}{10}\) larger than or smaller than \(800 \div 2.5\)?"
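A quick check shows the kind of contrast at stake: \(800 \div \frac{1}{10} = 8000\), while \(800 \div 2.5 = 320\), so dividing by a number less than 1 produces a quotient greater than the dividend, and dividing by a number greater than 1 produces a smaller one.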
The second section of the unit focuses on equal groups and comparison situations. It begins with partitive and quotitive situations that involve whole numbers, represented by tape diagrams and equations. Students interpret the numbers in the two situations (MP2) and consider analogous situations that involve one or more fractions, again accompanied by tape diagrams and equations. Students learn to interpret, represent, and describe these situations, using terminology such as "What fraction of 6 is 2?," "How many 3s are in 12?," "How many fourths are in 3?," "is one-third as long as," "is two-thirds as much as," and "is one-and-one-half times the size of."
The third section concerns computing quotients of fractions. Students build on their work from the previous section by considering quotients related to products of numbers and unit fractions, e.g., "How many 3s in 12?" and "What is \(\frac {1}{3}\) of 12?," to establish that dividing by a unit fraction \(\frac{1}{b}\) is the same as multiplying by its reciprocal \(b\). Building on this and their understanding that \(\frac{a}{b} = a \boldcdot \frac{1}{b}\) (from grade 4), students understand that dividing by a fraction \(\frac{a}{b}\) is the same as multiplying by its reciprocal \(\frac{b}{a}\).
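For instance, one such computation is \(\frac{9}{2} \div \frac{3}{4} = \frac{9}{2} \boldcdot \frac{4}{3} = \frac{36}{6} = 6\), which can be checked against the quotitive interpretation: there are 6 groups of \(\frac{3}{4}\) in \(4\frac{1}{2}\).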
The fourth section returns to interpretations of division in situations that involve fractions. This time, the focus is on using division to find an unknown area or volume measurement. In grade 3, students connected areas of rectangles with multiplication, viewing a rectangle as tiled by an array of unit squares and understanding that, for whole-number side lengths, multiplying the side lengths yields the number of unit squares that tile the rectangle. In grade 5, students extended the formula for the area of rectangles with whole-number side lengths to rectangles with fractional side lengths. For example, they viewed a \(\frac 23\)-by-\(\frac 57\) rectangle as tiled by 10 \(\frac 13\)-by-\(\frac 17\) rectangles, reasoning that 21 such rectangles compose 1 square unit, so the area of one such rectangle is \(\frac {1}{21}\), thus the area of a shape composed of 10 such rectangles is \(\frac {10}{21}\). In a previous grade 6 unit, students used their familiarity with this formula to develop formulas for areas of triangles and parallelograms. In this unit, they return to this formula, using their understanding of it to extend the formula for the volume of a right rectangular prism (developed in grade 5) to right rectangular prisms with fractional side lengths.
The unit ends with two lessons in which students use what they have learned about working with fractions (including the volume formula) to solve problems set in real-world contexts, including a multi-step problem about calculating shipping costs. These require students to formulate appropriate equations that use the four operations or draw diagrams, and to interpret results of calculations in the contexts from which they arose (MP2).
In this unit, teachers can anticipate students using language for mathematical purposes such as interpreting, representing, justifying, and explaining. Throughout the unit, students will benefit from routines designed to grow robust disciplinary language, both for their own sense-making and for building shared understanding with peers. Teachers can formatively assess how students are using language in these ways, particularly when students are using language to:
Interpret and represent
situations involving division (Lessons 2, 3, 12, and 16)
situations involving measurement constraints (Lesson 17)
Justify
reasoning about division and diagrams (Lessons 4 and 5)
strategies for dividing numbers (Lesson 11)
reasoning about volume (Lesson 15)
Explain
how to create and make sense of division diagrams (Lesson 6)
how to represent division situations (Lesson 9)
how to find missing lengths (Lesson 14)
a plan for optimizing costs (Lesson 17)
In addition, students are expected to critique the reasoning of others about division situations and representations, and make generalizations about division by comparing and connecting across division situations, and across the representations used in reasoning about these situations. The Lesson Syntheses in Lessons 2 and 12 offer specific disciplinary language that may be especially helpful for supporting students in navigating the language of important ideas in this unit.
6.4.1 divisor
dividend quotient
6.4.2 equation
interpretation How many groups of ___?
How many ___ in each group?
6.4.3 unknown
equal-sized
6.4.4 whole
6.4.5 relationship
6.4.6 equal-sized
6.4.7 times as ___
fraction of ___
6.4.8 container unknown
6.4.10 reciprocal
observations times as ___
numerator
6.4.11 evaluate
6.4.13 gaps
6.4.14 packed
6.4.17 assumption packed
Unit 5: Arithmetic in Base Ten
By the end of grade 5, students learn to use efficient algorithms to fluently calculate sums, differences, and products of multi-digit whole numbers. They calculate quotients of multi-digit whole numbers with up to four-digit dividends and two-digit divisors. These calculations use strategies based on place value, the properties of operations, and the relationship between multiplication and division. Grade 5 students illustrate and explain these calculations with equations, rectangular arrays, and area diagrams.
In grade 5, students also calculate sums, differences, products, and quotients of decimals to hundredths, using concrete representations or drawings, and strategies based on place value, properties of operations, and the relationship between addition and subtraction. They connect their strategies to written methods and explain their reasoning.
In this unit, students learn an efficient algorithm for division and extend their use of other base-ten algorithms to decimals of arbitrary length. Because these algorithms rely on the structure of the base-ten system, students build on the understanding of place value and the properties of operations developed during earlier grades (MP7).
The unit begins with a lesson that revisits sums and differences of decimals to hundredths, and products of a decimal and whole number. The tasks are set in the context of shopping and budgeting, allowing students to be reminded of appropriate magnitudes for results of calculations with decimals.
The next section focuses on extending algorithms for addition, subtraction, and multiplication, which students used with whole numbers in earlier grades, to decimals of arbitrary length.
Students begin by using "base-ten diagrams," diagrams analogous to base-ten blocks for ones, tens, and hundreds. These diagrams show, for example, ones as large squares, tenths as rectangles, hundredths as medium squares, thousandths as small rectangles, and ten-thousandths as small squares. These are designed so that the area of a figure that represents a base-ten unit is one tenth of the area of the figure that represents the base-ten unit of next highest value. Thus, a group of 10 figures that represent 10 like base-ten units can be replaced by a figure whose area is the sum of the areas of the 10 figures.
Students first calculate sums of two decimals by representing each number as a base-ten diagram, combining representations of like base-ten units and replacing representations of 10 like units by a representation of the unit of next highest value, e.g., 10 rectangles compose 1 large square. Next, they examine "vertical calculations," representations of calculations with symbols that show one summand above the other, with the sum written below. They check each vertical calculation by representing it with base-ten diagrams. This is followed by a similar lesson on subtraction of decimals. The section concludes with a lesson designed to illustrate efficient algorithms and their advantages, and to promote their use.
The third section, multiplication of decimals, begins by asking students to estimate products of a whole number and a decimal, allowing students to be reminded of appropriate magnitudes for results of calculations with decimals. In this section, students extend their use of efficient algorithms for multiplication from whole numbers to decimals. They begin by writing products of decimals as products of fractions, calculating the product of the fractions, then writing the product as a decimal. They discuss the effect of multiplying by powers of 0.1, noting that multiplying by 0.1 has the same effect as dividing by 10. Students use area diagrams to represent products of decimals. The efficient multiplication algorithms are introduced and students use them, initially supported by area diagrams.
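For example, a product such as \(1.5 \boldcdot 0.12\) can be rewritten as \(\frac{15}{10} \boldcdot \frac{12}{100} = \frac{180}{1000} = 0.18\), making visible how the placement of the decimal point in the product follows from the denominators.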
In the fourth section, students learn long division. They begin with quotients of whole numbers, first representing these quotients with base-ten diagrams, then proceeding to efficient algorithms, initially supporting their use with base-ten diagrams. Students then tackle quotients of whole numbers that result in decimals, quotients of decimals and whole numbers, and finally quotients of decimals.
The unit ends with two lessons in which students use calculations with decimals to solve problems set in real-world contexts. These require students to interpret diagrams, and to interpret results of calculations in the contexts from which they arose (MP2). The second lesson draws on work with geometry and ratios from previous units. Students fold papers of different sizes to make origami boxes of different dimensions, then compare the lengths, widths, heights, and surface areas of the boxes.
In this unit, teachers can anticipate students using language for mathematical purposes such as explaining, interpreting and comparing. Throughout the unit, students will benefit from routines designed to grow robust disciplinary language, both for their own sense-making and for building shared understanding with peers. Teachers can formatively assess how students are using language in these ways, particularly when students are using language to:
Explain
processes of estimating and finding costs (Lesson 1)
approaches to adding and subtracting decimals (Lesson 4)
reasoning about products and quotients involving powers of 10 (Lesson 5)
methods for multiplying decimals (Lesson 8)
reasoning about relationships among measurements (Lesson 15)
Interpret
representations of decimals (Lesson 2)
base ten diagrams showing addition/subtraction of decimals (Lesson 3)
area diagrams showing products of decimals (Lesson 7)
base ten diagrams and long division when the quotient is a decimal value (Lesson 11)
Compare
base ten diagrams with numerical calculations (Lesson 4)
base ten diagrams showing quotients with partial quotient method (Lesson 9)
previously studied methods for finding quotients with long division (Lesson 10)
In addition, students are expected to describe decimal values up to hundredths, generalize about multiplication by powers of 10 and about decimal measurements, critique approaches to operations on decimals, and justify strategies for finding quotients with reference to base-ten diagrams and more efficient algorithms.
6.5.1 digits
6.5.2 base-ten diagram
vertical calculation place value
6.5.3 unbundle
6.5.4 method
6.5.5 powers of 10 product
decimal point
6.5.7 partial products method
6.5.9 partial quotients remainder
6.5.10 long division divisor
6.5.13 long division
6.5.14 precision
6.5.15 operation
Unit 6: Expressions and Equations
Students begin the unit by working with linear equations that have single occurrences of one variable, e.g., \(x + 1 = 5\) and \(4x = 2\). They represent relationships with tape diagrams and with linear equations, explaining correspondences between these representations. They examine values that make a given linear equation true or false, and what it means for a number to be a solution to an equation. Solving equations of the form \(px = q\) where \(p\) and \(q\) are rational numbers can produce complex fractions (i.e., quotients of fractions), so students extend their understanding of fractions to include those with numerators and denominators that are not whole numbers.
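For example, solving \(\frac{2}{3}x = \frac{1}{2}\) gives \(x = \frac{1/2}{2/3} = \frac{1}{2} \boldcdot \frac{3}{2} = \frac{3}{4}\), a quotient of fractions of the kind referred to here.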
The second section introduces balanced and unbalanced "hanger diagrams" as a way to reason about solving the linear equations of the first section. Students write linear equations to represent situations, including situations with percentages, solve the equations, and interpret the solutions in the original contexts (MP2), specifying units of measurement when appropriate (MP6). They represent linear expressions with tape diagrams and use the diagrams to identify values of variables for which two linear expressions are equal. Students write linear expressions such as \(6w - 24\) and \(6(w - 4)\) and represent them with area diagrams, noting the connection with the distributive property (MP7). They use the distributive property to write equivalent expressions.
In the third section of the unit, students write expressions with whole-number exponents and whole-number, fraction, or variable bases. They evaluate such expressions, using properties of exponents strategically (MP5). They understand that a solution to an equation in one variable is a number that makes the equation true when the number is substituted for all instances of the variable. They represent algebraic expressions and equations in order to solve problems. They determine whether pairs of numerical exponential expressions are equivalent and explain their reasoning (MP3). By examining a list of values, they find solutions for simple exponential equations of the form \(a = b^x\), e.g., \(2^x = 32\), and simple quadratic and cubic equations, e.g., \(64 = x^3.\)
In the last section of the unit, students represent collections of equivalent ratios as equations. They use and make connections between tables, graphs, and linear equations that represent the same relationships (MP1).
In this unit, teachers can anticipate students using language for mathematical purposes such as interpreting, describing and explaining. Throughout the unit, students will benefit from routines designed to grow robust disciplinary language, both for their own sense-making and for building shared understanding with peers. Teachers can formatively assess how students are using language in these ways, particularly when students are using language to:
Interpret
tape diagrams involving letters that stand for numbers (Lesson 1)
the parts of an equation (Lesson 2)
descriptions of situations (Lesson 6)
numerical expressions involving exponents (Lesson 13)
different representations of the same relationship between quantities (Lesson 17)
Describe
how parts of an equation represent parts of a story (Lesson 2)
solutions to equations (Lesson 2)
stories represented by given equations (Lesson 5)
patterns of growth that can be represented using exponents (Lesson 12)
relationships between independent and dependent variables (Lesson 16)
Explain
the meaning of a solution using hanger diagrams (Lesson 3)
how to solve an equation (Lesson 4)
how to use equations to solve percent problems (Lesson 7)
how to determine whether two expressions are equivalent, including with reference to diagrams (Lesson 8)
strategies for determining whether expressions are equivalent (Lesson 13)
the process of evaluating variable exponential expressions (Lesson 15)
In addition, students are expected to compare equations with balanced hanger diagrams and with descriptions of situations, represent quantities with mathematical expressions, generalize about equivalent numerical expressions using rectangle diagrams and the distributive property, justify claims about equivalent variable expressions using rectangle diagrams and the distributive property, and justify reasoning when evaluating and comparing numerical expressions with exponents.
6.6.1 value (of a variable) operation
6.6.2 variable
solution to an equation
true equation / false equation value (of a variable)
6.6.3 each side
balanced hanger
6.6.4 solve (an equation) each side
6.6.7 true equation / false equation
6.6.8 equivalent expressions
6.6.9 term
distributive property
area as a product
area as a sum
6.6.12 to the power
6.6.13 base (of an exponent) to the power
6.6.14 solution to an equation
6.6.16 independent variable
dependent variable variable
6.6.17 coordinate plane
6.6.18 horizontal axis
vertical axis
Unit 7: Rational Numbers
In this unit, students are introduced to signed numbers and plot points in all four quadrants of the coordinate plane for the first time. They work with simple inequalities in one variable and learn to understand and use "common factor," "greatest common factor," "common multiple," and "least common multiple."
The first section of the unit introduces signed numbers. Students begin by considering examples of positive and negative temperatures, plotting each temperature on a vertical number line on which 0 is the only label. Next, they consider examples of positive and negative numbers used to denote height relative to sea level. In the second lesson, they plot positive and negative numbers on horizontal number lines, including "opposites"—pairs of numbers that are the same distance from zero. They use "less than," "greater than," and the corresponding symbols to describe the relationship of two signed numbers, noticing correspondences between the relative positions of two numbers on the number line and statements that use these symbols, e.g., \(0.8 > \text- 1.3\) means that 0.8 is to the right of -1.3 on the number line. Students learn that the sign of a number indicates whether the number is positive or negative, and that zero has no sign. They learn that the absolute value of a number is its distance from zero, how to use absolute value notation, and that opposites have the same absolute value because they have the same distance from zero.
Previously, when students worked only with non-negative numbers, magnitude and order were indistinguishable: if one number was greater than another, then on the number line it was always to the right of the other number and always farther from zero. In comparing two signed numbers, students distinguish between magnitude (the absolute value of a number) and order (relative position on the number line), distinguishing between "greater than" and "greater absolute value," and "less than" and "smaller absolute value."
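For example, \(\text- 7 < 2\) because -7 is to the left of 2 on the number line, yet \(|\text- 7| > |2|\) because -7 is farther from zero; keeping these two comparisons separate is exactly the distinction being drawn here.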
Students examine opposites of numbers, noticing that the opposite of a negative number is positive.
The second section of the unit concerns inequalities. Students graph simple inequalities in one variable on the number line, using a circle or disk to indicate when a given point is, respectively, excluded or included. In these materials, inequality symbols in grade 6 are limited to < and > rather than \(\leq \) and \(\geq.\) However, in this unit students encounter situations when they need to represent statements such as \(2 < x\) or \(2 = x.\)
Students represent situations that involve inequalities, symbolically and with the number line, understanding that there may be infinitely many solutions for an inequality. They interpret and graph solutions in contexts (MP2), understanding that some results do not make sense in some contexts, and thus the graph of a solution might be different from the graph of the related symbolic inequality. For example, the graph describing the situation "A fishing boat can hold fewer than 9 people" omits values other than the whole numbers from 0 to 8, but the graph of \(x < 9\) includes all numbers less than 9. Students encounter situations that require more than one inequality statement to describe, e.g., "It rained for more than 10 minutes but less than 30 minutes" (\(t > 10\) and \(t < 30\), where \(t\) is the amount of time that it rained in minutes) but which can be described by one number line graph.
The third section of the unit focuses on the coordinate plane. In grade 5, students learned to plot points in the coordinate plane, but they worked only with non-negative numbers, thus plotted points only in the first quadrant. In a previous unit, students again worked in the first quadrant of the coordinate plane, plotting points to represent ratio and other relationships between two quantities with positive values. In this unit, students work in all four quadrants of the coordinate plane, plotting pairs of signed number coordinates in the plane. They understand that for a given data set, there are more and less strategic choices for the scale and extent of a set of axes. They understand the correspondence between the signs of a pair of coordinates and the quadrant of the corresponding point. They interpret the meanings of plotted points in given contexts (MP2), and use coordinates to calculate horizontal and vertical distances between two points.
The last section of the unit returns to consideration of whole numbers. In the first lesson, students are introduced to "common factor" and "greatest common factor," and solve problems that illustrate how the greatest common factor of two numbers can be used in real-world situations, e.g., determining the largest rectangular tile with whole-number dimensions that can tile a given rectangle with whole-number dimensions. The second lesson introduces "common multiple" and "least common multiple," and students solve problems that involve listing common multiples or identifying common multiples of two or more numbers. In the third and last lesson, students solve problems that revisit situations similar to those in the first two lessons and identify which of the new concepts is involved in each problem. This lesson includes two optional classroom activities.
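Concretely, the greatest common factor of 12 and 18 is 6, so the largest square tile with a whole-number side length that can tile a 12-by-18 rectangle has 6-unit sides; and the least common multiple of 4 and 6 is 12, the smallest number that appears in both lists of multiples.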
In this unit, teachers can anticipate students using language for mathematical purposes such as describing, interpreting, justifying, and generalizing. Throughout the unit, students will benefit from routines designed to grow robust disciplinary language, both for their own sense-making and for building shared understanding with peers. Teachers can formatively assess how students are using language in these ways, particularly when students are using language to:
Describe and interpret
situations involving negative numbers (Lesson 1)
features of a number line (Lessons 2, 4 and 6)
situations involving elevation (Lesson 7)
situations involving minimums and maximums (Lesson 8)
points on a coordinate plane (Lessons 11 and 14)
situations involving factors and multiples (Lesson 18)
Justify
reasoning about magnitude (Lesson 3)
reasoning about a situation involving negative numbers (Lesson 5)
reasoning about solutions to inequalities (Lesson 9)
that all possible pairs of factors have been identified (Lesson 16)
Generalize
the meaning of integers for a specific context (Lesson 5)
understandings of solutions to inequalities (Lesson 9)
about the relationships between shapes (Lesson 10)
about greatest common factors (Lesson 16)
about least common multiples (Lesson 17)
In addition, students are expected to critique the reasoning of others, represent inequalities symbolically and in words, and explain how to order rational numbers and how to determine distances on the coordinate plane. Students also have opportunities to use language to compare magnitudes of positive and negative numbers, compare features of ordered pairs, and compare appropriate axes for different sets of coordinates.
6.7.1 positive number
negative number
degrees Celsius
sea level number line
below zero
6.7.2 opposite (numbers)
rational number
distance (away) from zero
6.7.3 sign
closer to 0
farther from 0 greater than
6.7.4 from least to greatest temperature
6.7.5 positive change
negative change
6.7.6 absolute value positive number
6.7.7 closer to 0
farther from 0
6.7.8 maximum
6.7.9 requirement
solution to an inequality
6.7.10 unbalanced hanger inequality
6.7.11 quadrant
\(x\)-coordinate
\(y\)-coordinate
6.7.12 (line) segment axis
6.7.13 degrees Fahrenheit degrees Celsius
6.7.14 absolute value
6.7.16 common factor
greatest common factor (GCF) factor
6.7.17 common multiple
least common multiple (LCM) multiple
Unit 8: Data Sets and Distributions
In this unit, students learn about populations and study variables associated with a population. They understand and use the terms "numerical data," "categorical data," "survey" (as noun and verb), "statistical question," "variability," "distribution," and "frequency." They make and interpret histograms, bar graphs, tables of frequencies, and box plots. They describe distributions (shown on graphical displays) using terms such as "symmetrical," "peaks," "gaps," and "clusters." They work with measures of center—understanding and using the terms "mean," "average," and "median." They work with measures of variability—understanding and using the terms "range," "mean absolute deviation" or MAD, "quartile," and "interquartile range" or IQR. They interpret measurements of center and variability in contexts.
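For instance, for the data set 2, 3, 5, 10, the mean is 5, the distances from the mean are 3, 2, 0, and 5, and so the mean absolute deviation is \(\frac{3+2+0+5}{4} = 2.5\).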
In this unit, teachers can anticipate students using language for mathematical purposes such as justifying, representing, and interpreting. Throughout the unit, students will benefit from routines designed to grow robust disciplinary language, both for their own sense-making and for building shared understanding with peers. Teachers can formatively assess how students are using language in these ways, particularly when students are using language to:
Justify
reasoning for matching data sets to questions (Lesson 2)
reasoning about dot plots (Lesson 3)
reasoning about mean and median (Lesson 13)
reasoning about changes in mean and median (Lesson 14)
reasoning about which information is needed (Lesson 17)
which summaries and graphs best represent given data sets (Lesson 18)
Represent
data using dot plots (Lessons 3 and 4)
data using histograms (Lesson 7)
mean using bar graphs (Lesson 9)
data with five number summaries (Lesson 15)
data using box plots (Lesson 16)
Interpret
dot plots (Lessons 4 and 11)
histograms (Lessons 6 and 18)
mean of a data set (Lesson 9)
five number summaries (Lesson 15)
box plots (Lesson 16)
In addition, students are expected to critique the reasoning of others, describe how quantities are measured, describe and compare features and distributions of data sets, generalize about means and distances in data sets, generalize categories for sorting data sets, and generalize about statistical questions. Students are also expected to use language to compare questions that produce numerical and categorical data, compare dot plots and histograms, and compare histograms and bar graphs.
6.8.1 numerical data
dot plot
6.8.2 statistical question
6.8.3 distribution
frequency bar graph
6.8.4 typical
6.8.5 center
spread variability
6.8.6 histogram
bins distribution
6.8.8 symmetrical
unusual value numerical data
6.8.9 average
6.8.10 measure of center
6.8.11 mean absolute deviation (MAD)
measure of spread symmetrical
6.8.13 median measure of center
6.8.14 peak
unusual value
6.8.15 range
interquartile range (IQR)
five-number summary measure of spread
6.8.16 box plot
whisker median
6.8.18 dot plot
box plot
Unit 9: Putting it All Together
This optional unit consists of six lessons. Each of the first three lessons is independent of the others, requiring only the mathematics of the previous units. The last three lessons build on each other.
The first lesson concerns Fermi problems—problems that require making rough estimates for quantities that are difficult or impossible to measure directly (MP4). The three problems in this lesson involve measurement conversion and calculation of volumes and surface areas of three-dimensional figures or the relationship of distance, rate, and time.
The second lesson involves finding approximately equivalent ratios for groups from two populations, one very large (the population of the world) and one comparatively small (a 30-student class). Students work with percent rates that describe subgroups of the world population, e.g., about 59% of the world population lives in Asia. Using these rates, which include numbers expressed in the form \(\frac a b\) or as decimals, they determine, for example, the number of students who would live in Asia—"if our class were the world" (MP2). Because students choose their own methods to determine these numbers, possibly making strategic use of benchmark percentages or spreadsheets (MP5), there is an opportunity for them to see correspondences between approaches (MP1). Because the size of the world population and its subgroups are estimates, and because pairs of values in ratios may both be whole numbers, considerations of accuracy may arise (MP6).
The third lesson is an exploration of the relationship between the greatest common factor of two numbers, continued fractions, and decomposition of rectangles with whole-number side lengths, providing students an opportunity to perceive this relationship through repeated reasoning (MP8) and to see correspondences between two kinds of numerical relationships, and between numerical and geometric relationships (MP1).
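One way to see the connection: a 10-by-4 rectangle can be decomposed into two 4-by-4 squares and then two 2-by-2 squares, so the side length of the smallest squares is the greatest common factor \(\gcd(10, 4) = 2\), and the same counts of squares (2 and 2) appear in the continued fraction \(\frac{10}{4} = 2 + \frac{1}{2}\).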
The remaining three lessons explore the mathematics of voting (MP2, MP4). In some activities, students choose how to assign votes and justify their choices (MP3). The first of these lessons focuses on proportions of voters and votes cast in elections in which there are two choices. It requires only the mathematics of the previous units, in particular, equivalent ratios, part–part ratios, percentages, unit rates, and, in the final activity, the concept of area. The second of these lessons focuses on methods for voting when there are more than two choices: plurality, runoff, and instant runoff. Students compute percentages, finding that different voting methods have different outcomes. The third of these lessons focuses on representation in the case when voters have two choices. It's not always possible to have the same number of constituents per representative. How can we fairly share a small number of representatives? Students again compute percentages to find outcomes.
In this unit, teachers can anticipate students using language for mathematical purposes such as critiquing, justifying, and comparing. Throughout the unit, students will benefit from routines designed to grow robust disciplinary language, both for their own sense-making and for building shared understanding with peers. Teachers can formatively assess how students are using language in these ways, particularly when students are using language to:
Critique
reasoning about Fermi problems (Lesson 1)
claims about percentages (Lesson 4)
reasoning about the fairness of voting systems (Lesson 6)
Justify
reasoning about the fairness of voting systems (Lessons 5 and 6)
Compare
rectangles and fractions (Lesson 3)
voting systems (Lesson 5)
In addition, students are expected to interpret and represent characteristics of the world population, describe distributions of voters, and generalize about decomposition of area and numbers.
6.9.3 mixed number
6.9.4 in favor
6.9.5 plurality
runoff majority
6.9.6 in all
\begin{document}
\title{Learning Aggregation Functions}
\begin{abstract}
Learning on sets is increasingly gaining attention in the machine
learning community, due to its widespread applicability. Typically,
representations over sets are computed by using fixed aggregation
functions such as sum or maximum. However, recent results showed
that universal function representation by sum- (or max-)
decomposition requires either highly discontinuous (and thus poorly
learnable) mappings, or a latent dimension equal to the maximum
number of elements in the set. To mitigate this problem, we
introduce a learnable aggregation function (LAF)
for sets of arbitrary cardinality. LAF can approximate
several extensively used aggregators (such as average, sum, maximum)
as well as more complex functions (e.g., variance and skewness). We
report experiments on semi-synthetic and real data showing that LAF
outperforms state-of-the-art sum- (max-) decomposition architectures
such as DeepSets and library-based architectures like Principal Neighborhood Aggregation, and can be effectively combined with attention-based architectures.
\end{abstract}
\section{Introduction}
The need to aggregate representations is ubiquitous in deep learning. Some recent examples include max-over-time pooling used in convolutional networks for sequence classification~\cite{DBLP:conf/emnlp/Kim14}, average pooling of neighbors in graph convolutional networks \cite{DBLP:conf/iclr/KipfW17}, max-pooling in Deep Sets~\cite{deepsets2017}, in (generalized) multi-instance learning~\cite{DBLP:journals/jmlr/TiboJF20} and in GraphSAGE~\cite{DBLP:conf/nips/HamiltonYL17}. In all the above cases (with the exception of LSTM-pooling in GraphSAGE) the aggregation function is predefined, i.e., not tunable, which may be in general a disadvantage~\cite{DBLP:conf/icml/IlseTW18}. Sum-based aggregation has been advocated based on theoretical findings showing the permutation invariant functions can be sum-decomposed~\cite{deepsets2017,xu2018how}. However, recent results~\cite{wagstaff2019limitations} showed that this universal function representation guarantee requires either highly discontinuous (and thus poorly learnable) mappings, or a latent dimension equal to the maximum number of elements in the set. This suggests that learning set functions that are accurate on sets of large cardinality is difficult.
Inspired by previous work on learning uninorms~\cite{MelnHull16}, we propose a new parametric family of aggregation functions that we call LAF, for {\em learnable aggregation functions}. A single LAF unit can approximate standard aggregators like sum, max or mean as well as model intermediate behaviours (possibly different in different areas of the space). In addition, LAF layers with multiple aggregation units can approximate higher order moments of distributions like variance, skewness or kurtosis. In contrast, other authors~\cite{corso2020principal} suggest to employ a predefined library of elementary aggregators to be combined. Since LAF can represent sums, it can be seen as a smooth version of the class of functions that are shown in~\cite{deepsets2017} to enjoy universality results in representing set functions. The hope is that being smoother, LAF is more easily learnable. Our empirical findings show that this can be actually the case, especially when asking the model to generalize over large sets. In particular, we find that: \begin{itemize} \item LAF layers can learn a wide range of aggregators (including higher-order moments) on sets of scalars without background knowledge on the nature of the aggregation task. \item LAF layers on the top of traditional layers can learn the same wide range of aggregators on sets of high dimensional vectors (MNIST images). \item LAF outperforms state-of-the-art set learning methods such as DeepSets and PNA on real-world problems involving point clouds and text concept set retrieval. \item LAF performs comparably to PNA on random graph generation tasks, outperforming several graph neural networks architectures including GAT~\cite{gat} and GIN~\cite{xu2018how}. \end{itemize}
\section{The LAF Framework} \label{sec:laf} \begin{table*}[t]
\centering
\begin{small}
\begin{sc}
\begin{tabular}{lc|cc|cc|cc|cc|cccc|l}
\toprule
Name & Definition & $a$ &$ b$ & $c$ &$ d$ & $e$ & $f$ & $g$ &$ h$ & $\alpha$ & $\beta$ & $\gamma$ & $\delta$ & limits \\
\midrule
constant & $\kappa\in\mathbb{R}$ & 0 & 1 & - & - & 0 & 1 & - & - & $\kappa$ & 0 & 1 & 0 & \\
max & $\max_i x_i$ & $1/r$ & $r$ & - & - & 0 & 1 & - & - & 1 & 0 & 1 & 0 & $r\rightarrow\infty$\\
min& $\min_i x_i$ & 0 & 1 & $1/r$ & $r$ & 0 & 1 & - & - & 1 &-1 & 1 & 0 & $r\rightarrow\infty$\\
sum & $\sum_i x_i$ & 1 & 1 & - & - & 0 & 1 & - & - & 1 & 0 & 1 & 0 & \\
nonzero count& $|\{i: x_i\neq 0\}|$ & 1 & 0 & - & - & 0 & 1 & - & - & 1 & 0 & 1 & 0 & \\
mean & $1/N \sum_i x_i$ & 1 & 1 & - & - & 1 & 0 & - & - & 1 & 0 & 1 & 0 & \\
$k$th moment & $1/N \sum_i x_i^k$ & 1 & $k$ & - & - & 1 & 0 & - & - & 1 & 0 & 1 & 0 & \\
$l$th power of $k$th moment& $(1/N \sum_i x_i^k)^l$ & $l$ & $k$ & - & - & $l$ & 0 & - & - & 1 & 0 & 1 & 0 & \\
min/max & $\min_i x_i/\max_i x_i $ & 0 & 1 & $1/r$ & $r$ & $1/s$ & $s$ & - & - & 1 & 1 & 1 & 0 & $r,s\rightarrow\infty$\\
max/min & $\max_i x_i/\min_i x_i $ & $1/r$ & $r$ & - & - & 0 & 1 & $1/s$ & $s$ & 1 & 0 & 1 & 1& $r,s\rightarrow\infty$\\
\bottomrule
\end{tabular}
\end{sc}
\end{small}
\caption{Different functions achievable by varying the parameters in the formulation in Equation~\ref{eq:foursigma}.}
\label{tab:foursigma_variants}
\end{table*} We use $\boldsymbol x=\{x_1,\ldots,x_N\}$ to denote finite multisets of real numbers $x_i\in \mathbb{R}$. Note that directly taking $\boldsymbol x$ to be a multiset, not a vector, means that there is no need to define properties like exchangeability or permutation equivariance for operations on $\boldsymbol x$. An aggregation function $\emph{agg}$ is any function that returns for any multiset $\boldsymbol x$ of arbitrary cardinality $N\in\mathbb{N}$ a value $\emph{agg}(\boldsymbol x)\in\mathbb{R}$.
Standard aggregation functions like \emph{mean} and \emph{max} can be understood as (normalized) $L_p$-norms. We therefore build our parametric LAF aggregator around functions of a form that generalizes $L_p$-norms: \begin{equation}
\label{eq:genLp}
L_{a,b}(\boldsymbol x):=\left( \sum_i x_i^b\right)^a\hspace{5mm} (a,b \geq 0). \end{equation}
$L_{a,b}$ is invariant under the addition of zeros: $L_{a,b}(\boldsymbol x) = L_{a,b}(\boldsymbol x\cup \mathbf{0})$ where $\mathbf{0}$ is a multiset of zeros of arbitrary cardinality. In order to also enable aggregations that can represent \emph{conjunctive} behaviour such as \emph{min}, we make symmetric use of aggregators of the multisets $\mathbf{1}-\boldsymbol x := \{1-x_i|x_i\in\boldsymbol x\}$. For $L_{a,b}(\mathbf{1}-\boldsymbol x)$ to be a well-behaved, dual version of $ L_{a,b}(\boldsymbol x)$, the values in $\boldsymbol x$ need to lie in the range $[0,1]$. We therefore restrict the following definition of our \emph{learnable aggregation function} to sets $\boldsymbol x$ whose elements are in $[0,1]$: \begin{equation}
\label{eq:foursigma}
{\rm LAF}(\boldsymbol x):= \frac{\alpha L_{a,b}(\boldsymbol x) + \beta L_{c,d}(\mathbf{1}-\boldsymbol x)}
{\gamma L_{e,f}(\boldsymbol x) +\delta L_{g,h}(\mathbf{1}-\boldsymbol x) } \end{equation} defined by tunable parameters $a,\ldots,h\geq 0$, and $\alpha,\ldots,\delta \in\mathbb{R}$. In cases where sets need to be aggregated whose elements are not already bounded by $0,1$, we apply a sigmoid function to the set elements prior to aggregation.
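As a concrete illustration of Equation~\ref{eq:foursigma}, the following minimal NumPy sketch evaluates a single {\rm LAF}{} unit on a set of scalars; the small constant \texttt{eps} guarding the denominator is our own implementation choice and is not part of the definition.
\begin{verbatim}
import numpy as np

def L(x, a, b):
    # L_{a,b}(x) = (sum_i x_i**b)**a
    return np.sum(x ** b) ** a

def laf(x, alpha, beta, gamma, delta, a, b, c, d, e, f, g, h, eps=1e-8):
    # x: 1-D array of set elements, assumed to lie in [0, 1]
    num = alpha * L(x, a, b) + beta * L(1.0 - x, c, d)
    den = gamma * L(x, e, f) + delta * L(1.0 - x, g, h)
    return num / (den + eps)

x = np.array([0.2, 0.7, 0.9])
# Mean parameterization from Table 1: a=b=1, e=1, f=0, alpha=gamma=1, beta=delta=0
# (the unused parameters c, d, g, h can take any admissible value)
print(laf(x, 1, 0, 1, 0, 1, 1, 0, 1, 1, 0, 0, 1))   # ~0.6, the mean of x
\end{verbatim}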
Table~\ref{tab:foursigma_variants} shows how a number of important aggregation functions are special cases of {\rm LAF}{} (for values in $[0,1]$). We make repeated use of the fact that $L_{0,1}$ returns the constant 1. For max and min {\rm LAF}{} only provides an asymptotic approximation in the limit of specific function parameters (as indicated in the last column of Table~\ref{tab:foursigma_variants}).
In most cases, the parameterization of {\rm LAF}{}
for the functions in Table~\ref{tab:foursigma_variants} will not be unique.
Being able to encode the powers of moments implies that e.g. the variance of $\boldsymbol x$ can be expressed as the difference $1/N \sum_i x_i^2 - (1/N \sum_i x_i)^2$ of two {\rm LAF}{} aggregators.
Since {\rm LAF}{} includes sum-aggregation, we can adapt the results of~\cite{deepsets2017} and~\cite{wagstaff2019limitations} on the theoretical universality of sum-aggregation as follows.
\begin{proposition} \label{prop:universal}
Let ${\cal X}\subset \mathds{R}$ be countable, and $f$ a function defined on finite multisets with
elements from ${\cal X}$. Then there exist functions $\phi: {\cal X}\rightarrow [0,1]$,
$\rho: \mathds{R} \rightarrow \mathds{R}$, and a parameterization of {\rm LAF}{}, such that
$f(\boldsymbol x)= \rho (LAF(\phi\boldsymbol x);\alpha,\beta,\gamma,\delta,a,b,c,d)$, where
$\phi\boldsymbol x$ is the multiset $\{\phi(x)|x\in\boldsymbol x\}$. \end{proposition}
A proof in~\cite{wagstaff2019limitations} for a very similar proposition used a mapping from ${\cal X}$ into the reals. Our requirement that {\rm LAF}{} inputs must be in $[0,1]$ requires a modification of the proof (contained in the supplementary material\footnote{See \url{https://github.com/alessandro-t/laf} for supplementary material and code.}), which for the definition of $\phi$ relies on a randomized construction. Proposition~\ref{prop:universal} shows that we retain the theoretical universality guarantees of~\cite{deepsets2017}, while enabling a wider range of solutions based on continuous encoding and decoding functions.
\begin{figure*}
\caption{Left: End-to-end LAF architecture. Right: {\rm LAF}{} functions with randomly generated parameters.}
\label{fig:randlafs}
\end{figure*}
LAF enables a continuum of intermediate and hybrid aggregators. In Figure~\ref{fig:randlafs} we plot 4 different randomly generated {\rm LAF}{} functions over $[0,1]\times [0,1]$, i.e., evaluated over sets of size 2. Parameters $\alpha,\ldots,\gamma$ were randomly sampled in $[0,1]$, $b,d,f,h$ in $\{0,\ldots,5\}$, and $a,c,e,g$ obtained as $1/i$ with $i$ a random integer from $\{0,\ldots,5\}$. The figure illustrates the rich repertoire of aggregation functions with different qualitative behaviours already for non-extreme parameter values.
Learning the functions depicted in Table~\ref{tab:foursigma_variants} can in principle be done by a single {\rm LAF}{} unit. However, learning complex aggregation functions might require a larger number of independent units, in that the final aggregation is the result of the combination of simpler aggregations. Moreover, a {\rm LAF}{} layer should be able to approximate the behaviour of simpler functions also when multiple units are used. Therefore, we analyzed the application of multiple {\rm LAF}{} units to some of the known functions in Table~\ref{tab:foursigma_variants}. The details and the visual representation of this analysis are shown in the supplementary material. Although using only one function is sometimes sufficient to closely approximate the target function, the error variance among different runs is quite high, indicating that the optimization sometimes fails to converge to a good set of parameters. Multiple units provide more stability while performing better than a single-unit aggregation in some cases. We therefore construct the {\rm LAF}{} architecture for the experimental section by using multiple aggregators, computing the final aggregation as a linear combination of the units' aggregations. Several {\rm LAF}{} units can be combined as shown in Figure~\ref{fig:randlafs}, to capture different aspects of the input set, which can be in general a multiset of vectors $\boldsymbol x=\{x_1, \dots, x_N\}$, where now $x_i\in\mathbb{R}^d$. Note that multiple aggregators are also used in related frameworks such as DeepSets~\cite{deepsets2017} or Graph Neural Networks~\cite{gat,corso2020principal}. A module with $r$ {\rm LAF}{} units takes as input $d$-dimensional vectors and produces a vector of size $r \times d$ as output. Each LAF unit performs an \textit{element-wise} aggregation of the vectors in the set such that $L_{k,j}=\mathrm{LAF}(\{x_{1,j},\ldots,x_{N,j}\};\alpha_k,\beta_k,\gamma_k,\delta_k,a_k,b_k,c_k,d_k)$ for $k=1,\ldots,r$ and $j=1,\ldots,d$. The output vector can then be fed into the next layer.
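A minimal, non-learnable sketch of such a layer is given below (our own illustration; a trainable version would register the parameters with the optimizer and project $a,\ldots,h$ onto nonnegative values, as described in Section~\ref{sec:exps:scalars}).
\begin{verbatim}
import numpy as np

def laf_layer(X, params, eps=1e-8):
    # X: (N, d) array holding the N set elements (values in [0, 1])
    # params: list of r tuples (alpha, beta, gamma, delta, a, b, c, d, e, f, g, h)
    rows = []
    for alpha, beta, gamma, delta, a, b, c, d, e, f, g, h in params:
        num = alpha * np.sum(X ** b, axis=0) ** a \
            + beta * np.sum((1 - X) ** d, axis=0) ** c
        den = gamma * np.sum(X ** f, axis=0) ** e \
            + delta * np.sum((1 - X) ** h, axis=0) ** g
        rows.append(num / (den + eps))
    return np.stack(rows)   # shape (r, d), flattened before the next dense layer
\end{verbatim}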
\section{Related Work} \label{sec:rel}
Several studies address the problem of aggregating data over sets. Sum-decomposition strategies have been used in~\cite{deepsets2017} for points cloud classification and set expansion tasks and in \cite{santoro2017simple} for question answering and dynamic physical systems computation. Max, sum and average are standard aggregation functions for node neighborhoods in graph neural networks~\cite{DBLP:conf/nips/HamiltonYL17,DBLP:conf/iclr/KipfW17,xu2018how,gat}. \cite{deepsets2017} first proved universal representation results for these standard aggregators when combined with learned mappings over inputs and results of the aggregation. However, \cite{wagstaff2019limitations} showed that these universality results are of little practical use, as they either require highly discontinuous mappings that would be extremely difficult to learn, or a latent dimension that is at least the size of the maximum number of input elements.
\textit{Uninorms}~\cite{yager1996uninorm} are a class of aggregation functions in fuzzy logic that can behave in a \textit{conjunctive}, \textit{disjunctive} or \textit{averaging} manner depending on a parameter called \textit{neutral element}. \cite{MelnHull16} proposed to learn fuzzy aggregators by adjusting these learnable parameters, showing promising results on combining reviewers scores on papers into an overall decision of acceptance or reject. Despite the advantage of incorporating different behaviours in one single function, uninorms present discontinuities in the regions between aggregators, making them not amenable to be utilized in fully differentiable frameworks. Furthermore the range of possible behaviours is restricted to those commonly used in the context of fuzzy-logic.
The need for considering multiple candidate aggregators is advocated in a very recent work that was developed in parallel with our framework~\cite{corso2020principal}. The resulting architecture, termed \textit{Principal Neighborhood Aggregation} (PNA) combines multiple standard aggregators, including most of the ones we consider in the LAF framework, adjusting their outputs with degree scalers. However, the underlying philosophy is rather different. PNA aims at learning to select the appropriate aggregator(s) from a pool of candidates, while LAF explores a continuous space of aggregators that includes standard ones as extreme cases. Our experimental evaluation shows that PNA has troubles in learning aggregators that generalize over set sizes, despite having them in the pool of candidates, likely because of the quasi-combinatorial structure of its search space. On the other hand, LAF can successfully learn even the higher moment aggregators and consistently outperforms PNA.
Closely connected, but somewhat complementary to aggregation operators are \emph{attention mechanisms}~\cite{DBLP:journals/corr/BahdanauCB14, DBLP:conf/nips/VaswaniSPUJGKP17}. They have been explored to manipulate set data in~\cite{DBLP:conf/icml/LeeLKKCT19} and in the context of multi-instance learning~\cite{DBLP:conf/icml/IlseTW18}. Attention operates at the level of set elements, and aims at a transformation (weighting) of their representations such as to optimize a subsequent weighted sum-aggregation. Although the objectives of attention-based frameworks and {\rm LAF}{} are different in principle, our method can be used inside attention frameworks as the aggregation layer of the learned representation. We discuss the combination of {\rm LAF}{} and attention in Section~\ref{sec:stlaf} showing the advantage of using {\rm LAF}{}.
\section{Experiments} \label{sec:exps} In this section, we present and discuss experimental results showing the potential of the {\rm LAF}{} framework on both synthetic and real-world tasks. Synthetic experiments are aimed at showing the ability of {\rm LAF}{} to learn a wide range of aggregators and its ability to generalize over set sizes (i.e., having test-set sets whose cardinality exceeds the cardinality of the training-set sets), something that alternative architectures based on predefined aggregators fail to achieve. We use DeepSets, PNA, and LSTM as representatives of these architectures. The LSTM architecture corresponds to a version of DeepSets where the aggregation function is replaced by a LSTM layer. Experiments on diverse tasks including point cloud classification, text concept set retrieval and graph properties prediction are aimed at showing the potential of the framework on real-world applications.
\subsection{Scalars Aggregation}\label{sec:exps:scalars}
\begin{figure*}
\caption{Test performances for the synthetic experiment with integer scalars on increasing test set size. The x axis represents the maximum test set cardinality, the y axis depicts the MAE. The dot, star, diamond and triangle symbols denote LAF, DeepSets, PNA, and LSTM respectively. Skewness: $1/N \sum_i ((x_i - \hat{\mu})/\hat{\sigma})^3$, Kurtosis: $1/N \sum_i ((x_i - \hat{\mu})/\hat{\sigma})^4$, where $\hat{\mu}$ and $\hat{\sigma}$ are the sample mean and standard deviation.}
\label{fig:int_all_small}
\end{figure*} This section shows the learning capacity of the {\rm LAF}{} framework to learn simple and complex aggregation functions where constituents of the sets are simple numerical values. In this setting we consider sets made of scalar integer values. The training set is constructed as follows: for each set, we initially sample its cardinality $K$ from a uniform distribution taking values in $\{2, \dots, M\}$, and then we uniformly sample $K$ integers in $\{0,\ldots,9\}$. For the training set we use $M=10$. We construct several test sets for different values of $M$ ($M=5,10,15,20,25,30,35,40,45,50$). This implies that models need to generalize to larger set sizes. Contrarily to the training set, each test set is constructed in order to diversify the target labels it contains, so as to avoid degenerate behaviours for large set sizes (e.g., maximum constantly equal to 9). Each synthetic dataset is composed of 100,000 sets for training, 20,000 set for validating and 100,000 for testing. The number of aggregation units is set as follows. The model contains nine {\rm LAF}{} (Equation~\ref{eq:foursigma}) units, whose parameters $\{a_k,\dots,h_k\}$, $k=1,\ldots,9$ are initialized according to a uniform sampling in $[0,1]$ as those parameters must be positive, whereas the coefficients $\{\alpha, \dots, \delta \}$ are initialized with a Gaussian distribution with zero mean and standard deviation of 0.01 to cover also negative values. The positivity constraint for parameters $\{a,b,\ldots,h\}$ is enforced by projection during the optimization process. The remaining parameters can take on negative values. DeepSets also uses nine units: three max units, three sum units, and three mean units and PNA uses seven units: mean, max, sum, standard deviation, variance, skewness and kurtosis. Preliminary experiments showed that expanding the set of aggregators for PNA with higher order moments only leads to worse performance. Each set of integers is fed into an embedding layer (followed by a sigmoid) before performing the aggregation function. DeepSets and PNA do need an embedding layer (otherwise they would have no parameters to be tuned). Although {\rm LAF}{} does not need an embedding layer, we used it in all models to make the comparison more uniform. The architecture details are reported in the supplementary material. We use the Mean Absolute Error (MAE) as a loss function to calculate the prediction error.
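To make the data-generation step concrete, a small sketch follows (our own; the label-diversification procedure used for the test sets is omitted, and \texttt{np.median} stands in for whichever target aggregator is being learned).
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def sample_set(max_card=10):
    k = rng.integers(2, max_card + 1)     # cardinality K ~ U{2, ..., M}
    return rng.integers(0, 10, size=k)    # K integers drawn from {0, ..., 9}

def make_split(n_sets, target=np.median, max_card=10):
    sets = [sample_set(max_card) for _ in range(n_sets)]
    labels = np.array([target(s) for s in sets], dtype=float)
    return sets, labels

train_sets, train_labels = make_split(100_000, max_card=10)
\end{verbatim}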
Figure \ref{fig:int_all_small} shows the trend of the MAE for the four methods for increasing test set sizes, for different types of target aggregators. As expected, DeepSets manages to learn the identity function and thus correctly models aggregators like sum, max and mean. Even if {\rm LAF}{} needs to adjust its parameters in order to properly aggregate the data, its performance is competitive with that of DeepSets. When moving to more complex aggregators like inverse count, median or moments of different orders, DeepSets fails to learn the latent representation. On the other hand, the performance of {\rm LAF}{} is very stable for growing set sizes. While having in principle at its disposal most of the target aggregators (including higher order moments), PNA badly overfits over the cardinality of sets in the training set in all cases (remember that the training set contains sets of cardinality at most 10). The reason why {\rm LAF}{} substantially outperforms PNA on large set sizes could be explained in terms of a greater flexibility to adapt to the learnt representation. Indeed, {\rm LAF}{} parameters can adjust the \textit{laf} function to be compliant with the latent representation even if the input mapping fails to learn the identity. On the other hand, having a bunch of fixed, hard-coded aggregators, PNA needs to be able to both learn the identity mapping and select the correct aggregator among the candidates. Finally, LSTM results are generally poor when compared to the other methods, particularly in the case of the count and the sum.
\subsection{MNIST Digits}\label{sec:exps:mnist} We performed an additional set of experiments aiming to demonstrate the ability of {\rm LAF}{} to learn from more complex representations of the data by plugging it into end-to-end differentiable architectures. In these experiments, we thus replaced numbers by visual representations obtained from MNIST digits. Unlike the model proposed in the previous section, here we use three dense layers for learning picture representations before performing the aggregation function. Results obtained in this way are very similar to those obtained with numerical inputs and due to space limitations we report them along with other architectural and experimental details in the supplementary material.
\subsection{Point Clouds}\label{sec:exps:point:cloud} In order to evaluate {\rm LAF}{} on real-world dataset, we consider point cloud classification, a prototype task for set-wise prediction. Therefore, we run experimental comparisons on the ModelNet40~\cite{wu20153d} dataset, which consists of 9,843 training and 2,468 test point clouds of objects distributed over 40 classes. The dataset is preprocessed following the same procedure described by~\cite{deepsets2017}. We create point clouds of 100 and 1,000 three-dimensional points by adopting the point-cloud library's sampling routine developed by~\cite{rusu20113d} and normalizing each set of points to have zero mean (along each axis) and unit (global) variance. We refer with P100 and P1000 to the two datasets. For all the settings, we consider the same architecture and hyper-parameters of the DeepSets permutation invariant model described by~\cite{deepsets2017}. For {\rm LAF}{}, we replace the original aggregation function (max) used in DeepSets with 10 {\rm LAF}{} units, while for PNA we use the concatenation of max, min, mean, and standard deviation, as proposed by the authors. For PNA we do not consider any scaler, as the cardinalities of the sets are fixed. \begin{table}[t]
\centering
\begin{small}
\begin{sc}
\begin{tabular}{lcc}
\toprule
Method & P100 & P1000 \\
\midrule
DeepSets & 82.0$\pm$2.0\% & \textbf{87.0$\pm$1.0}\% \\
PNA & 82.9$\pm$0.7\% & 86.4$\pm$0.6\% \\
LSTM & 78.7$\pm$1.1\% & 82.2$\pm$1.7\% \\
LAF & \textbf{84.0$\pm$0.6}\% & \textbf{87.0$\pm$0.5}\% \\
\bottomrule
\end{tabular}
\end{sc}
\end{small}
\caption{Results on the Point Clouds classification task. Accuracies with standard deviations (over 5 runs) for the ModelNet40 dataset.}
\label{tab:point:cloud}
\end{table}
\begin{table*}[bp!]
\centering
\begin{small}
\begin{sc}
\setlength{\tabcolsep}{3.5pt}
\begin{tabular}{l|ccccc|ccccc|ccccc}
\toprule
\multirow{3}[1]{*}{Method} & \multicolumn{5}{c|}{LDA-1$k$ (Vocab = $17k$)}
& \multicolumn{5}{c|}{LDA-3$k$ (Vocab = $38k$)} & \multicolumn{5}{c}{LDA-5$k$ (Vocab = $61k$)} \\
& \multicolumn{3}{c}{\textbf{Recall(\%)}} & \multirow{2}[1]{*}{\textbf{MRR}} & \multirow{2}[1]{*}{\textbf{Med.}}
& \multicolumn{3}{c}{\textbf{Recall(\%)}} & \multirow{2}[1]{*}{\textbf{MRR}} & \multirow{2}[1]{*}{\textbf{Med.}}
& \multicolumn{3}{c}{\textbf{Recall(\%)}} & \multirow{2}[1]{*}{\textbf{MRR}} & \multirow{2}[1]{*}{\textbf{Med.}}\\
& \textbf{@10} & \textbf{@100} & \textbf{@1k} & &
& \textbf{@10} & \textbf{@100} & \textbf{@1k} & &
& \textbf{@10} & \textbf{@100} & \textbf{@1k} & & \\
\midrule
Random & 0.06 & 0.6 & 5.9 & 0.001 & 8520 & 0.02 & 0.2 & 2.6 & 0.000 & 28635 & 0.01 & 0.2 & 1.6 & 0.000 & 30600 \\
Bayes Set & 1.69 & 11.9 & 37.2 & 0.007 & 2848 & 2.01 & 14.5 & 36.5 & 0.008 & 3234 & 1.75 & 12.5 & 34.5 & 0.007 & 3590 \\
w2v Near & 6.00 & \textbf{28.1} & 54.7 & 0.021 & 641 & 4.80 & 21.2 & 43.2 & 0.016 & 2054 & 4.03 & 16.7 & 35.2 & 0.013 & 6900 \\
NN-max & 4.78 & 22.5 & 53.1 & 0.023 & 779 & 5.30 & 24.9 & 54.8 & 0.025 & 672 & 4.72 & 21.4 & 47.0 & 0.022 & 1320 \\
NN-sum-con & 4.58 & 19.8 & 48.5 & 0.021 & 1110 & 5.81 & 27.2 & 60.0 & 0.027 & 453 & 4.87 & 23.5 & 53.9 & 0.022 & 731 \\
NN-max-con & 3.36 & 16.9 & 46.6 & 0.018 & 1250 & 5.61 & 25.7 & 57.5 & 0.026 & 570 & 4.72 & 22.0 & 51.8 & 0.022 & 877 \\
DeepSets & 5.53 & 24.2 & 54.3 & 0.025 & 696 & 6.04 & 28.5 & 60.7 & 0.027 & 426 & 5.54 & 26.1 & 55.5 & 0.026 & 616 \\
\midrule
DeepSets$^*$ & 5.89 & 26.0 & \textbf{55.3} & 0.026 & \textbf{619} & 7.56 & 28.5 & \textbf{64.0} & 0.035 & 349 & 6.49 & 27.9 & \textbf{56.9} & 0.030 & 536 \\
PNA & 5.56 & 24.7 & 53.2 & 0.027 & 753 & 7.04 & 27.2 & 58.7 & 0.028 & 502 & 5.47 & 23.8 & 52.4 & 0.025 & 807 \\
LSTM & 4.29 & 21.5 & 52.6 & 0.022 & 690 & 5.56 & 25.7 & 58.8 & 0.026 & 830 & 4.87 & 23.8 & 55.0 & 0.022 & 672 \\
LAF & \textbf{6.51} & 26.6 & 54.5 & \textbf{0.030} & 650 & \textbf{8.14} & \textbf{32.3} & 62.8 & \textbf{0.037} & \textbf{339} & \textbf{6.71} & \textbf{28.3} & \textbf{56.9} & \textbf{0.031} & \textbf{523} \\
\bottomrule
\end{tabular}
\end{sc}
\end{small}
\caption{Results on Text Concept Set Retrieval on LDA-1k, LDA-3k, and LDA-5k. Bold values denote the best performance for each metric.}
\label{tab:set:expansion}
\end{table*} Results in Table~\ref{tab:point:cloud} show that {\rm LAF}{} produces an advantage in the lower resolution dataset (i.e., on P100), while it obtains comparable (and slightly more stable) performances in the higher resolution one (i.e., on P1000). These results suggest that having predefined aggregators is not necessarily an optimal choice in real world cases, and that the flexibility of {\rm LAF}{} in modeling diverse aggregation functions can boost performance and stability.
\subsection{Set Expansion} Following the experimental setup of DeepSets, we also considered the \textit{Set Expansion} task. In this task the aim is to augment a set of objects of the same class with other similar objects, as explained in~\cite{deepsets2017}. The model learns to predict a score for an object given a query set and decide whether to add the object to the existing set. Specifically, \cite{deepsets2017} consider the specific application of set expansion to text concept retrieval. The idea is to retrieve words that belong to a particular concept, giving as input set a set of words having the same concept. We employ the same model and hyper-parameters of the original publication, where we replace the sum-decomposition aggregation with LAF units for our methods and the min, max, mean, and standard deviation aggregators for PNA.
We trained our model on sets constructed from a vocabulary of different size, namely \textit{LDA-1K}, \textit{LDA-3K} and \textit{LDA-5K}. Table~\ref{tab:set:expansion} shows the results of {\rm LAF}{}, DeepSets and PNA on different evaluation metrics. We report the retrieval metrics recall@K, median rank and mean reciprocal rank. We also report the results on other methods the authors compared to in the original paper. More details on the other methods in the table can be found in the original publication. Briefly, \textit{Random} samples a word uniformly from the vocabulary; \textit{Bayes Set}~\cite{ghahramani2006bayesian}; \textit{w2v-Near} computes the nearest neighbors in the word2vec~\cite{mikolov2013distributed} space; \textit{NN-max} uses a similar architecture as our DeepSets but uses max pooling to compute the set feature, as opposed to sum pooling; \textit{NN-max-con} uses max pooling on set elements but concatenates this pooled representation with that of query for a final set feature; \textit{NN-sum-con} is similar to NN-max-con but uses sum pooling followed by concatenation with query representation. For the sake of fairness, we have rerun DeepSets using the current implementation from the authors (indicated as DeepSet$^{*}$ in Table~\ref{tab:set:expansion}), exhibiting better results than the ones reported in the original paper. Nonetheless, {\rm LAF}{} outperforms all other methods in most cases, especially on \textit{LDA-3K} and \textit{LDA-5K}.
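For reference, the retrieval metrics reported in Table~\ref{tab:set:expansion} can be computed from the (1-based) rank assigned to each held-out target word, as in the following sketch (our own, not taken from the original evaluation code).
\begin{verbatim}
import numpy as np

def retrieval_metrics(ranks, ks=(10, 100, 1000)):
    ranks = np.asarray(ranks, dtype=float)  # rank of the target word per query
    metrics = {f"recall@{k}": float(np.mean(ranks <= k)) for k in ks}
    metrics["MRR"] = float(np.mean(1.0 / ranks))
    metrics["median_rank"] = float(np.median(ranks))
    return metrics
\end{verbatim}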
\subsection{Multi-task Graph Properties} \cite{corso2020principal} defines a benchmark consisting of 6 classical graph theory tasks on artificially generated graphs from a wide range of popular graph types like Erdos-Renyi, Barabasi-Albert or star-shaped graphs. Three of the tasks are defined for nodes, while the other three for whole graphs. The node tasks are the single-source shortest-path lengths (N1), the eccentricity (N2) and the Laplacian features (N3). The graph tasks are graph connectivity (G1), diameter (G2), and the spectral radius (G3). We compare {\rm LAF}{} against PNA by simply replacing the original PNA aggregators and scalers with 100 {\rm LAF}{} units (see Equation $\ref{eq:foursigma}$). Table~\ref{tab:pna:benchmark} shows that, although these datasets were designed to highlight the features of the PNA architecture, which outperforms a wide range of alternative graph neural network approaches, {\rm LAF}{} produces competitive results, outperforming state-of-the-art GNN approaches like GIN~\cite{xu2018how}, GCN~\cite{DBLP:conf/iclr/KipfW17} and GAT~\cite{gat} and even improving over PNA on spectral radius prediction. PNA$^*$ is the variant of PNA without scalers, also proposed by \cite{corso2020principal}. \begin{table}[t!]
\setlength\tabcolsep{4pt}
\centering
\begin{small}
\begin{sc}
\begin{tabular}{lcccccc}
\toprule
Method & N1 & N2 & N3 & G1 & G2 & G3 \\
\midrule
Baseline & -1.87 & -1.50 & -1.60 & -0.62 & -1.30 & -1.41 \\
GIN & -2.00 & -1.90 & -1.60 & -1.61 & -2.17 & -2.66 \\
GCN & -2.16 & -1.89 & -1.60 & -1.69 & -2.14 & -2.79 \\
GAT & -2.34 & -2.09 & -1.60 & -2.44 & -2.40 & -2.70 \\
MPNN (max) & -2.33 & -2.26 & -2.37 & -1.82 & -2.69 & -3.52 \\
MPNN (sum) & -2.36 & -2.16 & -2.59 & -2.54 & -2.67 & -2.87 \\
PNA$^*$ & -2.54 & -2.42 & -2.94 & \textbf{-2.61} & -2.82 & -3.29 \\
PNA & \textbf{-2.89} & \textbf{-2.89} & \textbf{-3.77} & \textbf{-2.61} & \textbf{-3.04} & -3.57 \\
\midrule
LAF & -2.13 & -2.20 & -1.67 & -2.35 & -2.77 & \textbf{-3.63} \\
\bottomrule
\end{tabular}
\end{sc}
\end{small}
\caption{Results on the Multi-task graph properties prediction benchmark, expressed in $\log10$ of mean squared error.}
\label{tab:pna:benchmark}
\end{table}
\section{SetTransformer With LAF Aggregation} \label{sec:stlaf} In this section we investigate the combination of {\rm LAF}{} aggregation with attention mechanisms on sets as proposed in the SetTransformer framework~\cite{DBLP:conf/icml/LeeLKKCT19}. Briefly, SetTransformer consists of an \textit{encoder} and a \textit{decoder}. The encoder maps a set of input vectors into a set of feature vectors by leveraging attention blocks. The decoder employs a \textit{pooling multihead attention} (PMA) layer, which aggregates the set of feature vectors produced by the encoder. In the following experiment we replace PMA by a LAF layer. Inspired by one of the tasks described in \cite{DBLP:conf/icml/LeeLKKCT19}, we propose here to approximate the average of the unique numbers in a set of MNIST images. Solving the task requires to learn a cascade of two processing steps, one that detects unique elements in a set (which can be done by the transformer encoder, as shown in the experiment by~\cite{DBLP:conf/icml/LeeLKKCT19}) and one that aggregates the results by averaging (which LAF is supposed to do well). The set cardinalities are uniformly sampled from $\{2,3,4,5\}$ and each set label is calculated as the average of the unique digits contained in the set. We trained two SetTransformer models: one with PMA (ST-PMA) and the other with {\rm LAF}{} (ST-LAF). The full implementation details are reported in the supplementary material. Quantitative and qualitative results of the evaluation are shown in Figure~\ref{fig:st_pma_laf}, where we report the MAE for both methods\footnote{We run several experiments by changing the number of seeds $k$ of PMA. All of them exhibited the same behaviour. For this experiment we used $k=1$.}. ST-{\rm LAF}{} exhibits a nice improvement over ST-PMA for this particular task. Note that for ST-PMA only $25\%$ of the sets (red points in the scatter plot), corresponding to sets with maximum cardinality, approximates well the average, while for all other cardinalities (the remaining 75\% of the sets) ST-PMA predicts a constant value equal to the average label in the training set. ST-LAF instead clearly captures the distribution of the labels, generalizing better with respect to the set sizes. A similar behaviour was observed when learning to predict the sum rather than the average of the unique digits in a set (see supplementary material for the results).
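To make the task definition concrete, the label of a set can be computed from the ground-truth digits of its images as in this small sketch (our own illustration).
\begin{verbatim}
import numpy as np

def unique_average_label(digits):
    # digits: ground-truth classes of the MNIST images in one set, e.g. [3, 3, 7, 1]
    return float(np.mean(np.unique(digits)))

unique_average_label([3, 3, 7, 1])   # unique digits {1, 3, 7} -> 11/3
\end{verbatim}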
\begin{figure}
\caption{Distribution of the predicted values for ST-PMA and ST-LAF by set cardinalities. On the x-axis the true labels of the sets, on the y-axis the predicted ones. Different colors represent the sets' cardinalities $|\boldsymbol x|$.}
\label{fig:st_pma_laf}
\end{figure}
\section{Conclusions} The theoretical underpinnings for sum aggregation as a universal framework for defining set functions do not necessarily provide a template for practical solutions. Therefore we introduced a framework for learning aggregation functions that makes use of a parametric aggregator to effectively explore a rich space of possible aggregations. {\rm LAF}{} defines a new class of aggregation functions, which include as special cases widely used aggregators, and also has the ability to learn complex functions such as higher-order moments. We empirically showed the generalization ability of our method on synthetic settings as well as real-world datasets, providing comparisons with state-of-the-art sum-decomposition approaches and recently introduced techniques. The flexibility of our model is a crucial aspect for potential practical use in many deep learning architectures, due to its ability to be easily plugged into larger architectures, as shown in our experiments with the SetTransformer. The portability of {\rm LAF}{} opens a new range of possible applications for aggregation functions in machine learning methods, and future research in this direction can enhance the expressivity of many architectures and models that deal with unstructured data. \label{sec:conc}
\input{ijcai21.bbl}
\appendix
\section{Proof of Proposition 1}
Let ${\cal X}=\{x_0,x_1,\ldots\}$. For $i\geq 0$ let $r_i$ be a random number sampled uniformly from the interval $[0,1]$. Define $\phi(x_i):=r_i$. Let $\boldsymbol x=\{ a_i:x_i | i\in J\}, \boldsymbol x'=\{ a'_h:x_h | h\in J'\}$ be two finite multisets with elements from ${\cal X}$, where $J,J'$ are finite index sets, and $a_i,a_h'$ denote the multiplicity with which elements $x_i,x_h$ appear in $\boldsymbol x$, respectively $\boldsymbol x'$. Now assume that $\boldsymbol x\neq\boldsymbol x'$, but \begin{equation} \tag{A.1} \label{eq:proof1}
\sum_{i\in J} a_i \phi(x_i) = \sum_{h\in J'} a_h' \phi(x_h), \end{equation} i.e., \begin{equation} \tag{A.2} \label{eq:proof2}
\sum_{j\in J\cup J'} (a_j-a'_j)r_j=0, \end{equation} where now $a_j$, respectively $a_j'$ is defined as 0 if $j\in J'\setminus J$, respectively $j\in J\setminus J'$. Since $\boldsymbol x\neq\boldsymbol x'$, the left side of this equation is not identical zero. Without loss of generality, we may actually assume that all coefficients $a_j-a'_j$ are nonzero. The event that the randomly sampled values
$\{r_j| j\in J\cup J'\}$ satisfy the linear constraint (\ref{eq:proof2}) has probability zero. Since the set of pairs of finite multisets over ${\cal X}$ is countable, also the probability that there exists any pair $\boldsymbol x\neq\boldsymbol x'$ for which (\ref{eq:proof1}) holds is zero. Thus, with probability one, the mapping from multisets $\boldsymbol x$ to their sum-aggregation $\sum_{x\in\boldsymbol x}\phi(x)$ is injective. In particular, there exists a set of fixed values $r_0,r_1,\ldots$, such that the (deterministic) mapping $x_i\mapsto r_i$ has the desired properties. The existence of the ``decoding'' function $\rho$ is now guaranteed as in the proofs of~\cite{deepsets2017,wagstaff2019limitations}.
Clearly, due to the randomized construction, the theorem and its proof have limited implications in practice. This however, already is true for previous results along these lines, where at least for the decoding function $\rho$, not much more than pure existence could be demonstrated.
\section{Learning} \begin{figure*}
\caption{ Trend of the MAE obtained with an increasing number of LAF units for most of the functions reported in Table 1. The error distribution is obtained by performing 500 runs with different random parameter initializations. A linear layer is stacked on top of the LAF layer when more than 1 unit is used. The y axis is plotted in logarithmic scale.}
\label{fig:box_plots}
\end{figure*}
We study here the difficulty of solving the optimization problem when varying the number of LAF units, aiming to show that the use of multiple units helps finding a better solution. We formulate as learning tasks some of the target functions described in Table 1. Additionally, we inspect the parameters of the learned model. We construct a simple architecture similar to the aggregation layer presented in Section 4, in which the aggregation is performed using one or more LAF units and, in the case of multiple aggregators, their outputs are combined together using a linear layer.
We also discard any non-linear activation function prior to the aggregation because the input sets are composed of real numbers in the range $[0,1]$, with a maximum of 10 elements for each set. We consider 1,3,6,9,12,15,18 and 21 LAF units in this setting. For each function and for each number of units we performed 500 random restarts. The results are shown in Figure~\ref{fig:box_plots}, where we report the MAE distributions. Let's initially consider the cases when a single unit performs the aggregation. Note first that the functions listed in Table 1 can be parametrized in an infinite number of alternative ways.
For instance, consider the \textit{sum} function. A possible solution is obtained if $L_{a,b}$ learns the \textit{sum}, $L_{e,f}=1$ and $\alpha=\gamma$. If instead $L_{a,b}=\textit{sum}$ and $L_{e,f}=L_{g,h}=1$, it is sufficient that $\gamma+\delta=\alpha$ to still obtain the sum. This is indeed what we found when inspecting the best performing models among the various restarts, as shown in the following: $$ sum: \frac{1.75(\sum x^{1.00})^{1.00}\ + \ 0.00(\sum (1-x)^{0.00})^{0.56}}{0.91(\sum x^{0.24})^{0.00}\ + \ 0.84(\sum (1-x)^{0.36})^{0.00}} $$ $$ count: \frac{1.01(\sum x^{0.00})^{0.99}\ + \ 0.94(\sum (1-x)^{0.00})^{1.01}}{1.08(\sum x^{0.47})^{0.00}\ + \ 0.88(\sum (1-x)^{1.02})^{0.00}} $$ $$ mean: \frac{1.51(\sum x^{1.00})^{1.00}\ + \ 0.00(\sum (1-x)^{0.62})^{0.00}}{0.00(\sum x^{0.30})^{0.00}\ + \ 1.51(\sum (1-x)^{0.00})^{1.00}} $$ A detailed overview of the parameters' values learned using one {\rm LAF}{} unit is depicted in Table~\ref{tab:foursigma_params}. For each function in Figure~\ref{fig:box_plots}, we report the values of the random restart that obtained the lowest error.
The evaluation clearly shows that learning a function with just one LAF unit is not trivial. In some cases LAF was able to almost perfectly match the target function, but to be reasonably confident to learn a good representation many random restarts are needed, since the variance among different runs is quite large. The error variance reduces when more than one LAF unit is adopted, drastically dropping when six units are used in parallel, still maintaining a reasonable average error. Jointly learning multiple LAF units and combining their outputs can lead to two possible behaviours giving rise to an accurate approximation of the underlying function: in the first case, it is possible that one ``lucky'' unit learns a parametrization close to the target function, leaving the linear layer after the aggregation to learn to choose that unit or to rescale its output. In the second case the target function representation is ``distributed'' among the different units, here the linear layer is responsible to obtain the function by combining the LAF aggregation outputs. In the following we show another example of a learnt model, for a setting with three LAF units. Here the target function is the \textit{count}. $$ unit1: \frac{0.81(\sum x^{0.87})^{0.37}\ + \ 0.80(\sum (1-x)^{0.74})^{0.72}}{1.19(\sum x^{0.19})^{0.72}\ + \ 1.18(\sum (1-x)^{0.00})^{0.62}}\\ $$ $$ unit2: \frac{1.43(\sum x^{0.00})^{1.10}\ + \ 1.31(\sum (1-x)^{0.01})^{0.74}}{0.64(\sum x^{0.85})^{0.00}\ + \ 0.62(\sum (1-x)^{0.46})^{0.00}}\\ $$ $$ unit3: \frac{0.83(\sum x^{0.87})^{0.37}\ + \ 0.77(\sum (1-x)^{0.12})^{0.00}}{1.17(\sum x^{0.69})^{0.86}\ + \ 1.22(\sum (1-x)^{0.00})^{0.16}}\\ $$
\begin{align*}
&linear:& 0.02 + (-0.13*unit1)+\\
&&+(0.50*unit2)+(-0.07*unit3) \end{align*}
In this case, the second unit learns a function that counts twice the elements of the set. The output of this unit is then halved by the linear layer, which gives very little weights to the outputs of the other units.
\begin{table*}[h!] \begin{center} \begin{small} \begin{sc}
\begin{tabular}{l|cc|cc|cc|cc|cccc} \toprule Name & $a$ &$ b$ & $c$ &$ d$ & $e$ & $f$ & $g$ &$ h$ & $\alpha$ & $\beta$ & $\gamma$ & $\delta$ \\ \midrule max & 0.28 & 4.74 & 0.00 & 0.57 & 0.33 & 1.74 & 0.00 & 0.48 & 1.68 & 0.00 & 0.90 & 0.75 \\ min & 0.28 & 0.28 & 0.27 & 1.13 & 0.30 & 0.35 & 0.87 & 3.69 & 0.51 & 0.00 & 0.45 & 1.91 \\ sum & 1.00 & 1.00 & 0.56 & 0.00 & 0.00 & 0.24 & 0.00 & 0.36 & 1.75 & 0.00 & 0.91 & 0.84 \\ count & 0.99 & 0.00 & 1.01 & 0.00 & 0.00 & 0.47 & 0.00 & 1.02 & 1.01 & 0.94 & 1.08 & 0.88 \\ mean & 1.00 & 1.00 & 0.00 & 0.62 & 0.00 & 0.30 & 1.00 & 0.00 & 1.51 & 0.00 & 0.00 & 1.51 \\ $k$th moment & 1.00 & 2.00 & 0.00 & 0.13 & 1.00 & 0.00 & 1.00 & 0.00 & 1.67 & 0.00 & 0.83 & 0.84 \\ $l$th power of $k$th moment & 2.87 & 2.15 & 0.00 & 0.91 & 2.94 & 0.00 & 1.71 & 0.00 & 1.65 & 0.01 & 1.44 & 0.24 \\ min/max & 0.06 & 0.00 & 1.52 & 2.36 & 0.18 & 4.40 & 0.64 & 7.25 & 0.23 & 0.10 & 0.27 & 2.26 \\ \bottomrule \end{tabular} \end{sc} \end{small} \end{center} \tabtag{B.1} \caption{Parameters' values learned with one LAF unit.} \label{tab:foursigma_params} \end{table*}
\section{Details of Section~4.1 - Experiments on Scalars}
We used mini-batches of 64 sets and trained the models for 100 epochs. We use Adam as parameter optimizer, setting the initial learning rate to $1e^{-3}$ and apply adaptive decay based on the validation loss.
Each element in the dataset is a set of scalars $\boldsymbol{x} = \{x_1,\ldots, x_N \}$, $x_i \in \mathbb{R}$.
Network architecture: \begin{align*}
\boldsymbol{x} & \rightarrow \textsc{Embedding(10,10)} \rightarrow \textsc{Sigmoid}\\ & \rightarrow \textsc{LAF(9)} \rightarrow \textsc{Dense(10 $\times$ 9, 1)} \end{align*}
\section{Details of Section~4.2 - MNIST Digits} In this section, we modify the experimental setting in Section~4.1 for the integer scalars to process MNIST images of digits. The dataset is the same as in the experiment on scalars, but integers are replaced by randomly sampling MNIST images for the same digits. Instances for the training and test sets are drawn from the MNIST training and test sets, respectively. We used mini-batches of 64 sets and trained the models for 100 epochs. We use Adam as parameter optimizer, setting the initial learning rate to $1e^{-3}$ and apply adaptive decay based on the validation loss. Each element in the dataset is a set of vectors $\boldsymbol{x} = \{x_1,\ldots, x_N \}$, $x_i \in \mathbb{R}^{784}$. Network architecture: \begin{align*}
\boldsymbol{x} & \rightarrow \textsc{Dense(784,300)} \rightarrow \textsc{Tanh}\\ & \rightarrow \textsc{Dense(300,100)} \rightarrow \textsc{Tanh}\\ & \rightarrow \textsc{Dense(100,30)} \rightarrow \textsc{Sigmoid}\\ & \rightarrow \textsc{LAF(9)} \rightarrow \textsc{Dense(30 $\times$ 9, 1000)}\rightarrow \textsc{Tanh}\\ & \rightarrow \textsc{Dense(1000,100)} \rightarrow \textsc{Tanh} \rightarrow \textsc{Dense(100,1)} \end{align*}
\begin{figure*}
\caption{Test performances for the synthetic experiment on MNIST digits on increasing test set size. The x axis of the figures represents the maximum test set cardinality, whereas the y axis depicts the MAE. The dot, star, diamond and triangle symbols denote LAF, DeepSets, PNA and LSTM respectively.}
\label{fig:int_all_big}
\end{figure*} \begin{figure*}
\caption{Scatter plots of the MNIST experiment comparing true (x axis) and predicted (y axis) values with 50 as maximum test set size. The target aggregations are \textit{max} (up-left), \textit{inverse count} (up-right), \textit{median} (bottom-left) and \textit{kurtosis} (bottom-right).}
\label{fig:scatter}
\end{figure*} Figure~\ref{fig:int_all_big} shows the comparison of LAF, DeepSets, PNA, and LSTM in this setting. Results are quite similar to those achieved in the scalar setting, indicating that LAF is capable of effectively backpropagating information so as to drive the learning of an appropriate latent representation, while DeepSets, PNA, and LSTM suffer from the same problems seen in aggregating scalars.
Furthermore, Figure~\ref{fig:scatter} provides a qualitative evaluation of the predictions of the LAF, DeepSets, and PNA methods on a representative subset of the target aggregators. The images illustrate the correlation between the true labels and the predictions. LAF predictions are distributed over the diagonal line, with no clear bias. On the other hand, DeepSets and PNA perform generally worse than LAF, exhibiting higher variances. In particular, for inverse count and kurtosis, DeepSets and PNA predictions are condensed in a specific area, suggesting an overfitting on the training set.
\section{Details of Section~5 - SetTransformer with LAF Aggregation} \begin{figure}
\caption{Distribution of the predicted values for ST-PMA and ST-LAF by set cardinalities. On the x-axis the true labels of the sets, on the y-axis the predicted ones. Different colors represent the sets' cardinalities $|\boldsymbol x|$.}
\label{fig:st_pma_laf2}
\end{figure} We used mini-batches of 64 sets and trained the models for 1,000 epochs. We use Adam as parameter optimizer, setting the initial learning rate to $5e^{-4}$. Each element in the dataset is a set of vectors $\boldsymbol{x} = \{x_1,\ldots, x_N \}$, $x_i \in \mathbb{R}^{784}$. Network architecture: \begin{align*}
\boldsymbol{x} & \rightarrow \textsc{Dense(784,300)} \rightarrow \textsc{ReLU}\\ & \rightarrow \textsc{Dense(300,100)} \rightarrow \textsc{ReLU}\\ & \rightarrow \textsc{Dense(100,30)} \rightarrow \textsc{Sigmoid}\\ & \rightarrow \textsc{SAB(64,4)} \rightarrow \textsc{SAB(64,4)}\\ & \rightarrow \textsc{PMA$_k$(64,4)} \ \textsc{or} \ \textsc{LAF(10)} \\ & \rightarrow \textsc{Dense(64 $\times$ $k$ or 9, 100)}\rightarrow \textsc{ReLU}\\ & \rightarrow \textsc{Dense(100,1)} \end{align*} Please refer to \cite{DBLP:conf/icml/LeeLKKCT19} for the SAB and PMA details. Figure~\ref{fig:st_pma_laf2} shows the comparison of ST-PMA and ST-LAF for the sum of unique MNIST digits.
\end{document} | arXiv |
Weakly symmetric space
In mathematics, a weakly symmetric space is a notion introduced by the Norwegian mathematician Atle Selberg in the 1950s as a generalisation of symmetric space, due to Élie Cartan. Geometrically the spaces are defined as complete Riemannian manifolds such that any two points can be exchanged by an isometry, the symmetric case being when the isometry is required to have period two. The classification of weakly symmetric spaces relies on that of periodic automorphisms of complex semisimple Lie algebras. They provide examples of Gelfand pairs, although the corresponding theory of spherical functions in harmonic analysis, known for symmetric spaces, has not yet been developed.
References
• Akhiezer, D. N.; Vinberg, E. B. (1999), "Weakly symmetric spaces and spherical varieties", Transf. Groups, 4: 3–24, doi:10.1007/BF01236659, S2CID 124032062
• Helgason, Sigurdur (1978), Differential geometry, Lie groups and symmetric spaces, Academic Press, ISBN 0-12-338460-5
• Kac, V. G. (1990), Infinite dimensional Lie algebras (3rd ed.), Cambridge University Press, ISBN 0-521-46693-8
• Kobayashi, Toshiyuki (2002). "Branching problems of unitary representations". Proceedings of the International Congress of Mathematicians, Vol. II. Beijing: Higher Ed. Press. pp. 615–627.
• Kobayashi, Toshiyuki (2004), "Geometry of multiplicity-free representations of GL(n), visible actions on flag varieties, and triunity", Acta Appl. Math., 81: 129–146, doi:10.1023/B:ACAP.0000024198.46928.0c, S2CID 14530010
• Kobayashi, Toshiyuki (2007), "A generalized Cartan decomposition for the double coset space (U(n1)×U(n2)×U(n3))\U(n)/(U(p)×U(q))", J. Math. Soc. Jpn., 59: 669–691
• Krämer, Manfred (1979), "Sphärische Untergruppen in kompakten zusammenhängenden Liegruppen", Compositio Mathematica (in German), 38: 129–153
• Matsuki, Toshihiko (1991), "Orbits on flag manifolds", Proceedings of the International Congress of Mathematicians, Vol. II, 1990 Kyoto, Math. Soc. Japan, pp. 807–813
• Matsuki, Toshihiko (2013), "An example of orthogonal triple flag variety of finite type", J. Algebra, 375: 148–187, doi:10.1016/j.jalgebra.2012.11.012, S2CID 119132477
• Mikityuk, I. V. (1987), "On the integrability of invariant Hamiltonian systems with homogeneous configuration spaces", Math. USSR Sbornik, 57 (2): 527–546, Bibcode:1987SbMat..57..527M, doi:10.1070/SM1987v057n02ABEH003084
• Selberg, A. (1956), "Harmonic analysis and discontinuous groups in weakly symmetric riemannian spaces, with applications to Dirichlet series", J. Indian Math. Society, 20: 47–87
• Stembridge, J. R. (2001), "Multiplicity-free products of Schur functions", Annals of Combinatorics, 5 (2): 113–121, doi:10.1007/s00026-001-8008-6, hdl:2027.42/41839, S2CID 18105235
• Stembridge, J. R. (2003), "Multiplicity-free products and restrictions of Weyl characters", Representation Theory, 7 (18): 404–439, doi:10.1090/S1088-4165-03-00150-X
• Vinberg, É. B. (2001), "Commutative homogeneous spaces and co-isotropic symplectic actions", Russian Math. Surveys, 56 (1): 1–60, Bibcode:2001RuMaS..56....1V, doi:10.1070/RM2001v056n01ABEH000356, S2CID 250919435
• Wolf, J. A.; Gray, A. (1968), "Homogeneous spaces defined by Lie group automorphisms. I, II", Journal of Differential Geometry, 2: 77–114, 115–159
• Wolf, J. A. (2007), Harmonic Analysis on Commutative Spaces, American Mathematical Society, ISBN 978-0-8218-4289-8
• Ziller, Wolfgang (1996), "Weakly symmetric spaces", Topics in geometry, Progr. Nonlinear Differential Equations Appl., vol. 20, Boston: Birkhäuser, pp. 355–368
| Wikipedia |
For which values of $\alpha \in \mathbb R$ is the following system of linear equations solvable?
The problem I was given: Calculate the value of the following determinant:
$\left| \begin{array}{ccc} \alpha & 1 & \alpha^2 & -\alpha\\ 1 & \alpha & 1 & 1\\ 1 & \alpha^2 & 2\alpha & 2\alpha\\ 1 & 1 & \alpha^2 & -\alpha \end{array} \right|$
$\begin{array}{lcl} \alpha x_1 & + & x_2 & + & \alpha^2 x_3 & = & -\alpha\\ x_1 & + & \alpha x_2 & + & x_3 & = & 1\\ x_1 & + & \alpha^2 x_2 & + & 2\alpha x_3 & = & 2\alpha\\ x_1 & + & x_2 & + & \alpha^2 x_3 & = & -\alpha\\ \end{array}$
I got as far as finding the determinant, and then I got stuck.
So I solve the determinant like this:
$\left| \begin{array}{ccc} \alpha & 1 & \alpha^2 & -\alpha\\ 1 & \alpha & 1 & 1\\ 1 & \alpha^2 & 2\alpha & 2\alpha\\ 1 & 1 & \alpha^2 & -\alpha \end{array} \right|$ = $\left| \begin{array}{ccc} \alpha - 1 & 0 & 0 & 0\\ 1 & \alpha & 1 & 1\\ 1 & \alpha^2 & 2\alpha & 2\alpha\\ 1 & 1 & \alpha^2 & -\alpha \end{array} \right|$ = $(\alpha - 1)\left| \begin{array}{ccc} \alpha & 1 & 1\\ \alpha^2 & 2\alpha & 2\alpha \\ 1 & \alpha^2 & -\alpha \end{array} \right|$ =
$(\alpha - 1)\left| \begin{array}{ccc} \alpha & 1 & 0\\ \alpha^2 & 2\alpha & 0 \\ 1 & \alpha^2 & -\alpha - \alpha^2 \end{array} \right|$ = $-\alpha^3(\alpha - 1) (1 + \alpha)$
However, now I haven't got a clue on solving the system of linear equations... It's got to do with the fact that the equations look like the determinant I calculated before, but I don't know how to connect those two.
Thanks in advance for any help. (:
linear-algebra matrices
Cameron Buie
Jeroen
$\begingroup$ If you want to use your determinant calculation, first note that if the determinant is not zero, there cannot be a solution. (Otherwise, the fourth column would be a linear combination of the first three.) That leaves only three possibilities for $\alpha$, and you can analyze those three systems of equations directly without much difficulty. $\endgroup$ – Michael Joyce Nov 4 '12 at 23:52
$\begingroup$ I added the (matrices) tag since this involves determinants, and solutions to systems of equations. $\endgroup$ – Cameron Buie Nov 5 '12 at 16:00
Let me first illustrate an alternate approach. You're looking at $$\left[\begin{array}{ccc} \alpha & 1 & \alpha^2\\ 1 & \alpha & 1\\ 1 & \alpha^2 & 2\alpha\\ 1 & 1 & \alpha^2 \end{array}\right]\left[\begin{array}{c} x_1\\ x_2\\ x_3\end{array}\right]=\left[\begin{array}{c} -\alpha\\ 1\\ 2\alpha\\ -\alpha\end{array}\right].$$ We can use row reduction on the augmented matrix $$\left[\begin{array}{ccc|c} \alpha & 1 & \alpha^2 & -\alpha\\ 1 & \alpha & 1 & 1\\ 1 & \alpha^2 & 2\alpha & 2\alpha\\ 1 & 1 & \alpha^2 & -\alpha \end{array}\right].$$ In particular, for the system to be solvable, it is necessary and sufficient that none of the rows in the reduced matrix is all $0$'s except for in the last column. Subtract the bottom row from the other rows, yielding $$\left[\begin{array}{ccc|c} \alpha-1 & 0 & 0 & 0\\ 0 & \alpha-1 & 1-\alpha^2 & 1+\alpha\\ 0 & \alpha^2-1 & 2\alpha-\alpha^2 & 3\alpha\\ 1 & 1 & \alpha^2 & -\alpha \end{array}\right].$$
It's clear then that if $\alpha=1$, the second row has all $0$s except in the last column, so $\alpha=1$ doesn't give us a solvable system. Suppose that $\alpha\neq 1$, multiply the top row by $\frac1{\alpha-1}$, and subtract the new top row from the bottom row, giving us $$\left[\begin{array}{ccc|c} 1 & 0 & 0 & 0\\ 0 & \alpha-1 & 1-\alpha^2 & 1+\alpha\\ 0 & \alpha^2-1 & 2\alpha-\alpha^2 & 3\alpha\\ 0 & 1 & \alpha^2 & -\alpha \end{array}\right].$$
Swap the second and fourth rows and add the new second row to the last two rows, giving us $$\left[\begin{array}{ccc|c} 1 & 0 & 0 & 0\\ 0 & 1 & \alpha^2 & -\alpha\\ 0 & \alpha^2 & 2\alpha & 2\alpha\\ 0 & \alpha & 1 & 1 \end{array}\right],$$ whence subtracting $\alpha$ times the fourth row from the third row gives us $$\left[\begin{array}{ccc|c} 1 & 0 & 0 & 0\\ 0 & 1 & \alpha^2 & -\alpha\\ 0 & 0 & \alpha & \alpha\\ 0 & \alpha & 1 & 1 \end{array}\right].$$
Note that $\alpha=0$ readily gives us the solution $x_1=x_2=0$, $x_3=1$. Assume that $\alpha\neq 0,$ multiply the third row by $\frac1\alpha$, subtract the new third row from the fourth row, and subtract $\alpha^2$ times the new third row from the second row, yielding $$\left[\begin{array}{ccc|c} 1 & 0 & 0 & 0\\ 0 & 1 & 0 & -\alpha^2-\alpha\\ 0 & 0 & 1 & 1\\ 0 & \alpha & 0 & 0 \end{array}\right],$$ whence subtracting $\alpha$ times the second row from the fourth row yields $$\left[\begin{array}{ccc|c} 1 & 0 & 0 & 0\\ 0 & 1 & 0 & -\alpha^2-\alpha\\ 0 & 0 & 1 & 1\\ 0 & 0 & 0 & \alpha^3+\alpha^2 \end{array}\right].$$ The bottom right entry has to be $0$, so since $\alpha\neq 0$ by assumption, we need $\alpha=-1$, giving us $$\left[\begin{array}{ccc|c} 1 & 0 & 0 & 0\\ 0 & 1 & 0 & 0\\ 0 & 0 & 1 & 1\\ 0 & 0 & 0 & 0 \end{array}\right].$$
Hence, the two values of $\alpha$ that give the system a solution are $\alpha=0$ and $\alpha=-1$, and in both cases, the system has solution $x_1=x_2=0$, $x_3=1$. (I think all my calculations are correct, but I'd recommend double-checking them.)
The major upside of the determinant approach is that it saves you time and effort, since you've already calculated it. If we assume that $\alpha$ is a constant that gives us a solution, then since we're dealing with $4$ equations in only $3$ variables, we have to have at least one of the rows in the reduced echelon form of the augmented matrix be all $0$s--we simply don't have enough degrees of freedom otherwise. The determinant of the reduced matrix will then be $0$, and since we obtain it by invertible row operations on the original matrix, then the determinant of the original matrix must also be $0$.
By your previous work, then, $-\alpha^3(\alpha-1)(1+\alpha)=0$, so the only possible values of $\alpha$ that can give us a solvable system are $\alpha=0$, $\alpha=-1$, and $\alpha=1$. We simply check the system in each case to see if it actually is solvable. If $\alpha=0$, we readily get $x_1=x_2=0$, $x_3=1$ as the unique solution; similarly for $\alpha=-1$. However, if we put $\alpha=1$, then the second equation becomes $$x_1+x_2+x_3=1,$$ but the fourth equation becomes $$x_1+x_2+x_3=-1,$$ so $\alpha=1$ does not give us a solvable system.
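If you want to double-check the arithmetic by machine, here's a small SymPy sketch (the symbol names and the helper function are my own, just for illustration) that tests each candidate value of $\alpha$:

    import sympy as sp

    x1, x2, x3 = sp.symbols('x1 x2 x3')

    def augmented(alpha):
        # Augmented matrix [A | b] of the system for a given value of alpha.
        return sp.Matrix([
            [alpha, 1,        alpha**2, -alpha],
            [1,     alpha,    1,         1],
            [1,     alpha**2, 2*alpha,   2*alpha],
            [1,     1,        alpha**2, -alpha],
        ])

    for val in (0, -1, 1):
        M = augmented(val)
        A, b = M[:, :3], M[:, 3]
        # Solvable iff rank(A) == rank([A | b]); linsolve returns the solution set.
        print(val, A.rank() == M.rank(), sp.linsolve((A, b), x1, x2, x3))

It should report that the system is solvable, with solution $(0,0,1)$, exactly for $\alpha=0$ and $\alpha=-1$.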
Cameron Buie
Selected articles from the 5th Translational Bioinformatics Conference (TBC 2015): medical informatics and decision making
CLASH: Complementary Linkage with Anchoring and Scoring for Heterogeneous biomolecular and clinical data
Yonghyun Nam 1, Myungjun Kim 1, Kyungwon Lee 2 & Hyunjung Shin 1
Disease-disease associations have been increasingly viewed and analyzed as a network, in which the connections between diseases are configured using source information on interactome maps of biomolecules such as genes, proteins, and metabolites. Although abundant source information leads to tighter connections between diseases in the network, for certain groups of diseases, such as metabolic diseases, few connections arise because the source information is insufficient; a large proportion of their associated genes are still unknown. One way to circumvent the lack of source information is to integrate available external information using one of the up-to-date integration or fusion methods. However, if one wants a disease network that places strong emphasis on the original source of data and uses external sources only to complement it, integration may not be pertinent: interpretation of the integrated network would be ambiguous, since the meanings conferred on edges become vague when information is fused.
In this study, we propose a network-based algorithm that complements the original network by utilizing external information while preserving the network's originality. The proposed algorithm links disconnected nodes to the disease network by using complementary information from an external data source through four steps: anchoring, scoring, connecting, and stopping.
When applied to the network of metabolic diseases sourced from protein-protein interaction data, the proposed algorithm recovered 97% of the connections and improved the AUC performance up to 0.71 (lifted from 0.55) by using external information obtained from text mining of PubMed comorbidity literature. Experimental results also show that the proposed algorithm is robust to noisy external information.
The novelty of this research is that the proposed algorithm preserves the network's originality while, at the same time, complementing it with external information. Furthermore, it can be utilized for recovering original associations and discovering novel associations in a disease network.
The amount of information on disease-disease associations has been ever increasing over the last decade, and the sources of information have also been diversified from multiple levels of genomic data to clinical data, such as copy number alterations at the genomic level, miRNA expression or DNA methylation at the epigenomic level, protein-protein interactions at the proteomic level, and disease comorbidity at the clinical level [1–4].
One of the most effective ways to describe disease-disease associations is to construct a disease network, which consists of nodes and edges representing diseases and disease-disease relations, respectively [5, 6]. In a disease network, the notion of disease-disease association (i.e., the edges) varies depending on the source of information that the network utilizes. Many studies have been conducted using various sources of data. Goh et al. [7] created a disease network based on gene-disease associations by connecting diseases that are associated with the same genes. This was further developed by Zhou et al. [8], who constructed a disease network using disease-gene information together with disease-symptom information. Lee et al. [9] constructed a network in which two diseases are linked if mutated enzymes associated with them catalyze adjacent metabolic reactions. While these studies are based on genomic data, there are also studies that utilize clinical data from patient records. Hidalgo et al. [10] constructed a disease network that reflects the co-occurrence of diseases by utilizing clinical records of 13,039,018 patients; the prevalence of two diseases co-occurring in a patient defines the edges. Žitnik et al. [11], on the other hand, used both genomic and clinical data: the authors integrated data on disease-gene associations, disease ontology, drugs and genes so that they could utilize such information to deduce disease-disease associations. So far, most of these studies utilize only a single source of data to find disease-disease associations. When diverse and heterogeneous sources of data are available, there have also been network-wise approaches that integrate multiple disease networks for inferring associations between diseases [3, 12–15].
However, if one wants a disease network that places strong emphasis on a particular source of data and uses other sources only to complement the original source, which of the approaches above can be applied? For example, for drug discovery or drug repositioning using a disease network, a network constructed with protein information would be preferred [16, 17]. On the other hand, physicians treating a patient would prefer a disease network constructed with comorbidity information based on the prevalence of diseases. What if, however, there are losses or deficiencies of information in the original source? In such a case, disease-disease associations cannot be defined, resulting in a disconnected network. See Fig. 1(a). If an external source of data is usable, we could integrate the original network and the external network in a network-wise fashion by using one of the up-to-date integration methods [3, 12, 14, 18]. But interpretation of the results would be ambiguous: the meanings conferred on edges would be vague in the resulting disease network.
Proposed Method: a original network with disconnected nodes, and b complemented network that links the disconnected nodes to the connected network through newly found edges using external information
This motivates the present research. In this paper, we propose an algorithm that preserves the network's originality while, at the same time, complementing it by utilizing external information. We denote the proposed algorithm as CLASH, which abbreviates complementary linkage with anchoring and scoring for heterogeneous data. An original disease network is constructed from PPI information as in Goh et al. [7] and Zhou et al. [8]. CLASH is then applied to the network in order to link disconnected nodes to the network through newly found edges using external information. In the complementing process, clinical comorbidity information is used as the external source of information. The resulting network is called the complemented disease network. See Fig. 1(b).
The remainder of the paper is organized as follows. Section 2 introduces CLASH at length. Section 3 provides experimental results on the validity and utility of CLASH by applying it to the metabolic disease group. Section 4 presents the conclusion.
Complementary linkage with anchoring and scoring for heterogeneous data
A disease network is a graph, G = (V, W), that describes connections between diseases with nodes and edges. In a disease network, a node denotes a disease and an edge denotes a disease-disease association. Here, a disease-disease association is a value obtained by calculating the similarity between two diseases based on their shared genes (or proteins) and on co-occurrence information from clinical trials. On the graph, the similarity between two diseases is assigned as a weight on the edge, and a higher value implies a stronger association between the two diseases. In our study, the disease network is constructed using shared proteins: each disease is represented by an n-dimensional protein vector, and the similarity between two diseases is calculated as the cosine similarity between their vectors. If every disease is connected by at least one edge, the disease network becomes a connected graph. On the other hand, if a disease is left disconnected from the network due to the lack of disease-disease associations with other diseases, it becomes impossible to draw any inference about that disease from the network.
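For illustration only (the disease-protein incidence below is made up, not the PharmDB data), the edge weights of such a network can be computed from binary disease-protein vectors as follows:

import numpy as np

# Toy disease-by-protein incidence matrix: a 1 means the disease is
# associated with that protein (rows: diseases, columns: proteins).
D = np.array([
    [1, 1, 0, 1, 0],
    [1, 0, 0, 1, 1],
    [0, 0, 1, 0, 1],
], dtype=float)

def cosine_similarity_matrix(D):
    # W[i, j] = cosine similarity between the protein vectors of diseases i and j.
    unit = D / np.linalg.norm(D, axis=1, keepdims=True)
    W = unit @ unit.T
    np.fill_diagonal(W, 0.0)   # no self-loops in the disease network
    return W

print(np.round(cosine_similarity_matrix(D), 3))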
To circumvent this difficulty, we propose CLASH, an algorithm for linking disconnected nodes to the disease network by using complementary information from an external data source. The method is composed of four steps: anchoring, scoring, connecting, and stopping. Figure 2 presents each step, beginning with a graph of eight nodes of which five are connected and three are disconnected.
Schematic description of CLASH Algorithm
At the anchoring step, disconnected nodes are initially connected to the network (i.e., disconnected nodes drop their anchor onto the connected graph). During this process, a disconnected node selects the nodes on which to drop its anchor by utilizing the available external data source. Here, the external data source is information that is unsuitable or less preferred for the purpose or usage of the proposed network. Thus, it is information that is not mainly used for constructing the network, but that can be utilized to supplement it. Figure 2(b) describes the anchoring step of a disconnected node, v6, onto the connected graph of five nodes. Based on the external data source, the fact that v6 is related to {v1, v2, v3} allows us to initially connect v6 to these associated nodes. These associated nodes are defined as candidate nodes.
The scoring step allows a disconnected node to select connectable nodes from among the anchored nodes through scoring. In this paper, we utilize the Semi-Supervised Learning (SSL) algorithm. Given a connected graph, SSL computes an f-score for each node; see the Appendix. In the present study, the label of the disconnected node is set to '1', and to '0' for the others. The f-score increases with stronger connectivity of the associated edges and with the number of edges [19–21]. In addition, a higher f-score implies a higher similarity to the labeled node. Figure 2(c) shows the result of the scoring step of a disconnected node, v6, on the candidate nodes v1, v2, v3. The f-scores for the given nodes are {0.9, 0.8, 0.6}, respectively.
At the connecting step, a disconnected node connects to the graph based on the scoring results. The order of connection is determined by the f-scores of the candidate nodes: a higher f-score means a higher priority in the order of connection. (If the f-scores of candidate nodes are the same, they are connected to the graph with the same priority.) Newly formed edges can cause disturbances (sometimes severe disturbances) in the network. Because severe disturbances could cause the original network to lose its properties, there needs to be a criterion that determines whether a connection should be made. In this research, we provide such a criterion based on the principle of preserving the network's properties while utilizing the external data source. The preservation of the network's properties can be measured through the performance of the network whenever a new edge is formed between a disconnected node and a candidate node. The performance of the network is measured on validation nodes, which exclude the disconnected nodes and the candidate nodes. Under the condition that the network's performance stays within a certain range (denoted by ϵ in (2) in Fig. 3), we allow additional edges to be formed. If the change in network performance after a connection is trivial, it implies that the newly connected node does not incur unexpected perturbation in the original network, thus preserving the original properties of the network. Figure 2(d) shows the candidate node v1 connecting to v6 first, due to its higher f-score compared with the other candidate nodes {v2, v3}. At this point, the validation nodes are {v4, v5}. The connection is finalized since the pre-/post-connection change in performance is within the allowed range. After the first connection, we proceed to another candidate node, v2, which has the second largest f-score. Figure 2(e) shows the disconnected node v6 making final connections to two of the candidate nodes, {v1, v2}, out of the three candidate nodes that had been anchored.
Pseudo Code of CLASH Algorithm
The proposed algorithm stops when there are no more disconnected nodes or no more external data, or when the performance of the network decreases. Figure 2(f) shows a network in which all the disconnected nodes, v6, v7, v8, have been connected through the previous steps.
The pseudo-code for the proposed algorithm is presented in Fig. 3.
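To make the flow of Fig. 3 concrete, here is a schematic Python sketch of the CLASH loop. It is not the authors' implementation: the toy graph, the provisional anchor weight, and the mean-edge-weight stand-in for the validation performance are all assumptions made purely for illustration; only the overall anchor-score-connect-stop structure and the closed-form SSL scoring from the Appendix are taken from the paper.

import numpy as np

def ssl_scores(W, y, mu=1.0):
    # Closed-form SSL scores f = (I + mu * L)^(-1) y with graph Laplacian L = D - W.
    D = np.diag(W.sum(axis=1))
    L = D - W
    return np.linalg.solve(np.eye(len(W)) + mu * L, y)

def network_performance(W):
    # Placeholder for the AUC-based validation measure used in the paper;
    # the mean positive edge weight is used here only so the sketch runs end to end.
    return W[W > 0].mean() if (W > 0).any() else 0.0

def clash(W, disconnected, candidates_of, anchor_w=0.5, eps=0.05):
    W = W.copy()
    for v in disconnected:
        candidates = candidates_of.get(v, [])          # anchoring: from external evidence
        if not candidates:
            continue
        W_anchored = W.copy()                          # provisional anchor edges
        for u in candidates:
            W_anchored[v, u] = W_anchored[u, v] = anchor_w
        y = np.zeros(len(W))
        y[v] = 1.0
        f = ssl_scores(W_anchored, y)                  # scoring: label v with 1
        # connecting: try candidates from highest to lowest f-score and keep an edge
        # only if the validation measure stays within eps of its value before the edge
        for u in sorted(candidates, key=lambda u: -f[u]):
            before = network_performance(W)
            W_trial = W.copy()
            W_trial[v, u] = W_trial[u, v] = anchor_w
            if abs(network_performance(W_trial) - before) <= eps:
                W = W_trial
        # stopping is implicit: candidates that perturb the network too much are skipped
    return W

# Toy example: nodes 0-4 form the connected original network, node 5 is disconnected;
# external evidence (e.g., comorbidity text mining) anchors node 5 at nodes 0, 1 and 2.
W = np.zeros((6, 6))
for i, j, w in [(0, 1, 0.9), (1, 2, 0.8), (2, 3, 0.7), (3, 4, 0.6), (0, 4, 0.5)]:
    W[i, j] = W[j, i] = w
print(np.round(clash(W, disconnected=[5], candidates_of={5: [0, 1, 2]}), 2))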
The proposed algorithm was applied to the metabolic disease group. Demographically, metabolic diseases are widespread and their prevalence has been increasing in recent years. In current genomic and molecular biology research, however, it is difficult to trace disease-protein associations for metabolic diseases. This means that in studies that construct disease networks based on genomic or protein information, it is also difficult to trace disease-disease associations for metabolic diseases. For example, Goh et al. [7] showed that there are almost no connections between metabolic disease nodes in the human disease network, in marked contrast to cancer nodes, which are densely connected. Thus, we chose metabolic diseases in order to construct a denser disease network by supplementing connections through CLASH. To construct a metabolic disease network, a list of diseases was obtained from the Medical Subject Headings (MeSH) of the National Library of Medicine [22]. When considering up to the second level of the taxonomy, there are 302 descriptors for metabolic diseases out of 27,149 listed diseases. For the nodes, we acquired 53,430 data points on disease-protein associations. From the obtained data, we selected and utilized 181 metabolic diseases and 15,281 proteins that were eligible for constructing the disease network. The edge weights were calculated with cosine similarity between 15,281-dimensional disease vectors. We denote this network as the original disease network. As the external data source used to complement the original disease network, we used comorbidity information reported in the clinical literature. Comorbidity addresses the concomitant occurrence of different medical conditions or diseases, usually complex and often chronic, in the same patient [23, 24]. To acquire the external data source, text mining was conducted on 1,000,254 medical articles from PubMed. From this point onward, we define the complemented disease network as the resulting disease network complemented with comorbidity information through CLASH. Table 1 summarizes the source and type of data used in our experiment.
Table 1 Data sources for metabolic diseases, proteins, disease-protein associations, comorbidity
Experimental settings
First, we performed verification tests to see how the proposed algorithm, CLASH, complements the network. To carry out the tests, we artificially damaged the original network and let CLASH recover the damaged network and construct the complemented disease network. More specifically, we randomly chose and deleted 20, 40, 60 and 80 % of the edges from the original disease network and refer to each resulting network as a '%-damaged original network'. (For convenience, the '0 %-damaged original network' is denoted as the reference network.) Second, the overall performance would increase if we add further information from extra data, but this happens only if the extra source of data usefully complements the original source. Therefore, to further clarify the validity of CLASH, we performed additional experiments comparing the effects of noisy data when employed to complement the %-damaged original networks; the resulting networks are denoted as noisy networks. To measure network performance, we used the SSL algorithm on the problem of predicting diseases that may co-occur when a certain disease breaks out [19]. The leave-one-out validation method was used [25]: for each target disease, the f-scores of all other diseases are calculated by (1), and the ROC is obtained by comparing these f-scores with PubMed literature, where the presence ('1') or absence ('0') of PubMed literature is used as the standard for disease association. The ROC was calculated in this way for each of the 181 diseases. The whole experiment was repeated 10 times.
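As a rough illustration of this evaluation step (the scores and 0/1 comorbidity labels below are fabricated, and scikit-learn's roc_auc_score merely stands in for the ROC analysis described above):

import numpy as np
from sklearn.metrics import roc_auc_score

# f-scores of the other diseases for one held-out target disease, and 0/1 labels
# indicating whether PubMed reports comorbidity of each disease with the target.
f_scores = np.array([0.92, 0.15, 0.74, 0.08, 0.61, 0.33])
pubmed_labels = np.array([1, 0, 1, 0, 1, 0])

print("AUC for this target disease:", roc_auc_score(pubmed_labels, f_scores))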
Results and Discussion
Results for validity of CLASH
Figure 4(a) presents the network density, which depicts the proportion of edges recovered through CLASH. It shows that, regardless of the degree of damage, the proportion of edges was recovered by 18 % on average by utilizing the external data source. In the case of the 20 %-damaged network, 97 % of the edges were recovered compared with the reference network (97 % = (0.130/0.134) × 100 %). It is also interesting to see that it is possible to recover severely damaged networks in which 80 % of the edges had been deleted. Fig. 4(b) compares the AUC performance of the damaged and complemented networks. From the bar chart for the 80 %-damaged network, we can see that CLASH improves the performance up to 0.71 (lifted from 0.55). Considering that the performance of the reference network was 0.69, it can be inferred that CLASH leads to an improvement in AUC even for the most severely damaged network. For the other damaged networks, the comparisons can be interpreted similarly. On the other hand, the noisy networks incurred insignificant degradation or no change in performance relative to the %-damaged networks. (The number of noisy edges corresponds to the number of complemented edges for the %-damaged networks.) This shows that CLASH is robust to noisy external source data and preserves the original information.
Results for Complementing Ability of CLASH: a shows that the proportion of edges have been recovered by 18 %, on average. b shows that CLASH improves AUC performance up to 0.79. The p-values for statistical tests for pairwise comparison between %-damaged original network and complemented network are 0.0002, 0.0001, 0.0002 and 0.000, respectively. On the other hand, CLASH is robust to noise: the noisy networks incurred insignificant degradation or no change in performance to %-damaged networks, preserving the original information
Results for utility of CLASH
In this section, we show the utility of CLASH by demonstrating its process and typical results for a case disease. Malabsorption syndrome was selected as the target disease out of the 181 metabolic diseases. Malabsorption syndrome refers to a wide variety of frequent and uncommon disorders of the process of alimentation in which the intestine's ability to absorb certain nutrients, such as vitamin B12 and iron, into the bloodstream is negatively affected [26, 27]. Fig. 5 presents the step-by-step process of CLASH for the target disease. Figure 5(a) shows a reference network of 13 disease nodes, which simplifies the whole network of 181 diseases. In the figure, malabsorption syndrome (node 1) has four connections, with celiac disease, glucose intolerance, metabolic disease X and diabetes mellitus (nodes 2, 5, 9, 11, respectively). These four edges were purposely deleted to test whether CLASH successfully recovers the original edges and further complements the network with new edges from external knowledge found in PubMed comorbidity literature. This is shown in Fig. 5(b), the original network. Figure 5(c) briefly describes anchoring, scoring and connecting: first, the node of malabsorption syndrome anchors at 10 nodes (see the anchored diseases [28–37]), which include the four originally associated nodes (nodes 2, 5, 9, 11) and six newly found nodes (nodes 3, 4, 6, 7, 8, 10). Among them, the eight nodes with the highest f-scores are finally connected, after dropping the two nodes with the lowest scores, nodes 6 and 7. Figure 5(d) presents the complemented network with four recovered edges and four newly found ones; a solid single line refers to the former and a double line denotes the latter. Consequently, we see that malabsorption syndrome extends its associations to further diseases, hyperhomocysteinemia, hypoglycemia, osteomalacia and insulin resistance (nodes 3, 4, 8, 10), apart from the originally connected four diseases shown in Fig. 5(a).
Utility of CLASH by demonstrating the process for the malabsorption syndrome: CLASH algorithm complements the network with four recovered edges and four newly found ones. Therefore, malabsorption syndrome extends its associations with more diseases, hyperhomocysteinemia, hypoglycemia, osteomalacia and insulin resistance, apart from the originally connected four diseases. Single solid lines refer to extended edges and double lines refer to original edges. Also notations '†', '*' and '**' denotes associated diseases via PPI, PubMed and multiple paths involving more than one edge, respectively
To validate the utility of the newly found edges, we performed disease scoring on the reference network in Fig. 5(a) and the complemented network in Fig. 5(d), and then compared the top 10 ranked associated diseases from each network. Figure 6 presents a comparison of the disease lists obtained from the reference network and the complemented network. Figure 6(b) shows that celiac disease, glucose intolerance, metabolic disease X and diabetes mellitus are highly ranked. Comparing these diseases with those connected to malabsorption syndrome in Fig. 5(d) (nodes 2, 5, 9, 11), we obtain the interesting result that all of these diseases are also included in the disease list. It is also notable that the four diseases hyperhomocysteinemia, hypoglycemia, osteomalacia and insulin resistance, which are associated with the newly found edges in Fig. 5 (nodes 3, 4, 8, 10), are included in the list as well. From the results of Figs. 5 and 6, we see that CLASH is able to preserve the originality of the disease network built from PPI information while, at the same time, complementing it by utilizing PubMed comorbidity literature.
Top tier ranked up to 10th associated diseases with malabsorption syndrome: Notations '†', '*' and '**' are identical to those in Fig. 5
In a similar manner, an experiment was carried out on all 181 diseases (supplemental materials: http://www.alphaminers.net). Table 2 illustrates the results for 10 diseases. The first 5 diseases, like malabsorption syndrome, were artificially disconnected from the original network of 181 diseases, while the last 5 diseases are genuinely disconnected diseases that do not contain any valid PPI information.
Table 2 Top tier ranked up to 10th associated diseases
Through the results of these experiments, we verified the usefulness and effectiveness of CLASH, which uses both the original and the external data source to find diseases that could co-occur with target diseases.
This research proposes an algorithm, CLASH, which complements or strengthens connections between diseases in a disease network. The proposed algorithm is useful when the original disease network is incomplete and supplementary information on disease associations is available. The verification of CLASH was carried out by applying the algorithm to metabolic diseases. The original disease network was constructed based on PPI information, and through CLASH, missing or weak edges were complemented or strengthened with supplemental information obtained from PubMed comorbidity literature. In the experiment on validity, CLASH not only successfully recovered purposely deleted edges but also improved performance: it showed full recovery of the 20 %-damaged edges and an increase in AUC performance from 0.69 to 0.79. In the experiment on utility, we illustrate how to utilize CLASH through a toy example: with malabsorption syndrome as the target disease, we delineate the process of finding a list of diseases that could co-occur with the target disease. Similar results are also shown for other metabolic diseases.
This research is novel in the following respects. CLASH is a methodology that preserves the network's originality while, at the same time, complementing it by utilizing external information. CLASH has a different utility from methods that integrate multiple data sources in a network-wise fashion: it puts more emphasis on one data source than on the others, either complementing disease-gene information (from biology) with comorbidity information (from medicine), or, conversely, complementing comorbidity information with disease-gene information. Examples of the former usage can be found in drug discovery and repositioning in pharmacology, while an example of the latter usage is inferring disease co-occurrence in clinical practice. These usages are topics for further research.
Piro RM. Network medicine: linking disorders. Hum Genet. 2012;131(12):1811–20.
Li Y, Agarwal P. A pathway-based view of human diseases and disease relationships. PLoS One. 2009;4(2):e4346.
Kim D, Joung J-G, Sohn K-A, Shin H, Park YR, Ritchie MD, Kim JH. Knowledge boosting: a graph-based integration approach with multi-omics data and genomic knowledge for cancer clinical outcome prediction. J Am Med Inform Assoc. 2015;22(1):109–20.
Sun K, Buchan N, Larminie C, Pržulj N. The integrated disease network. Integr Biol. 2014;6(11):1069–79.
Altaf-Ul-Amin M, Afendi FM, Kiboi SK, Kanaya S. Systems biology in the context of big data and networks. BioMed Res Int. 2014;2014:428570.
Pavlopoulos GA, Secrier M, Moschopoulos CN, Soldatos TG, Kossida S, Aerts J, Schneider R, Bagos PG. Using graph theory to analyze biological networks. BioData Min. 2011;4(10):1–27.
Goh K-I, Cusick ME, Valle D, Childs B, Vidal M, Barabasi A-L. The human disease network. Proc Natl Acad Sci. 2007;104(21):8685–90.
Zhou X, Menche J, Barabási A-L, Sharma A. Human symptoms–disease network. Nat Commun. 2014;5:4212.
Lee D-S, Park J, Kay K, Christakis N, Oltvai Z, Barabási A-L. The implications of human metabolic network topology for disease comorbidity. Proc Natl Acad Sci. 2008;105(29):9880–5.
Hidalgo CA, Blumm N, Barabási A-L, Christakis NA. A dynamic network approach for the study of human phenotypes. PLoS Comput Biol. 2009;5(4):e1000353.
Žitnik M, Janjić V, Larminie C, Zupan B, Pržulj N. Discovering disease-disease associations by fusing systems-level molecular data. Sci Rep. 2013;13:3202.
Shin H, Lisewski AM, Lichtarge O. Graph sharpening plus graph integration: a synergy that improves protein functional classification. Bioinformatics. 2007;23(23):3217–24.
Kim D, Shin H, Sohn K-A, Verma A, Ritchie MD, Kim JH. Incorporating inter-relationships between different levels of genomic data into cancer clinical outcome prediction. Methods. 2014;67(3):344–53.
Tsuda K, Shin H, Schölkopf B. Fast protein classification with multiple networks. Bioinformatics. 2005;21(2):ii59–65.
Sun K, Gonçalves JP, Larminie C. Predicting disease associations via biological network analysis. BMC Bioinformatics. 2014;15(1):304.
Yıldırım MA, Goh K-I, Cusick ME, Barabási A-L, Vidal M. Drug—target network. Nat Biotechnol. 2007;25(10):1119–26.
Kim HU, Sohn SB, Lee SY. Metabolic network modeling and simulation for drug targeting and discovery. Biotechnol J. 2012;7(3):330–42.
Kim D, Shin H, Song YS, Kim JH. Synergistic effect of different levels of genomic data for cancer clinical outcome prediction. J Biomed Inform. 2012;45(6):1191–8.
Shin H, Nam Y, Lee D-g, Bang S. The Translational Disease Network—from Protein Interaction to Disease Co-occurrence. Proc of 4th Translational Bioinformatics Conference (TBC) 2014.
Chapelle O, Schölkopf B, Zien A. Semi-supervised learning, MIT Press; 2006.
Kim J, Shin H. Breast cancer survivability prediction using labeled, unlabeled, and pseudo-labeled patient data. J Am Med Inform Assoc. 2013;20(4):613–8.
Medical Subject Headings (www.ncbi.nlm.nih.gov/mesh, Accessed 5 Jan 2014).
Capobianco E. Comorbidity: a multidimensional approach. Trends Mol Med. 2013;19(9):515–21.
Ambert KH, Cohen AM. A system for classifying disease comorbidity status from medical discharge summaries using automated hotspot and negated concept detection. J Am Med Inform Assoc. 2009;16(4):590–5.
Fukunaga K, Hummels DM. Leave-one-out procedures for nonparametric error estimates. Pattern Anal Mach Intell IEEE Trans. 1989;11(4):421–3.
Ghoshal UC, Mehrotra M, Kumar S, Ghoshal U, Krishnani N, Misra A, Aggarwal R, Choudhuri G. Spectrum of malabsorption syndrome among adults & factors differentiating celiac disease & tropical malabsorption. Indian J Med Res. 2012;136(3):451.
Hayman SR, Lacy MQ, Kyle RA, Gertz MA. Primary systemic amyloidosis: a cause of malabsorption syndrome. Am J Med. 2001;111(7):535–40.
Benson Jr J, Culver P, Ragland S, Jones C, Drummey G, Bougas E. The d-xylose absorption test in malabsorption syndromes. N Engl J Med. 1957;256(8):335–9.
Casella G, Bassotti G, Villanacci V, Di Bella C, Pagni F, Corti GL, Sabatino G, Piatti M, Baldini V. Is hyperhomocysteinemia relevant in patients with celiac disease. World J Gastroenterol. 2011;17(24):2941–4.
Jenkins D, Gassull M, Leeds A, Metz G, Dilawari J, Slavin B, Blendis L. Effect of dietary fiber on complications of gastric surgery: prevention of postprandial hypoglycemia by pectin. Gastroenterology. 1977;73(2):215–7.
Penckofer S, Kouba J, Wallis DE, Emanuele MA. Vitamin D and diabetes let the sunshine in. Diabetes Educ. 2008;34(6):939–54.
Förster H. Hypoglycemia. Part 4. General causes, physiological newborn hyperglycemia, hyperglycemia in various illnesses, metabolic deficiency, and metabolic error. Fortschr Med. 1976;94(16):332–8.
Dedeoglu M, Garip Y, Bodur H. Osteomalacia in Crohn's disease. Arch Osteoporos. 2014;9(1):1–3.
Traber MG, Frei B, Beckman JS. Vitamin E revisited: do new data validate benefits for chronic disease prevention? Curr Opin Lipidol. 2008;19(1):30–8.
Viganò A, Cerini C, Pattarino G, Fasan S, Zuccotti GV. Metabolic complications associated with antiretroviral therapy in HIV-infected and HIV-exposed uninfected paediatric patients. Expert Opin Drug Saf. 2010;9(3):431–45.
Tosiello L. Hypomagnesemia and diabetes mellitus: a review of clinical implications. Arch Intern Med. 1996;156(11):1143–8.
van Thiel DH, Smith WI, Rabin BS, Fisher SE, Lester R. A syndrome of immunoglobulin a deficiency, diabetes mellitus, malabsorption, and a common HLA haplotype: immunologic and genetic studies of forty-three family members. Ann Intern Med. 1977;86(1):10–9.
HJS would like to gratefully acknowledge support from the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIP) (No. 2015R1D1A1A01057178/2012-0000994). KWL acknowledges the support from the National Research Foundation of Korea(NRF) Grant funded by the Korean Government(MSIP) (No.2015R1A5A7037630).
Publication for this article has been funded by National Research Foundation of Korea(NRF) grant funded by the Korea government(MSIP) (No. 2015R1D1A1A01057178/2012-0000994). This article has been published as part of BMC Medical Informatics and Decision Making Volume 16 Supplement 3, 2016. Selected articles from the 5th Translational Bioinformatics Conference (TBC 2015): medical genomics. The full contents of the supplement are available online https://bmcmedgenomics.biomedcentral.com/articles/supplements/volume-16-supplement-3.
The data can be found in PharmDB (http://pharmdb.org/). PharmDB is a tripartite pharmacological network database of human diseases, drugs and proteins, which compiles and integrates nine existing interaction databases. (Access date: 2014.01.05).
HJS designed the idea, wrote the manuscript and supervised the study process. YHN and MJK analyzed the data and implemented the system. KWL and all the other authors read and approved the final manuscript.
Department of Industrial Engineering, Ajou University, Wonchun-dong, Yeongtong-gu, Suwon, 443-749, South Korea
Yonghyun Nam, Myungjun Kim & Hyunjung Shin
Department of Digital Media, Ajou University, Wonchun-dong, Yeongtong-gu, 443-749, Suwon, South Korea
Kyungwon Lee
Correspondence to Hyunjung Shin.
Graph-based Semi-Supervised Learning
A disease network is a graph, G = (V, W), that describes connections between diseases with nodes and edges. In a disease network, a node denotes a disease and an edge denotes a disease-disease association. Given a disease network, graph-based Semi-Supervised Learning (SSL) is employed to calculate the scores when a target disease is given. In the present study, the target disease is labeled as '1', and the other diseases are labeled as '0' (unlabeled). With this setting on a disease network, SSL provides the scores for the diseases, in terms of f-scores. The algorithm is summarized as follows; more details can be found in [19–21].
In graph-based SSL, a connected graph G = (V, W) is constructed where the nodes V represent the labeled and unlabeled data points, while the edges W reflect the similarity between data points. In a binary classification problem, given \(n (=n_l + n_u)\) data points from the labeled set \( {S}_L=\left\{{\left({x}_i,{y}_i\right)}_{i=1}^{n_l}\right\} \) and the unlabeled set \( {S}_U=\left\{{\left({x}_j\right)}_{j = {n}_l+1}^n\right\} \), the labeled nodes are set to \(y_l \in \{-1, +1\}\), while the unlabeled nodes are set to zero (\(y_u = 0\)). However, for the scoring problem in the proposed algorithm, the \(n_l\) labeled nodes are set to a unary label \(y_l \in \{1\}\) while the \(n_u\) unlabeled nodes are set to zero (\(y_u = 0\)). The resulting learning process assigns scores \(f_u^T = (f_{n_l+1}, \dots, f_n)^T\) to the nodes \(V_U\). The edge weight between two nodes \(v_i\) and \(v_j\) is measured by the Gaussian function
$$ w_{ij}=\begin{cases} \exp\left(-\operatorname{dist}\left(v_i, v_j\right)/\sigma^2\right) & \text{if } i\sim j\\ 0 & \text{otherwise} \end{cases} $$
where \(i \sim j\) indicates that the two nodes are connected, and the similarity values are collected in a matrix \(W=\{w_{ij}\}\). Label information can then propagate from a labeled node \(v_i\) to an unlabeled node \(v_j\): when the value of \(w_{ij}\) is large, their outputs are likely to be close. The algorithm outputs an n-dimensional real-valued vector \(f = [f_l^T\ f_u^T]^T = (f_1, \dots, f_{n_l}, f_{n_l+1}, \dots, f_{n=n_l+n_u})^T\). There are two assumptions: a loss function (\(f_i\) should be close to the given label \(y_i\) for labeled nodes) and label smoothness (overall, \(f_i\) should not be too different from \(f_j\) for neighboring nodes). These assumptions are reflected in the value of \(f\) by minimizing the following quadratic function:
$$ \underset{f}{ \min }\ {\left(f-y\right)}^T\left(f-y\right)+\mu {f}^TLf $$
where \( \boldsymbol{y}={\left[{y}_1, \dots,\ {y}_{n_l},\ 0, \dots,\ 0\right]}^T \) and the matrix \(L\), known as the graph Laplacian matrix, is defined as \(L = D - W\), where \(D = \operatorname{diag}(d_i)\) and \( {d}_i={\displaystyle \sum_j}{w}_{ij} \). The parameter \(\mu\) trades off loss and smoothness. Thus, the solution of this problem becomes
$$ f={\left(I+\mu L\right)}^{-1}y $$
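For readers who want to experiment with this scoring, a minimal sketch is given below; the 5-node toy graph and the choice μ = 1 are arbitrary, and only the closed-form solution above is taken from the text:

import numpy as np

def ssl_scores(W, y, mu=1.0):
    # f = (I + mu * L)^(-1) y with the graph Laplacian L = D - W.
    D = np.diag(W.sum(axis=1))
    L = D - W
    return np.linalg.solve(np.eye(len(W)) + mu * L, y)

# Toy symmetric similarity matrix for five diseases; disease 0 is the labeled target.
W = np.array([
    [0.0, 0.8, 0.0, 0.3, 0.0],
    [0.8, 0.0, 0.5, 0.0, 0.0],
    [0.0, 0.5, 0.0, 0.6, 0.2],
    [0.3, 0.0, 0.6, 0.0, 0.7],
    [0.0, 0.0, 0.2, 0.7, 0.0],
])
y = np.array([1.0, 0.0, 0.0, 0.0, 0.0])
print(np.round(ssl_scores(W, y), 3))   # higher f-score = more strongly associated with the target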
Nam, Y., Kim, M., Lee, K. et al. CLASH: Complementary Linkage with Anchoring and Scoring for Heterogeneous biomolecular and clinical data. BMC Med Inform Decis Mak 16, 72 (2016) doi:10.1186/s12911-016-0315-2
Keywords: External Information; Original Network; Candidate Node; Reference Network
\begin{document}
\title[Images of locally finite $\mathcal{E}$-derivations]{Images of locally finite $\mathcal{E}$-derivations of bivariate polynomial algebras} \author{Hongyu Jia, Xiankun Du and Haifeng Tian} \address{X. Du: School of Mathematics, Jilin university, Changchun 130012, P. R. China} \email{[email protected]} \address{H. Jia: School of Mathematics, Jilin university, Changchun 130012, P. R. China} \email{[email protected]} \address{H. Tian: School of Mathematics, Jilin university, Changchun 130012, P. R. China} \email{[email protected]} \begin{abstract} This paper presents an $\mathcal{E}$-derivation analogue of a result on derivations due to van den Essen, Wright and Zhao. We prove that the image of a locally finite $K$-$\mathcal{E}$-derivation of polynomial algebras in two variables over a field $K$ of characteristic zero is a Mathieu subspace. This result together with that of van den Essen, Wright and Zhao confirms the LFED conjecture in the case of polynomial algebras in two variables. \end{abstract} \subjclass[2010]{14R10} \keywords{$\mathcal{E}$-derivation; LFED conjecture; locally finite; Mathieu subspace; polynomial endomorphism.} \thanks{Corresponding author: Haifeng Tian} \maketitle
\section{Introduction}
Images of derivations have been studied recently by several authors because of their close relationship with the Jacobian conjecture. Let $K$ denote a field of characteristic $0$. It is proved in~\cite{vanden2010} that the Jacobian conjecture for $K[x,y]$ is equivalent to the statement that the image $\IM D$ is a Mathieu subspace of $K[x,y]$ for any $K$-derivation $D$ of $K[x,y]$ such that $1\in\IM D$ and $\operatorname{div}D =0$, where $\operatorname{div}D =\partial_{x}D(x)+\partial_{y}D(y) $. The image of a $K$-derivation $D$ with $\operatorname{div}D=0$ of $K[x,y]$ does not need to be a Mathieu subspace (if $1 \notin \IM D$)~\cite{sun2017}. The Jacobian conjecture for $K[x,y]$ can also be reformulated as follows: a $K$-derivation $D$ of $K[x,y]$ which satisfies $1\in\IM D$ and $\operatorname{div}D=0$ is locally finite~\cite[Conjecture 9.8.2]{now1994}. For locally finite derivations, van den Essen, Wright and Zhao proved the following result.
\begin{theorem} \,\cite[Theorem 3.1]{vanden2010}\label{thewz} Let $K$ be a field of characteristic $0$ and let $D$ be any locally finite $K$-derivation of $K[x,y]$. Then $\IM D$ is a Mathieu subspace of $K[x,y]$. \end{theorem}
The notion of Mathieu subspaces was introduced by Zhao in~\cite{zhao2010}, based on his study of the Jacobian conjecture
and inspired by the Mathieu conjecture, which implies the Jacobian conjecture~\cite{mathieu1997}. The concept was subsequently extended to noncommutative algebras in~\cite{zhao2012}. Mathieu subspaces are also called Mathieu-Zhao subspaces
as suggested by van den Essen~\cite{essen2014}. The general references for Mathieu subspaces are~\cite{db,essenbk21,zhao2012}.
In this paper we will consider only the commutative case.
\begin{definition} \label{mz} Let $A$ be a commutative $K$-algebra. A subspace $M$ of $A$ is called a Mathieu subspace (or Mathieu-Zhao subspace) of $A$, if for all $f\in M$ such that $f^{m}\in M$ for all $m>0$ and for all $g\in A$, there exists an integer $n$ (depending on $f$ and $g$) such that $f^{m}g\in M$ for all $m>n$. \end{definition}
It is clear that ideals are Mathieu subspaces. Unlike ideals, Mathieu subspaces even in
univariate polynomial algebras are not completely determined yet (see ~\cite{vanden2012,vanden2016}).
Various facts and problems in affine algebraic geometry are related to Mathieu subspaces. A key issue is to prove that kernels and images of some special linear maps such as derivations and more general differential operators are Mathieu subspaces, though the verification of a Mathieu subspace is generally difficult (see~\cite{essen2020} and~\cite[Ch.~5]{essenbk21}, and the references given there). Among others, $\mathcal{E} $-derivations were also considered by Zhao~\cite{zhao2018}. A $K$-$\mathcal{E} $-derivation of a $K$-algebra is a linear map $\delta$ such that ${\rm id}-\delta$ is an algebra homomorphism. Zhao formulated the following conjecture for general associative algebras~\cite{zhao2018}, though we focus on the case of polynomial algebras in this paper.
\begin{conjecture} [The LFED Conjecture]Let $K$ be a field of characteristic zero. Images of locally finite $K$-derivations and $K$-$\mathcal{E}$-derivations of $K$-algebras are Mathieu subspaces. \end{conjecture}
Recently, van den Essen and Zhao~\cite{vanden2018} found that both the case of derivations and the case of $\mathcal{E}$-derivations of the LFED conjecture for Laurent polynomial algebras imply a remarkable theorem of Duistermaat and van der Kallen~\cite{duikal}. This theorem states that the subspace consisting of Laurent polynomials without constant terms is a Mathieu subspace of $\mathbb{C}[X,X^{-1}]$, where $X$ denotes the $n$ variables $x_{1},x_2\dots,x_{n}$ and $X^{-1}$ denotes $x_{1}^{-1},x_2^{-1},\dots,x_{n}^{-1}$. The case of derivations of the LFED conjecture for Laurent polynomial algebras was proved earlier by Zhao~\cite{zhao20172}, using the theorem of Duistermaat and van der Kallen.
The LFED conjecture has been proved for some special cases. Zhao proved the LFED conjecture for finite dimensional algebras ~\cite{zhao20171} and for algebraic derivations and $\mathcal{E}$-derivations of integral domains of characteristic zero~\cite{zhao2018}. The LFED conjecture was also established for the Laurent polynomial algebra in one or two variables by Zhao~\cite{zhao20172}, the field $K(X)$ of rational functions, the formal power series algebra $K[[X]]$ and the Laurent formal power series algebra $K[[X]][X^{-1}]$ by van den Essen and Zhao~\cite{vanden2018}.
The LFED conjecture for polynomial algebras is the most interesting case, but few results are known. The LFED conjecture for the univariate polynomial algebra $K[x]$ was proved by Zhao~\cite{zhao2017}. For $K[x,y]$, the case of derivations was proved by van den Essen et al.\@ (Theorem~\ref{thewz}), but the case of $\mathcal{E} $-derivations remained unknown. For polynomial algebras in three variables, the conjecture was verified for some locally nilpotent derivations ~\cite{liu2019,sun2021} and linear derivations and $\mathcal{E}$-derivations ~\cite{tian}. For general $K{[X]}$, the conjecture was proved for diagonal derivations and $\mathcal{E}$-derivations, and for monomial-preserving derivations in~\cite{vanden2010,vanden2017}.
The aim of this paper is to present an analogue of Theorem~\ref{thewz} for $\mathcal{E}$-derivations of $K[x,y]$ by proving the following result.
\begin{theorem}\label{thmain}Let $K$ be a field of characteristic $0$ and let $\delta$ be any locally finite $K$-$\mathcal{E}$-derivation of $K[x,y]$. Then $\IM \delta$ is a Mathieu subspace of $K[x,y]$. \end{theorem}
Theorem~\ref{thewz} and~\ref{thmain} together confirm the LFED conjecture for polynomial algebras in two variables.
Throughout this paper $K$ denotes a field of characteristic $0$, and $X$ denotes the $n$ variables $x_{1},x_2,\dots,x_{n}$.
The rest of this paper is devoted to proving Theorem~\ref{thmain}. In Section 2, we classify the locally finite endomorphisms of $\mathbb{C}[x,y]$ into seven classes under conjugation. The plan is to prove Theorem~\ref{thmain} by means of showing it for the corresponding seven classes of $\mathcal{E}$-derivations. In Section 3, we deal with the most complex case: the fourth class. In Section 4, we first reduce the LFED conjecture of $K[X]$ to that of $\mathbb{C}[X]$. Then we finish the proof of Theorem~\ref{thmain} by examining the seven classes of $\mathcal{E}$-derivations individually.
\section{Classification of locally finite endomorphisms of $\mathbb{C}[x,y]$}
Let $F= (F_{1}, F_2,\dots, F_{n} )$ be a polynomial endomorphism of the affine space $K^{n}$. Then there is a unique endomorphism $F^*$ of the polynomial algebra $K[X]$ such that $F^*(x_{i})=F_{i}$ for $i=1,2,\dots,n$. Polynomial endomorphisms of $K^{n}$ correspond one-to-one with endomorphisms of $K[X]$ under $F\mapsto F^*$, and ${(F\circ G)}^*=G^*\circ F^* $ for all polynomial endomorphisms $F,G$ of $K^{n}$ (see~\cite{essenbk}).
A $K$-linear map $\phi$ of a $K$-vector space $V$ is called locally finite if for each $v\in V$ the subspace generated by $\{\phi^{i}(v)\mid i\in \mathbb{N}\}$ is finite dimensional~\cite[Definition 1.3.5(i)]{essenbk}.
According to~\cite[Definition 1.4 and Theorem 1.1]{fur2007} a polynomial endomorphism $F$ of the affine space ${K}^{n}$ is locally finite if and only if the endomorphism $F^*$ of $K[X]$ is locally finite.
By a result of Friedland and Milnor~\cite{Fri89}, Furter~\cite{fur1999} proved that each locally finite polynomial automorphism of $\mathbb{C}^{2}$ is conjugate to a triangular automorphism. Maubach~\cite[Lemma 2.16]{mau2015} classified, up to conjugation by triangular automorphisms, the triangular automorphisms of $K^{2}$ into two classes: affine and sequential.
Based on~\cite{fur1999,fur2007,mau2015}, we will classify the locally finite endomorphisms of $\mathbb{C}[x,y]$ under conjugation into seven classes for our purpose. We work with $\mathbb{C}$ in this section, though some results are valid for arbitrary fields of characteristic zero.
Denote by $\mathbb{N}$ the set of nonnegative integers and by $\mathbb{N}^{*}$ the set of positive integers. Let $K^*=K\setminus \{0\}$ for any field $K$.
\begin{theorem}\label{lem4forms} Let $\phi$ be a locally finite endomorphism of $\mathbb{C}[x,y]$. Then up to conjugation $\phi$ satisfies one of the following conditions: \begin{enumerate} \item $\phi(x)=bx$ and $\phi(y)=ay$, for $a,b\in\mathbb{C}^{*}$;\label{lem4forms1}
\item $\phi(x)=bx$ and $\phi(y)=y+1$, for $b\in\mathbb{C}^{*}$;\label{lem4forms2}
\item $\phi(x)=b^{s}x+ay^{s}$ and $\phi(y)=by$, where $s\in\mathbb{N}^{*}$, $a\in\mathbb{C}$, $b\in\mathbb{C}^{*}$, and $b$ is not a root of unity;\label{lem4forms3}
\item $\phi(x)=b^{s}x+y^{s}p(y^{r})$ and $\phi(y)=by$, where $r\in \mathbb{N}^{*}$, $s\in\mathbb{N}$, $b$ is a primitive $r$th root of unity, and $p(y)\in\mathbb{C}[y]$ is monic;\label{lem4forms4}
\item $\phi^{2}=\phi^{3}$;\label{lem4forms5}
\item $\phi(x)=\lambda x+yg$ and $\phi(y)=0$, for $\lambda\in \mathbb{C}^{*}$ and $g\in\mathbb{C}[x,y]$;\label{lem4forms6}
\item $\phi(x)=x+\lambda+yg$ and $\phi(y)=0$, for $\lambda\in \mathbb{C}^{*}$ and $g\in\mathbb{C}[x,y]$.\label{lem4forms7} \end{enumerate} \end{theorem}
To prove the theorem, we need to generalize~\cite[Lemma 4.4]{fur2007} by removing the assumption $F(0)=0$. Our proof follows that of~\cite[Lemma 4.4]{fur2007}.
\begin{lemma}\label{fenjie} Let $\phi$ be a locally finite endomorphism of $\mathbb{C}[x,y]$ that is not invertible. Then there exist homomorphisms $\mu:\mathbb{C}[z]\rightarrow\mathbb{C}[x,y]$ and $\nu:\mathbb{C}[x,y]\rightarrow\mathbb{C}[z]$ such that $\phi=\mu\nu$ and $\nu\mu(z)=az+b$ for some $a,b\in\mathbb{C} $. \end{lemma}
\begin{proof} By~\cite[Proposition 1.1]{fur2007} the Jacobian determinant $J(\phi (x),\phi(y))=0$. By~\cite[Theorem 1.4]{now1988}, there exist $v_{1},v_{2} \in\mathbb{C}[z]$ and $u\in\mathbb{C}[x,y]$ such that $\phi(x)=v_{1}(u)$ and $\phi(y)=v_{2}(u)$. Let $\mu$ be the homomorphism from $\mathbb{C}[z]$ to $\mathbb{C}[x,y] $ defined by $\mu(z)=u$, and let $\nu$ be the homomorphism from $\mathbb{C}[x,y]$ to $\mathbb{C}[z] $ defined by $\nu(x)=v_{1}(z)$ and $\nu(y)=v_{2}(z)$. Then $\mu\nu(x)=\mu(v_{1}(z))=v_{1}(\mu(z))=v_{1} (u)=\phi(x)$. Similarly, $\mu\nu(y)= \phi(y)$. Hence $\phi=\mu\nu$.
Let $\nu\mu(z)=f(z)$. Then $f(z)=u(v_1(z),v_2(z))$, and \[ \phi^{n}(u)=\phi^{n}\mu(z)=\mu{(\nu\mu)}^{n}(z)=\mu(f^{n}(z))=f^{n}(u), \] where $f^n$ denotes the polynomial composition of $f$ with itself $n$ times. Suppose that $\deg f(z)>1$. Then $\deg u(x,y)\geq1$ and $\deg\phi^{n}(u)={(\deg f(z))}^{n}\deg u $ for all $n\in{\mathbb{N}}^*$, which implies that ${\{\deg\phi^{n} (u) \}}_{n\geq1}$ is unbounded. But that is not possible: since $\phi$ is locally finite, ${\{\deg\phi^{n} (u) \}}_{n\geq1}$ must be bounded. Therefore $\deg f(z)\leq1$, and so $\nu\mu(z)=f(z) =az+b$ for some $a,b\in\mathbb{C}$. \end{proof}
We conclude this section with the proof of Theorem~\ref{lem4forms}.
\begin{proof} [Proof of Theorem~\ref{lem4forms}]If $\phi$ is an automorphism, $\phi$ is conjugate to one of (1)--(4) by~\cite[Lemma 2.16]{mau2015}.
If $\phi$ is not invertible, then we have $\phi=\mu\nu$ and $\nu\mu(z)=az+b$ for some $a,b\in\mathbb{C}$, as in Lemma~\ref{fenjie}.
If $a=0$, then ${(\nu\mu)}^{2}=\nu\mu$. Thus $\phi^{3}=\mu{(\nu\mu)}^{2}\nu =\mu(\nu\mu)\nu=\phi^{2}$. This is case (5).
If $a\neq0$, then $\nu\mu$ is an automorphism of $\mathbb{C}[z]$, which implies $\nu$ is an epimorphism. Let $\pi:\mathbb{C}[x,y]\rightarrow \mathbb{C}[z]$ be the epimorphism defined by $\pi(x)=z,~\pi(y)=0$. By ~\cite[Epimorphism theorem]{ab1975}, there exists an automorphism $\delta:\mathbb{C}[x,y]\rightarrow\mathbb{C}[x,y]$ such that $\pi=\nu\delta$.
Let $\psi=\delta^{-1}\phi\delta$. Then \[ \psi(y)=\delta^{-1}\phi\delta(y)=\delta^{-1}\mu\nu\delta(y)=\delta^{-1}\mu \pi(y)=0. \] Write $\psi(x)=f+yg$ for some $f\in\mathbb{C}[x]$ and $g\in\mathbb{C}[x,y]$. Then $\pi\psi^{n}(x)=f^{n}(z)$ for all $n\in\mathbb{N}$. Since $\psi$ is locally finite, ${\{\deg\pi\psi^{n}(x)\}}_{n\geq 1}$ is bounded, which implies $\deg f\leq1$. Thus $f=\lambda_{1}x+\lambda_{2}$ for some $\lambda_{1} , \lambda_{2}\in\mathbb{C}$. Therefore, \[ \psi(x)=\lambda_{1}x+\lambda_{2}+yg\ \text{and}\ \psi(y)=0, \] for $\lambda_{1},\lambda_{2}\in\mathbb{C}$ and $g\in\mathbb{C}[x,y]$.
If $\lambda_{1}=0$, then $\psi^{2}(x)=\lambda_2=\psi^{3}(x)$ and so $\phi^{2}=\phi^{3}$. This is case (5).
Suppose that $\lambda_{1}\neq0$. We will distinguish several cases.
If $\lambda_{2}=0 $, then case (6) applies.
If $\lambda_{2}\ne0 $ and $\lambda_{1}=1 $, then case (7) applies.
To complete the proof, it only remains to consider the case $\lambda_{2}\ne0 $ and $\lambda_{1}\neq1$. Define the automorphism $\eta$ of $\mathbb{C}[x,y]$ by $\eta(x)=x+{(1-\lambda_{1})}^{-1}\lambda_{2}$ and $\eta(y)=y. $ Then $\eta \psi\eta^{-1}(x) =\lambda_{1}x+yg(x+{(1-\lambda_{1})}^{-1}\lambda_{2},y) $ and $\eta\psi\eta^{-1}(y) =0 $. So case (6) applies. \end{proof}
\section{Theorem \ref{thmain} for $\delta = {\rm id} - \phi$ where $\phi$ satisfies \ref{lem4forms}(\ref{lem4forms4})}
In this section, we first determine the image of $\delta = {\rm id} - \phi$, where $\phi$ satisfies Theorem~\ref{lem4forms}(\ref{lem4forms4}). After that, we prove that this image is a Mathieu subspace of $\mathbb{C}[x,y]$. Furthermore, we generalize the latter result to arbitrary fields $K$ of characteristic $0$.
For $\beta =(\beta _{1},\beta _{2},\dots ,\beta _{n})\in \mathbb{N}^{n}$, write $X^{\beta }$ for $x_{1}^{\beta _{1}}x_{2}^{\beta _{2}}\cdots x_{n}^{\beta _{n}}$.
Denote by $\LT(f)$ the leading term of $f\in K[X]\setminus \{0\}$ with respect to a fixed monomial well-ordering on $K[X]$ (see~\cite{Cox}).
\begin{lemma}\label{lemmaeta} Let $S$ be a subspace of $K[X]$ spanned by monomials in $K[X]$ and $\eta:S\rightarrow K[X]$ be a $K$-linear map such that $\eta(S)\subseteq S$. If \begin{equation}\label{eqLT}
\text{for all}~X^{\beta}\in S, \LT(\eta(X^{\beta}))=c_{\beta}X^{\beta}~\text{for some}~c_{\beta}\in K^{\ast} \end{equation} with respect to a fixed monomial well-ordering in $K[X]$, then $S=\eta(S)$. \end{lemma}
\begin{proof} Suppose that, on the contrary, $S\setminus\eta(S)$ is not empty.
Then we choose $X^{\alpha_{0}}$ to be the least element of $S\setminus \eta(S)$ with respect to the fixed monomial well-ordering in $K[X]$. Since $\eta(S)\subseteq S$, we can write $\eta(X^{\alpha_{0}})=a X^{\alpha_{0}}+\sum_{i=1}^{m}a_{i}X^{\alpha_{i}}$ with $a \in K$, $a_{i}\in K^{\ast}$, and distinct $X^{\alpha_0},X^{\alpha_1},\dots,X^{\alpha_m}\in S$. By (\ref{eqLT}), we have $a\ne 0$ and $X^{\alpha_{0}}>X^{\alpha_{i}}$ for all $1\leq i\leq m$. Then $X^{\alpha_{i}}\in\eta(S)$ for all $1\leq i\leq m$ by the minimality of $X^{\alpha_{0}}$. Therefore, $X^{\alpha_{0}}=a^{-1} (\eta(X^{\alpha_0})-\sum_{i=1}^{m}a_{i}X^{\alpha_{i}})\in\eta(S)$, a contradiction.
For a subset $S$ of a $K$-algebra $A$, denote by $\spn_{K} S $ and $\langle S\rangle_A$ the subspace and the ideal generated by $S$, respectively. We write $\langle S\rangle$ instead of $\langle S\rangle_{\mathbb{C}[x,y]}$.
\begin{lemma} \label{lastform1} Let $\phi$ be the endomorphism in Theorem~\ref{lem4forms}(\ref{lem4forms4}) and $\delta={\rm id}-\phi$. Then \begin{equation}\label{eqidinim} \langle y^{s}p(y^{r}) \rangle \subseteq \IM\delta. \end{equation} \end{lemma}
\begin{proof}
By induction on $i$, one can show that $\phi^i (x) = b^{si}x + i b^{s(i-1)} y^s p(y^r)$ and $\phi^i (y) = b^i y$. So if we define $q = r b^{-s} y^s p(y^r)$, then $$ \phi^r(x) = x + q~\text{and}~\phi^r(y) = y. $$ Let $\delta' = {\rm id} - \phi^r$. Then $\delta' = \delta({\rm id}+\phi+\phi^2+\cdots+\phi^{r-1})$, so $\IM \delta' \subseteq \IM\delta$. Therefore, it suffices to show that $\langle y^{s}p(y^{r})\rangle \subseteq \IM\delta'$.
Define a linear map $\eta:\langle x \rangle\rightarrow\mathbb{C}[x,y]$ by $\eta(x^{m} y^{n})=x{q}^{-1}\delta'(x^{m}y^{n})$ for all $x^{m}y^{n}\in \langle x \rangle $. Then \[ \eta(x^{m}y^{n})=\frac{x}{q}\Big(x^my^n-\big(x+q\big)^my^n\Big) =-\sum_{i=1}^{m}\binom{m}{i}x^{m-i+1}q^{i-1}y^n, \] where the sum is zero whenever its lower limit is bigger than its upper limit. This implies that $\eta(\langle x \rangle) \subseteq \langle x \rangle$, and that $\LT\big(\eta(x^{m}y^{n})\big) = -mx^m y^n$ with respect to the lex order in $\mathbb{C}[x,y]$. Thus by Lemma~\ref{lemmaeta}, $\eta(\langle x \rangle) = \langle x \rangle$. Hence $$ \delta' (\langle x \rangle) = q x^{-1} \eta(\langle x \rangle) = q x^{-1} \langle x \rangle = \langle y^{s}p(y^{r}) \rangle, $$ which gives the desired result. \end{proof}
\begin{corollary} \label{lastform} Let $\phi$ be the endomorphism in Theorem~\ref{lem4forms}(\ref{lem4forms4}) and $\delta={\rm id}-\phi$. Then \begin{equation}\label{eqim=c+}
\IM\delta=C+\langle y^{s}p(y^{r})\rangle, \end{equation} where $C=\spn_{\mathbb{C}}\{x^{m}y^{n}\mid m,n\in\mathbb{N}\text{ and }r\nmid ms+n\}$. \end{corollary}
\begin{proof} For any $m,n\in\mathbb{N}$, a direct computation shows that $$ \delta(x^{m}y^{n}) = x^{m}y^{n} - \big(b^{s}x + y^{s}p(y^{r})\big)^{m}(by)^{n} \equiv (1-b^{ms+n})x^{m}y^{n} \pmod{\langle y^s p(y^r) \rangle}. $$ Since $1 - b^{ms+n} = 0$ if and only if $r \mid ms + n$, we infer from Lemma~\ref{lastform1} that $\IM \delta$ is as given. \end{proof}
So we have determined the image of $\delta = {\rm id} - \phi$, where $\phi$ satisfies Theorem~\ref{lem4forms}(\ref{lem4forms4}). We proceed to prove that this image is a Mathieu subspace of $\mathbb{C}[x,y]$. For that purpose, we first formulate and prove two lemmas.
Write $\cfx{f}{i}$ for the coefficient of $x^i$ of an element $f \in K[x,y]$, viewed as a polynomial in $x$ over $K[y]$. Let $\bar{K}$ be an algebraic closure of $K$.
\begin{lemma} \label{lastform2} Let $r$ be a positive integer, $s$ a nonnegative integer, and $p(y)\in K[y]$. Suppose that $f \in K[x,y]$, such that $f^n \notin \langle y^s p(y^r) \rangle_{K[x,y]}$ for all $n \in \mathbb{N}$. Then there exist a factor $q$ of $y^sp(y^r)$, and an $i \in \mathbb{N}$, such that the following holds. Either $q = y$ or $q = y^r - \lambda$ for some $\lambda \in \bar{K}^*$, and $$ q \nmid \cfx{f}{i} ~\text{and}~ q \mid \cfx{f^j}{ij} - \big({\cfx{f}{i}}\big)^j ~\text{for all $j > 0$}. $$ \end{lemma}
\begin{proof} Suppose first that $f(x,\alpha) = 0$ for all roots $\alpha \in \bar{K}$ of $y^sp(y^r)$. Take $j$ such that the multiplicity of every root of $y^sp(y^r)$ is bounded from above by $j$. Then $f^j \in \langle y^sp(y^r) \rangle$. Contradiction.
Suppose next that $f(x,\alpha) \ne 0$ for some root $\alpha \in \bar{K}$ of $y^sp(y^r)$. Then we can choose a factor $q$ of $y^sp(y^r)$ with this root $\alpha$, such that either $q = y$ or $q = y^r - \lambda$ for some $\lambda \in \bar{K}^*$. As $f(x,\alpha) \ne 0 = q(\alpha)$, there exists $i$ such that $q \nmid \cfx{f}{i}$. Take $i$ maximal (or minimal) as such. If we compute $\cfx{f^j}{k}$ modulo $q$ for all $k \ge ij$ ($k \le ij$) by induction on $j$ by way of discrete convolution, then we obtain that $\cfx{f^j}{k} \equiv 0 \pmod{q}$ for all $k > ij$ ($k < ij$) and $\cfx{f^j}{ij} \equiv \big(\cfx{f}{i}\big)^j \pmod{q}$, for all $j > 0$. \end{proof}
\begin{lemma} \label{lastform34} Let $r$ be a positive integer. Take $S = \{y,y^2,\ldots,y^{r-1}\}$, and suppose that $\lambda \in K$ and $h \in K[y]$, such that $h^j \in \spn_{K} S + \langle y^r - \lambda \rangle_{K[y]}$ for all $j > 0$. Then $h^r \in \langle y^r - \lambda \rangle_{K[y]}$. \end{lemma}
\begin{proof} An arbitrary element of $S$ is of the form $y^i$, where $0 < i < r$. Furthermore, the matrix of the multiplication by $y^i$ modulo $y^r - \lambda$ as a $K$-linear map, with respect to the basis $\mathcal{B}$ consisting of the residue classes of $1,y,y^2,\ldots,y^{r-1}$, is $$ \left(\begin{array}{cc} 0_{i,r-i} & \lambda I_{i} \\ I_{r-i} & 0_{r-i,i} \end{array} \right), $$ i.e.\@ a matrix with trace $0$.
Let $M$ be the matrix of the multiplication by $h$ modulo $y^r - \lambda$, with respect to basis $\mathcal{B}$. Then for all $j > 0$, the matrix $M^j$ of the multiplication by $h^j$ modulo $y^r - \lambda$ (with respect to basis $\mathcal{B}$) has trace $0$. But that means that all eigenvalues of $M$ are zero, so $M$ is nilpotent and $M^r = 0$. So $h^r \cdot 1 \in \langle y^r - \lambda \rangle_{K[y]}$. \end{proof}
We finally prove that the image $C+\langle y^{s}p(y^{r})\rangle$ in Corollary \ref{lastform} is a Mathieu subspace of $\mathbb{C}[x,y]$.
\begin{theorem}\label{C+p} Let $r$ be a positive integer, $s$ a nonnegative integer, and $p(y)\in K[y]$. Take \[ C = \spn_{K}\{x^{m}y^{n}\mid m,n\in\mathbb{N}\text{ and }r \nmid ms + n\}. \] Then $C+\langle y^{s}p(y^{r})\rangle_{K[x,y]}$ is a Mathieu subspace of $K[x,y]$. \end{theorem}
\begin{proof} Suppose that $f \in K[x,y]$ such that $f^n \in C+\langle y^{s}p(y^{r})\rangle_{K[x,y]}$ for all $n > 0$. Suppose first that $f^N \in \langle y^s p(y^r) \rangle_{K[x,y]}$ for some $N \in \mathbb{N}$. Then $$ g f^m \in \langle y^s p(y^r) \rangle_{K[x,y]} \subseteq C+\langle y^{s}p(y^{r})\rangle_{K[x,y]} ~\text{for all}~ g \in K[x,y] ~\text{and all}~ m \ge N, $$ in agreement with Definition \ref{mz}.
Suppose next that $f^n \notin \langle y^s p(y^r) \rangle_{K[x,y]}$ for all $n \in \mathbb{N}$. We will derive a contradiction. Take $q \mid y^s p(y^r)$ and $i \in \mathbb{N}$ as in Lemma \ref{lastform2}.
Suppose first that $q = y$. Then $f^r \in C + \langle y \rangle_{K[x,y]}$, so $$ \cfx{f^r}{ir} \in \spn_{K}\{y^n \mid n \in \mathbb{N}\text{ and }r \nmid irs + n\} + \langle y \rangle_{K[y]} = \langle y \rangle_{K[y]}. $$ From Lemma \ref{lastform2}, it follows that $y \mid \big(\cfx{f}{i}\big)^r$ and $y \nmid \cfx{f}{i}$, which is impossible.
Suppose next that $q = y^r - \lambda$ for some $\lambda \in \bar{K}^{*}$. Let $S = \{y,y^2,\ldots,y^{r-1}\}$. Then $f^{jr} \in C + \langle y^r - \lambda \rangle_{\bar{K}[x,y]}$, so \begin{align*} \cfx{f^{jr}}{ijr} &\in \spn_{\bar{K}}\{y^n \mid n \in \mathbb{N}\text{ and }r \nmid ijrs + n\} + \langle y^r - \lambda \rangle_{\bar{K}[y]} \\ &= \spn_{\bar{K}} S + \langle y^r - \lambda \rangle_{\bar{K}[y]}, ~\text{for all $j > 0$}. \end{align*} Let $h = \big(\cfx{f}{i}\big)^r$. From Lemma \ref{lastform2}, we deduce that $y^r - \lambda \mid \cfx{f^{jr}}{ijr} - h^j$ for all $j > 0$, so $h^j \in \spn_{\bar{K}} S + \langle y^r - \lambda \rangle_{\bar{K}[y]}$ for all $j > 0$. On account of Lemma \ref{lastform34}, $h^r \in \langle y^r - \lambda \rangle_{\bar{K}[y]}$. So $$ y^r - \lambda \mid h^r = \big(\cfx{f}{i}\big)^{r^2}. $$ As $\gcd\{y^r - \lambda, \allowbreak ry^{r-1}\} = 1$, it follows that $y^r - \lambda$ is square-free. So $y^r - \lambda \mid \cfx{f}{i}$. This contradicts Lemma \ref{lastform2}. \end{proof}
The proof of Theorem \ref{C+p} works for characteristic $> r$ as well. Theorem \ref{C+p} also holds if $r$ is a power of the characteristic of $K$, because $f^r \bmod y^sp(y^r) \in K[x^r,y^r]$ in that case. But Theorem \ref{C+p} does not hold if $r$ has a factor $u$ such that $r/u-1$ is a multiple of the characteristic of $K$. A counterexample can be constructed by taking $f = -(y^u+y^{2u}+\cdots+y^{r-u})$, using the fact that $y^r - 1 \mid f^2 - f$ under the given conditions.
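For instance, if $\operatorname{char} K = 2$, $r = 6$ and $u = 2$, then $f = y^{2}+y^{4}$ and $f^{2} = y^{4}+y^{8} \equiv y^{2}+y^{4} = f \pmod{y^{6}-1}$, so indeed $y^{6}-1 \mid f^{2}-f$.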
\section{Proof of Theorem~\ref{thmain}}
Let $A$ be a $K$-algebra. By a $K$-derivation of $A$ we mean a $K$-linear map $D:A\rightarrow A$ satisfying \begin{equation} D(ab)=aD(b)+D(a)b ~\text{for all $a,b\in A$.} \label{Leibniz} \end{equation} By a $K$-$\mathcal{E}$-derivation of $A$ we mean a $K$-linear map $\delta:A\rightarrow A$ such that $\phi = {\rm id}-\delta$ is an algebra endomorphism of $A$ (see~\cite{zhao2018}). Notice that for such $\delta$, \begin{equation} \delta(ab) = \delta(a)b+ a\delta(b) - \delta(a)\delta(b) ~\text{for all $a,b \in A$.} \label{ELeibniz} \end{equation} In literature, $\mathcal{E}$-derivations are also called skew derivations or $\phi$-derivations (with $\phi={\rm id}-\delta$) (see~\cite{Bresar2002,Kharchenko1992}).
Without causing misunderstanding, we will write $\mathcal{E}$-derivations instead of $K$-$\mathcal{E}$-derivations.
An $\mathcal{E}$-derivation $\delta$ of $A$ is locally finite if and only if the associated endomorphism $\phi={\rm id}-\delta$ is locally finite, because $\spn_K\{f,\delta(f),\delta^2(f),\ldots,\delta^j(f)\} = \spn_K\{f,\phi(f),\phi^2(f),\ldots,\phi^j(f)\}$ for all $f \in A$ and all $j \in \mathbb{N}$.
Let $D$ be a derivation or $\mathcal{E}$-derivation of $K[X]$. If we combine the $K$-linearity of $D$ with \eqref{Leibniz} or \eqref{ELeibniz} respectively, then we can infer that $D$ is uniquely determined by $D(x_1),D(x_2),\ldots,D(x_n)$, and that $D$ is locally finite if and only if $\spn_{K}\{x_{i},D(x_{i}),D^{2}(x_{i}),\ldots\}$ is finite dimensional for all $i=1,2,\ldots,n$.
\begin{lemma}\label{lemext} Let $L\subseteq K$ be a field extension, $D$ a $K$-$\mathcal{E} $-derivation (resp.\@ $K$-derivation) of $K[X]$ and $D_{L}$ an $L$-$\mathcal{E} $-derivation (resp.\@ $L$-derivation) of $L[X]$ such that $D(x_{i})=D_{L}(x_{i}) $ for $i=1,2,\ldots,n$. \begin{enumerate} \item $D$ is locally finite if and only if $D_{L}$ is locally finite;
\item If $\IM D$ is a Mathieu subspace of $K[X]$ then $\IM D_{L}$ is a Mathieu subspace of $L[X]$. \end{enumerate} \end{lemma}
\begin{proof} We may assume that $K[X]=K\otimes_{L}L[X]$. Then $D={\rm id}_{K}\otimes_{L} D_{L}$ and $\IM D=K\otimes_{L}\IM D_{L}$. It follows that \[ \spn_{K}\{x_{i},D(x_{i}),D^{2}(x_{i}),\dots\}=K\otimes_{L}\spn_{L} \{x_{i},D_{L}(x_{i}),D_{L}^{2}(x_{i}),\dots\} \] for $i=1,2,\ldots,n$. Thus $D$ is locally finite if and only if $D_{L}$ is locally finite, and so (1) follows. (2) follows from Lemma~\ref{lemtensor} below. \end{proof}
\begin{lemma} \,\cite[Lemma 2.5]{vanden2010}\label{lemtensor} Let $L\subseteq K$ be a field extension, $A$ an algebra over $L$, and $M$ an $L$-subspace of $A$. Assume that $K\otimes_{L}M$ is a Mathieu subspace of the $K$-algebra $K\otimes_{L}A$. Then $M$ is a Mathieu subspace of the $L$-algebra $A$. \end{lemma}
In~\cite{vanden2018} the LFED conjecture of a $K$-algebra is reduced to the case of a $\bar K$-algebra, where $\bar{K}$ is an algebraic closure of $K$. We now reduce the LFED conjecture of $K[X]$ to the case of $\mathbb{C}[X]$.
\begin{lemma}\label{LFEDC} If the LFED conjecture holds for $\mathbb{C}[X]$, then it holds for $K[X]$ over any field $K$ of characteristic $0$. \end{lemma}
\begin{proof} We prove this lemma by contraposition. Assume that $D$ is a counterexample to the LFED conjecture for $K[X]$. Then $D$ is either a locally finite $K$-derivation or a locally finite $K$-$\mathcal{E}$-derivation of $K[X]$, and $\IM D$ is not a Mathieu subspace of $K[X]$. So there exist $f,g\in K[X]$ and positive integers $m_{1}<m_{2}<\cdots$ such that $f^{m}\in\IM D$ for all $m>0$ and $f^{m_{i}}g\notin\IM D$ for all $i=1,2,\ldots$.
Let $L$ be the subfield of $K$ generated by the coefficients of $f,g$ and $D(x_{i})$ for $i=1,2,\ldots,n$. Then the restriction of $D$ to $L[X]$, denoted by $D_{L}$, is an $L$-($\mathcal{E}$\discretionary{-)}{}{-)}derivation of $L[X]$ since $D(x_{i})\in L[X]$. By the supposition, $\IM D_{L}$ is not a Mathieu subspace of $L[X]$.
Since $L$ is a finitely generated extension of $\mathbb{Q}$, there exist a subfield $L^{\prime}$ of $\mathbb{C}$ and an isomorphism $\sigma:L\to L^{\prime}$ by~\cite[Lemma 1.1.13]{essenbk}. The field isomorphism $\sigma$ can be extended to a ring isomorphism from $L[X]$ to $L^{\prime}[X]$ in a natural way, still denoted by $\sigma$, which is a semi-linear mapping relative to $\sigma$. It follows that $D_{L^{\prime}}:=\sigma D_{L}\sigma ^{-1}$ is an $L^{\prime}$-($\mathcal{E}$\discretionary{-)}{}{-)}derivation of $L^{\prime}[X]$ and $\IM D_{L^{\prime}}$ is not a Mathieu subspace of $L^{\prime}[X]$.
There exists a unique $\mathbb{C}$-($\mathcal{E}$\discretionary{-)}{}{-)}derivation of $\mathbb{C}[X]$, denoted by $D_{\mathbb{C}}$, such that $D_{\mathbb{C}}(x_{i})=D_{L^{\prime}}(x_{i})$ for $i=1,2,\ldots,n$. By Lemma~\ref{lemext}, $D_{\mathbb{C}}$ is locally finite just like $D$, and $\IM D_{\mathbb{C}}$ is not a Mathieu subspace of $\mathbb{C}[X]$. So the LFED conjecture does not hold for $\mathbb{C}[X]$.
\end{proof}
We conclude with the proof of Theorem~\ref{thmain}.
\begin{proof} [Proof of Theorem~\ref{thmain}]By Lemma~\ref{LFEDC}, we may assume $K=\mathbb{C}$. Let $\delta={\rm id}-\phi$, where $\phi$ is an endomorphism of $\mathbb{C}[x,y]$. The proof splits into seven cases according to Theorem~\ref{lem4forms}, due to the fact that $\IM\delta$ is a Mathieu subspace if and only if so is the image of $\sigma^{-1}\delta\sigma={\rm id}-\sigma^{-1}\phi\sigma$ for any automorphism $\sigma$ of $\mathbb{C}[x,y]$.
Case (\ref{lem4forms1}). This case follows from~\cite[Corollary 4.4]{tian}.
Case (\ref{lem4forms2}). Since $1=\delta(-y)\in\IM\delta$, we can see that $\IM\delta$ is a Mathieu subspace of $\mathbb{C}[x,y]$ by~\cite[Proposition 1.4]{zhao20181}.
Case (\ref{lem4forms3}). For all $m,n\in\mathbb{N}$, we have \[ \delta(x^{m}y^{n})=(1-b^{ms+n})x^{m}y^{n}-\sum_{i=1}^{m}\binom{m}{i} a^{i}b^{(m-i)s+n}x^{m-i}y^{is+n}. \] As $\delta(1) = 0$ and $s > 0$, $\IM\delta \subseteq \langle x,y \rangle$ follows.
Assume that $m \ne 0$ or $n \ne 0$. Then $b^{ms+n}\neq1$, since $ms + n > 0$ and $b$ is not a root of unity. Fix the lex order in $\mathbb{C}[x,y]$. Then \[ \LT(\delta(x^{m}y^{n}))=(1-b^{ms+n})x^{m}y^{n}. \] Since $\delta(\langle x,y \rangle)\subseteq\langle x,y \rangle$, by Lemma~\ref{lemmaeta} $\IM\delta=\langle x,y \rangle$. Thus $\IM\delta$ is a Mathieu subspace of $\mathbb{C}[x,y]$.
Case (\ref{lem4forms4}). This case follows from Corollary~\ref{lastform} and Theorem~\ref{C+p}.
Case (\ref{lem4forms5}). This case follows from~\cite[Proposition 6.8]{zhao2018}.
Case (\ref{lem4forms6}). For $m,n\in\mathbb{N}$, not both zero, we have \begin{numcases}{\delta(x^{m}y^{n})=}
x^{m}y^{n}, & if $n>0$,\label{eq1}\\
(1-\lambda^{m}) x^{m}+yf_{m}, & if $m>0$ and $n=0$,\label{eqnumcase2}
\end{numcases} for some $f_m\in \mathbb{C}[x,y]$. As $\delta(1) = 0$, $\IM\delta \subseteq \langle x,y \rangle$ follows. By (\ref{eq1}), we have \begin{equation}\label{eqyinim}
\langle y \rangle \subseteq\IM\delta. \end{equation}
Suppose first that $\lambda$ is not a root of unity. Then (\ref{eqnumcase2}) and (\ref{eqyinim}) yield $x^{m}\in\IM\delta$ for all $m>0$. Hence we get $\IM\delta=\langle x,y \rangle$. In particular, $\IM\delta$ is a Mathieu subspace of $\mathbb{C}[x,y]$.
Suppose next that $\lambda$ is an $r$th root of unity for some $r\in\mathbb{N}^*$. By induction on $i$, it follows that $\phi^i(x) = \lambda^i x + \lambda^{i-1}yg$ and $\phi^i(y) = 0$ for all $i \ge 1$. Hence $\phi^{r+1}(x) = \lambda x + y g$, and therefore $\phi^{r+1} = \phi$. So we can apply~\cite[Proposition 6.8]{zhao2018} again.
Case (\ref{lem4forms7}). Since \begin{align*} \delta\big({-\lambda^{-1}(x+yg)}\big) &= -\lambda^{-1}(x+yg) + \lambda^{-1}\big(\phi(x)+\phi(y)\phi(g)\big) \\ &= -\lambda^{-1}(x+yg) + \lambda^{-1}(x+\lambda+yg) = 1, \end{align*} $\IM\delta$ is a Mathieu subspace of $\mathbb{C}[x,y]$ by~\cite[Proposition 1.4]{zhao20181}. \end{proof}
\end{document} | arXiv |
Structural Scaffolds for Citation Intent Classification in Scientific Publications
This post is a paper summary highlighting the main ideas of the paper "Structural Scaffolds for Citation Intent Classification in Scientific Publications" by Cohan et al. (2019). arXiv Github
Machine reading and automated analysis of scientific literature have increasingly become important due to information overload. Citations are typically used to measure the impact of scientific publications (Li and Ho, 2008)1. Citation Intent Classification is the task of identifying why an author cited another paper. The automatic identification of citation intent could also help users in doing research. FIGURE 1 shows an example of two citation intents. Some citations indicate direct use of a method, while others may acknowledge prior work or compare methods or results. Existing models are based on hand-engineered features, which may not sufficiently model signals in the text (e.g. linguistic patterns or cue phrases). Recent advances in Natural Language Processing (NLP) have introduced large, contextual representations that are obtained from textual data without the need for manual feature engineering. Cohan et al. (2019)2 introduce a novel framework that incorporates structural knowledge of scientific papers into citation intent classification, as well as a new dataset of citation intents: SciCite.
Citation intent example. Source: Cohan et al. (2019)
SciCite is five times larger, contains fewer but more general categories, and covers scientific literature from more general domains than existing datasets such as ACL-ARC (Jurgens et al., 2018)3. FIGURE 2 compares the datasets. The papers for SciCite were sampled from the Semantic Scholar corpus. The authors chose more general categories as some are very rare and would not have been enough for training. Citations were extracted using science-parse. The ACL-ARC dataset, which consists of Computational Linguistics papers, was annotated by domain experts in NLP. The training set for SciCite was crowdsourced using the Figure Eight platform, while the test set was annotated by an expert annotator.
SciCite vs ACL-ARC. Source: Cohan et al. (2019)
The proposed neural multitask framework consists of a main task (Citation intent) and two structural scaffolds, or auxiliary tasks: section title and citation worthiness. The input $\textbf{x}$ is the set of tokens in the citation context, which are encoded by concatenating non-contextual word representations GloVe (Pennington et al., 2014)4 with contextualized embeddings ELMo (Peters et al., 2018)5 (Eq. 1).
$$\textbf{x}_i = [\textbf{x}_i^{\text {GloVe}}; \mathbf{x}_i^{\text {ELMo}}] \tag{1}$$
The encoded tokens then get fed into a bidirectional long short-term memory (Hochreiter and Schmidhuber, 1997)6 network with hidden size $d_2$, which results in the contextual representation of each token w.r.t. the entire sequence (Eq. 2).
$$\mathbf{h}_{i}=[\overrightarrow{\operatorname{LSTM}}(\mathbf{x}, i) ; \overleftarrow{\operatorname{LSTM}}(\mathbf{x}, i)] \tag{2}$$
Finally, an attention mechanism is added, which produces a vector representation of the input sequence (Eq. 3). $\textbf{w}$ is a trainable parameter that serves as the query vector for dot-product attention.
$$\mathbf{z}=\sum_{i=1}^{n} \alpha_{i} \mathbf{h}_{i}, \quad \alpha_{i}=\operatorname{softmax}\left(\mathbf{w}^{\top} \mathbf{h}_{i}\right) \tag{3}$$
Structural Scaffolds
The citation worthiness task is to predict whether a sentence needs a citation. The hypothesis is that language in sentences with citations is different from regular sentences in scientific work. Sentences with citations are positive samples and sentences without citation markers are negative samples. This task could also be used in a different setting (e.g. paper draft aid).
The section title task is to predict the title of the section in which the citation appears. The hypothesis here is that citation intent is relevant to its section. Contrary to the other tasks, the authors use a large number of scientific papers to generate the training data for this task.
In the multitask framework, a Multi-Layer Perceptron (MLP) followed by a softmax layer is used for each task (Eq. 4). The class with the highest probability is then chosen.
$$\mathbf{y}^{(i)}=\operatorname{softmax}\left(\mathrm{MLP}^{(i)}(\mathbf{z})\right) \tag{4}$$
FIGURE 3 shows an overview of the proposed model.
Model overview. Source: Cohan et al. (2019)
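To make the architecture more concrete, below is a minimal PyTorch sketch of the model in Eqs. 1–4. It is not the authors' implementation (their code is available via the GitHub link at the top of this post); the class and layer names, the hidden size, and the assumption that the input is already the concatenated GloVe + ELMo vectors of Eq. 1 are all illustrative choices.

```python
import torch
import torch.nn as nn


class ScaffoldModel(nn.Module):
    """BiLSTM encoder with dot-product attention and one output head per task."""

    def __init__(self, emb_dim, hidden_dim, n_intents, n_sections):
        super().__init__()
        self.encoder = nn.LSTM(emb_dim, hidden_dim,
                               batch_first=True, bidirectional=True)
        # Attention query vector w from Eq. 3
        self.w = nn.Parameter(torch.randn(2 * hidden_dim))
        self.intent_head = nn.Linear(2 * hidden_dim, n_intents)    # main task
        self.section_head = nn.Linear(2 * hidden_dim, n_sections)  # scaffold: section title
        self.worthiness_head = nn.Linear(2 * hidden_dim, 2)        # scaffold: citation worthiness

    def forward(self, x):
        # x: (batch, seq_len, emb_dim) -- concatenated GloVe + ELMo vectors (Eq. 1)
        h, _ = self.encoder(x)                     # contextual token states (Eq. 2)
        alpha = torch.softmax(h @ self.w, dim=1)   # attention weights (Eq. 3)
        z = (alpha.unsqueeze(-1) * h).sum(dim=1)   # attended sentence vector z
        # Per-task logits; softmax / cross-entropy is applied per task (Eq. 4)
        return self.intent_head(z), self.section_head(z), self.worthiness_head(z)
```

The three heads share the encoder and attention parameters, so the scaffold tasks influence the model only through this shared representation; during training the per-task cross-entropy losses are combined (in the paper the scaffold losses are weighted by tunable hyperparameters).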
For the citation worthiness task, citation markers were removed in order for the model to not "cheat" by simply recognizing citations in sentences. For the section title task, citations and their contexts were sampled from the corpus. Section titles were normalized with regular expressions to the following general categories: introduction, related work, method, and experiments. Titles that did not map to these were removed. The table below shows the total number of instances for each of the datasets and tasks.
| Scaffold task | ACL-ARC | SciCite |
| --- | --- | --- |
| Citation worthiness | 50k | 73k |
| Section title | 47k | 90k |
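As a rough illustration of the section-title normalization step described above, the mapping to the four general categories could be done with a handful of regular expressions. The exact patterns the authors used are not given here, so the ones below are purely illustrative assumptions:

```python
import re

# Illustrative patterns; the authors' exact regular expressions are not given.
SECTION_PATTERNS = [
    (re.compile(r"introduction", re.I), "introduction"),
    (re.compile(r"related work|background", re.I), "related work"),
    (re.compile(r"method|model|approach", re.I), "method"),
    (re.compile(r"experiment|evaluation|result", re.I), "experiments"),
]


def normalize_section_title(title):
    """Map a raw section title to one of the four general categories."""
    for pattern, label in SECTION_PATTERNS:
        if pattern.search(title):
            return label
    return None  # titles that map to no category are dropped


print(normalize_section_title("5. Experimental Results"))  # -> "experiments"
```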
The proposed model, trained with standard hyperparameters, is compared to a strong baseline and a state-of-the-art model. The first baseline is a BiLSTM with an attention mechanism (with and without using ELMo embeddings) that only optimizes for the citation intent classification task. This is meant to show whether the structural scaffolds and contextual embeddings, in fact, improve performance. The second baseline is the model used by Jurgens et al. (2018)3, which had the best-reported results on ACL-ARC. Jurgens et al. incorporate a diverse set of features (e.g. pattern-based, topic-based, prototypical argument) and train a Random Forest classifier.
The results on both the ACL-ARC (FIGURE 4) and SciCite (FIGURE 5) datasets indicate that the inclusion of structural scaffolds improves performance over all of the baselines. The performance differences between the two datasets are partly due to the different dataset sizes. Each auxiliary task contributes slightly over the baseline, while the combination of both tasks shows a large improvement on ACL-ARC and a marginal improvement on SciCite. The addition of contextual embeddings further increases performance by about 5% macro F1 on both datasets (including on the baselines).
Results on ACL-ARC. Source: Cohan et al. (2019)
Results on SciCite. Source: Cohan et al. (2019)
FIGURE 6 shows an example sentence from ACL-ARC for which the correct label is Future Work. The best proposed model predicts this correctly, attending over more of the context, while the baseline predicts Compare. The attention is stronger on "compare" for the baseline, ignoring the context of its use.
Example from ACL-ARC. Source: Cohan et al. (2019)
When looking at each of the categories independently, categories with more instances show higher F1 scores on both datasets (FIGURE 7 and 8). Recall seems to suffer from a limited number of training instances.
Per category classification results on ACL-ARC. Source: Cohan et al. (2019)
Because the categories in SciCite are more general, there are more training instances for each. The recall on this dataset is accordingly higher.
Per category classification results on SciCite. Source: Cohan et al. (2019)
Cohan et al. (2019)2 show that structural properties of scientific literature can be useful for citation intent classification. The authors argue that relevant auxiliary tasks can help improve performance in multitask learning. The main contributions of this work are the following:
A new scaffold framework for citation intent classification.
A new state-of-the-art of 67.9% F1 on ACL-ARC (an increase of 13.3%).
A new dataset, SciCite, of citation intents, which is 5x the size of current datasets.
While the current work uses ELMo, Beltagy et al. (2019)7 show that incorporating SciBERT, a BERT language model pretrained on scientific text, increases performance further. A possible extension could be to adapt the model to other domains (e.g. Wikipedia).
SciBERT: A Pretrained Language Model for Scientific Text - Beltagy et al., 2019
ScispaCy: Fast and Robust Models for Biomedical Natural Language Processing - Neumann et al., 2019
Thanks to Elvis for the review.
Zhi Li and Yuh-Shan Ho. 2008. Use of citation per publication as an indicator to evaluate contingent valuation research. Scientometrics. ↩︎
Arman Cohan, Waleed Ammar, Madeleine van Zuylen, and Field Cady. 2019. Structural scaffolds for citation intent classification in scientific publications. In NAACL-HLT, pages 3586–3596, Minneapolis, Minnesota. Association for Computational Linguistics. ↩︎
David Jurgens, Srijan Kumar, Raine Hoover, Dan McFarland, and Dan Jurafsky. 2018. Measuring the evolution of a scientific field through citation frames. TACL, 6:391–406. ↩︎
Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In EMNLP, pages 1532–1543. ↩︎
Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke S. Zettlemoyer. 2018. Deep contextualized word representations. In NAACL-HLT. ↩︎
Sepp Hochreiter and Jurgen Schmidhuber. 1997. Long short-term memory. Neural Computation. ↩︎
Iz Beltagy, Arman Cohan, and Kyle Lo. 2019. SciBERT: A Pretrained Language Model for Scientific Text. CoRR, abs/1903.10676. ↩︎
Fundam. Prikl. Mat., 2005, Volume 11, Issue 6, Pages 195–208 (Mi fpm893)
This article is cited in 5 scientific papers.
On Gauss–Kuz'min statistics for finite continued fractions
A. V. Ustinov
M. V. Lomonosov Moscow State University
Abstract: The article is devoted to finite continued fractions for numbers $a/b$ when integer points $(a,b)$ are taken from a dilative region. Properties similar to the Gauss–Kuz'min statistics are proved for these continued fractions.
Journal of Mathematical Sciences (New York), 2007, 146:2, 5771–5781
UDC: 511.37+511.336
Citation: A. V. Ustinov, "On Gauss–Kuz'min statistics for finite continued fractions", Fundam. Prikl. Mat., 11:6 (2005), 195–208; J. Math. Sci., 146:2 (2007), 5771–5781
http://mi.mathnet.ru/eng/fpm893
http://mi.mathnet.ru/eng/fpm/v11/i6/p195
A. V. Ustinov, "Calculation of the variance in a problem in the theory of continued fractions", Sb. Math., 198:6 (2007), 887–907
A. V. Ustinov, "Asymptotic behaviour of the first and second moments for the number of steps in the Euclidean algorithm", Izv. Math., 72:5 (2008), 1023–1059
A. V. Ustinov, "Spin chains and Arnold's problem on the Gauss-Kuz'min statistics for quadratic irrationals", Sb. Math., 204:5 (2013), 762–779
A. V. Ustinov, "Three-dimensional continued fractions and Kloosterman sums", Russian Math. Surveys, 70:3 (2015), 483–556
Deroin B., Kleptsyn V., Navas A., "On the Ergodic Theory of Free Group Actions By Real-Analytic Circle Diffeomorphisms", Invent. Math., 212:3 (2018), 731–779
\begin{document}
\begin{frontmatter}
\title{Multistable L\'{e}vy motions and their continuous approximations} \author{Xiequan Fan\ \ \ \ \ \ Jacques L\'{e}vy V\'{e}hel$\,^*$}
\cortext[cor1]{\noindent Corresponding author. \\ \mbox{\ \ \ \ }\textit{E-mail}: [email protected] (X. Fan), \ \ \ \ \ [email protected] (J. L\'{e}vy V\'{e}hel). } \address{Regularity Team, Inria and MAS Laboratory, Ecole Centrale Paris - Grande Voie des Vignes,\\ 92295 Ch\^{a}tenay-Malabry, France}
\begin{abstract} Multistable L\'{e}vy motions are extensions of L\'{e}vy motions where the stability index is allowed to vary in time. Several constructions of these processes have been introduced recently, based on Poisson and Ferguson-Klass-LePage series representations and on multistable measures. In this work, we prove a functional central limit theorem for the independent-increments multistable L\'{e}vy motion, as well as of integrals with respect to these processes, using weighted sums of independent random variables. This allows us to construct continuous approximations of multistable L\'{e}vy motions. In particular, we prove that multistable L\'{e}vy motions are stochastic H\"{o}lder continuous and strongly localisable. \end{abstract}
\begin{keyword} (strong) localisability; multistable process; stochastic H\"{o}lder continuous; stable process; continuous approximation.
\MSC Primary 60G18, 60G17; Secondary 60G51, 60G52. \end{keyword}
\end{frontmatter}
\section{Introduction} Recall that a stochastic process $\{L(t), t \geq 0\}$ is called (standard) $\alpha-$stable L\'{e}vy motion if the following three conditions hold:\\ (C1) $L(0)=0$ almost surely;\\ (C2) $L$ has independent increments;\\ (C3) $L(t)-L(s)\sim S_\alpha( (t-s)^{1/\alpha}, \beta, 0)$ for any $0\leq s < t$ and for some $0< \alpha \leq 2, -1\leq \beta \leq 1$. Here $S_\alpha(\sigma, \beta, 0)$ stands for a stable random variable with index of stability $\alpha$, scale parameter $\sigma$, skewness parameter $\beta$ and shift parameter equal to 0. Recall that $\alpha$ governs the intensity of jumps.
Such processes have stationary increments, and they are $1/\alpha-$self-similar, that is, for all $c>0,$ the processes $\{L(c\,t), t\geq 0\}$ and $\{c^{1/\alpha}L(t), t\geq 0\}$ have the same finite-dimensional distributions. An $\alpha-$stable L\'{e}vy motion is symmetric when $\beta=0$. Stable L\'{e}vy motions, and, more generally, stable processes have been the subject of intense activity in recent years, both on the theoretical side (see, e.g. \cite{ST94}) and in applications \cite{N12}. However, the stationary property of their increments restricts their use in some situations, and generalizations are needed for instance to model real-world phenomena such as financial records, epileptic episodes in EEG or internet traffic. A significant feature in these cases is that the ``local intensity of jumps'' varies with time $t$. A way to deal with such a variation is to set up a class of processes whose stability index $\alpha$ is a function of $t$. More precisely, one aims at defining processes with non-stationary increments which are, at each time $t$, ``tangent'' (in a certain sense explained below) to a stable process with stability index $\alpha(t)$.
Formally, one says that a stochastic process $\{X(t), t \in [0, 1]\}$ is {\it multistable} \cite{FL09} if, for almost all $t\in [0,1)$, $X$ is \emph{localisable} at $t$ with tangent process $X_t '$ an $\alpha(t)-$stable process. Recall that $\{X(t), t \in [0, 1]\}$ is said to be $h-$localisable at $t$ (cf.\ \cite{F,F2}), with $h>0$, if there exists a non-trivial process $X_t '$, called the tangent process of $X$ at $t$, such that \begin{eqnarray}\label{indis} \lim_{r\searrow 0} \frac{X(t+ru)-X(t)}{r^h} = X_t'(u), \end{eqnarray} where convergence is in finite dimensional distributions.
Let $D[0, 1]$ be the set of c\`{a}dl\`{a}g functions on $[0, 1],$ that is functions which are continuous on the right and have left limits at all $t \in [0, 1],$ endowed with the Skorohod metric $d_S$ \cite{B68}. If $X$ and $X_t'$ have versions in $D[0, 1]$ and convergence in (\ref{indis}) is in distribution with respect to $d_S$, one says that $X$ is $h-$\emph{strongly localisable} at $t$ with strong local form $X_t'$.
In this work, we will be concerned with the simplest non-trivial multistable processes, namely multistable L\'evy motions (MsLM), which are extensions of stable L\'{e}vy motions with non-stationary increments. Two such extensions exist \cite{FL09,FL12}: \begin{enumerate} \item The {\it field-based} MsLM admit the following series representation: \begin{equation} \label{FBP} L_{F}(t) = C_{ \alpha(t)}^{1/ \alpha(t)}\sum_{(\X,\Y)\in\Pi} {\mathbf{1}}_{[0,t]}(\X)\Y^{<-1/\alpha(t)>} \quad \quad (t \in [0,T]), \end{equation} where $\Pi$ is a Poisson point process on $[0,T] \times \mathbb{R}$ with mean measure the Lebesgue measure $\mathcal{L}$,
$a^{<b>} := \mbox{sign}(a)|a|^{b}$ and \begin{equation}\label{calphax} C_{u}= \left( \int_{0}^{\infty} x^{-u} \sin (x)dx \right)^{-1}. \end{equation} Their joint characteristic function reads: \begin{equation}\label{FBCF} \mathbb{E} \exp \left\{i \sum_{j=1}^{d}\limits \theta_j L_{F}(t_j) \right\} = \exp \left\{ -2 \int_{[0,T]} \int_{0}^{+ \infty} \sin^2\Bigg( \sum_{j=1}^{d} \theta_j \frac{C_{\alpha(t_j)}^{1/\alpha(t_j)}}{2y^{1/\alpha(t_j)}} \mathbf{1}_{[0,t_j]}(x) \Bigg)\hspace{0.1cm} dy \hspace{0.1cm} dx \right\} \end{equation} for $ d \in \mathbb{N}, (\theta_1, \ldots, \theta_d) \in \mathbb{R}^d$ and $(t_1, \ldots , t_d) \in \mathbb{R}^d.$ These processes have correlated increments, and they are localisable as soon as the function $\alpha$ is H\"{o}lder-continuous. \item The {\it independent-increments} MsLM admit the following series representation: \begin{equation}\label{PRLII} L_{I}(t)=\sum_{(\X,\Y)\in \Pi}C_{\alpha(\X)}^{1/ \alpha(\X)}\mathbf{1}_{[0,t]}(\X)\Y^{<-1/\alpha(\X)>} \quad \quad (t \in [0,T]). \end{equation} As their name indicates, they have independent increments, and their joint characteristic function reads: \begin{eqnarray}
\mathbb{E} \exp\left\{ i \sum_{j=1}^d \theta_j L_{I }(t_j) \right\} =\exp \left\{- \int \Big| \sum_{j=1}^d \theta_j \mathbf{1}_{[0, \ t_j] }(s)\Big|^{\alpha(s)} ds \right\}, \label{hjdns} \end{eqnarray} for $ d \in \mathbb{N}, (\theta_1, \ldots, \theta_d) \in \mathbb{R}^d$ and $(t_1, \ldots , t_d) \in \mathbb{R}^d.$ These processes are localisable as soon as the function $\alpha$ verifies: \begin{eqnarray}\label{coalph} \Big(\alpha (x)-\alpha (x+t) \Big)\ln t \rightarrow 0 \end{eqnarray} uniformly for all $x$ in finite interval as $t\searrow0$ \cite{FL12}. \end{enumerate} Of course, when $\alpha(t)$ is a constant $\alpha$ for all $t$, both $L_{F}$ and $L_I$ are simply the Poisson representation of $\alpha-$stable L\'evy motion, that we denote by $L_{\alpha}$. In general, $L_F$ and $L_I$ are semi-martingales \cite{GLL13b}. For more properties of $L_F$, such as Ferguson-Klass-LePage series representations and H\"{o}lder exponents, we refer to \cite{FLV09,GLL12,GLL13a}.
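Note that condition (\ref{coalph}) holds, for instance, as soon as $\alpha$ is H\"{o}lder-continuous: if $|\alpha(x)-\alpha(x+t)| \leq c\, t^{\eta}$ for some constants $c>0$ and $\eta \in (0,1]$, then $\big|\alpha(x)-\alpha(x+t)\big|\, |\ln t| \leq c\, t^{\eta} |\ln t| \rightarrow 0$ as $t\searrow 0$, uniformly in $x$.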
In this paper, we prove a functional central limit theorem for independent-increments MsLM: we show that certain weighted sums of independent random variables converge in $(D[0,1],d_S)$ to $L_I.$ This allows us to obtain strong localisability of these processes. Moreover, we establish continuous approximations of MsLM and an alternative representation for the integrals of multistable L\'{e}vy measure.
Some properties of the integrals of multistable L\'{e}vy measure are investigated. In particular, we prove that MsLM are stochastic H\"{o}lder continuous and strongly localisable.
The paper is organized as follows. In Section \ref{secmlevy}, we present the functional central limit theorem for independent-increments MsLM. In Section \ref{contapp}, we establish continuous approximations of MsLM. In the last section, we give a representation of MsLM and investigate some properties, including stochastic H\"{o}lder continuous and strongly localisable, of the integrals of multistable L\'{e}vy measure.
\section{Functional Central Limit Theorems for Multistable L\'{e}vy Motions}\label{secmlevy} We show in this section how to approximate the independent-increments MsLM in law by weighted sums of independent random variables. \begin{theorem}\label{fhnk} Let $(\alpha_n(u))_n,\alpha(u), u \in [0,1],$ be a class of c\`{a}dl\`{a}g functions ranging in $[a,b] \subset (0,2]$ such that the sequence $(\alpha)_n$ tends to $\alpha$ in the uniform metric. Let $\big( X(k,n) \big)_{n \in \mathbb{N}, \ k=1,...,2^n} $ be a family of independent and symmetric $\alpha_n(\frac{k}{2^n})-$stable random variables with unit scale parameter, i.e., $X(k,n) \sim S_{\alpha_n(\frac{k}{2^n}) }(1, 0 , 0 )$. Then \begin{itemize} \item the sequence of processes \begin{eqnarray}\label{multilevyI} L^{(n)}_{I}(u) = \sum_{k=1}^{\lfloor 2^n u \rfloor} \Big(\frac{1}{2^n}\Big)^{1/\alpha_n(\frac{k}{2^n})} X(k,n) , \ \ \ \ \ \ \ \ \ \ u \in [0,1], \end{eqnarray} tends in distribution to $L_I(u)$ in $(D[0,1],d_S),$ where $\lfloor x \rfloor$ is the largest integer smaller than or equal to $x$. In particular, if $\alpha$ satisfies condition (\ref{coalph}), then $L_I(u)$ is localisable at all times. \item the sequence of processes \begin{eqnarray}\label{multilevyII} L^{(n)}_{R}(u) = \sum_{k=1}^{\Gamma_{\lfloor 2^n u \rfloor}} \Big(\frac{1}{2^n}\Big)^{1/\alpha_n(\frac{k}{2^n})} X(k,n) , \ \ \ \ \ \ \ \ \ \ u \in [0,1], \end{eqnarray} tends in distribution to $L_I(u)$ in $(D[0,1],d_S),$ where $(\Gamma_i)_{i\geq 1}$ is a sequence of arrival times of a Poisson process with unit arrival rate and is independent of $\big( X(k,n) \big)_{n \in \mathbb{N}, \ k=1,...,2^n}$. \item the sequence of processes \begin{eqnarray}\label{multilevyC} L^{(n)}_{C}(u) = \sum_{k=1}^{\lfloor 2^n u \rfloor} \Big(\frac{1}{\Gamma_{2^n}}\Big)^{1/\alpha_n(\frac{k}{2^n})} X(k,n) , \ \ \ \ \ \ \ \ \ \ u \in [0,1], \end{eqnarray} tends in distribution to $L_I(u)$ in $(D[0,1],d_S).$ \end{itemize} \end{theorem} \noindent\emph{Proof.} We prove the first claim by the following three steps.
First, we prove that $L^{(n)}_{I}(u) $ converges to $L_{I}(u)$ in finite dimensional distribution. For any $u_1,u_2 \in [0,1]$ and $u_2> u_1,$ we have, for any $\theta \in \mathbb{R},$ \begin{eqnarray} \lim_{n \rightarrow \infty} \mathbb{E} e^{i\theta \big(L^{(n)}_{I}( u_2) - L^{(n)}_{I}(u_1) \big ) }
&=& \lim_{n \rightarrow \infty} \exp\left\{ -\sum_{k= \lfloor2^nu_{1}\rfloor+1}^{\lfloor2^nu_{2}\rfloor} \frac{1}{\ 2^n} |\theta|^{\alpha_n(\frac{k }{2^n})} \right\}. \label{chfunction} \end{eqnarray} Notice that \begin{eqnarray}
\sum_{k= \lfloor2^nu_{1}\rfloor+1}^{\lfloor2^nu_{2}\rfloor} \frac{1}{\ 2^n} \Big| |\theta|^{\alpha_n(\frac{k }{2^n})} -|\theta|^{\alpha (\frac{k }{2^n})} \Big| &\leq& |\theta|^\tau \big|\log |\theta|\big| \sum_{k= \lfloor2^nu_{1}\rfloor+1}^{\lfloor2^nu_{2}\rfloor} \frac{1}{\ 2^n} \Big| \alpha_n\Big(\frac{k }{2^n}\Big) -\alpha \Big(\frac{k }{2^n}\Big) \Big| \nonumber \\
&\leq& \Big|\Big| \alpha_n(\cdot) -\alpha (\cdot) \Big|\Big|_{\infty} |\theta|^\tau \big|\log |\theta|\big|
\frac{\lfloor2^nu_{2}\rfloor - \lfloor2^nu_{1}\rfloor }{\ 2^n} , \label{gbsd23} \end{eqnarray}
where $\tau=a\mathbf{1}_{[0,\, 1 )}(|\theta|)+ b\mathbf{1}_{[1,\, \infty)} (|\theta|).$ By hypothesis, we have $$ \lim_{n\rightarrow \infty} || \alpha_n(\cdot) -\alpha (\cdot) ||_{\infty}=0.$$ Thus inequality (\ref{gbsd23}) implies that \begin{eqnarray*}
\lim_{n \rightarrow \infty} \sum_{k= \lfloor2^nu_{1}\rfloor+1}^{\lfloor2^nu_{2}\rfloor} \frac{1}{\ 2^n} |\theta|^{\alpha_n(\frac{k }{2^n})}& = &
\lim_{n \rightarrow \infty} \sum_{k= \lfloor2^nu_{1}\rfloor+1}^{\lfloor2^nu_{2}\rfloor} \frac{1}{\ 2^n} |\theta|^{\alpha(\frac{k }{2^n})} \\
& = & \int_ { u_1}^ {u_2 } |\theta|^{\alpha (s)} ds . \end{eqnarray*} From (\ref{chfunction}), it follows that \begin{eqnarray} \lim_{n \rightarrow \infty} \mathbb{E} e^{i\theta \big(L^{(n)}_{I}( u_2) - L^{(n)}_{I}(u_1) \big ) }
&=& \exp\left\{ - \int_ { u_1}^ {u_2 } |\theta|^{\alpha (s)} ds \right\}. \label{hjkls} \end{eqnarray} Hence $ L^{(n)}_{I}( u_2) - L^{(n)}_{I}(u_1) $ converges in distribution and the characteristic function of its limit is defined by (\ref{hjkls}). Since $L^{(n)}_{I}(u) $ has independent increments, the limit of $L^{(n)}_{I}( u)$ has the joint characteristic function (\ref{hjdns}), i.e., $L^{(n)}_{I}( u)$ converges to $L_{I}( u)$ in finite dimensional distribution.
Second, we prove that $L^{(n)}_{I}( u)$ converges to $L_{I}( u)$ in $(D[0,1],d_S)$. By Theorem 15.6 of Billingsley \cite{B68}, it suffices to show that \begin{eqnarray}\label{sfffdf}
\mathbb{P}\Big( \Big| L^{(n)}_{I}(u)-L^{(n)}_{I}(u_1)\Big|\geq \lambda,\ \Big|L^{(n)}_{I}(u_2)-L^{(n)}_{I}(u)\Big|\geq \lambda \Big) \leq \frac{C}{\lambda^{2\gamma } } \Big[ u_2 - u_1 \Big]^2 \end{eqnarray} for $u_1 \leq u \leq u_2, \lambda> 0$ and $n\geq 1$, where $\gamma = a\mathbf{1}_{[2,\, \infty)} (\lambda)+ b\mathbf{1}_{(0,\, 2 )}(\lambda)$ and $C$ is a constant depending only on $a$ and $b.$ If $u_2-u_1 < 1/2^n$, then either $L^{(n)}_{I}(u_2)=L^{(n)}_{I}(u)$ or $ L^{(n)}_{I}(u)=L^{(n)}_{I}(u_1)$; in either of these cases the left side of (\ref{sfffdf}) vanished. Next, we consider the case of $u_2-u_1 \geq 1/2^n.$ Since $L^{(n)}_{I}(u)-L^{(n)}_{I}(u_1)$ and $L^{(n)}_{I}(u_2)-L^{(n)}_{I}(u)$ are independent, it follows that \begin{eqnarray*}
&&\mathbb{P}\Big( \Big| L^{(n)}_{I}(u)-L^{(n)}_{I}(u_1)\Big|\geq \lambda,\ \Big|L^{(n)}_{I}(u_2)-L^{(n)}_{I}(u)\Big|\geq \lambda \Big)=
\\ && \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \mathbb{P}\Big( \Big| L^{(n)}_{I}(u)-L^{(n)}_{I}(u_1)\Big|\geq \lambda\Big) \ \mathbb{P}\Big(\Big|L^{(n)}_{I}(u_2)-L^{(n)}_{I}(u)\Big|\geq \lambda \Big). \end{eqnarray*} Then, by the Billingsley inequality (cf. p.\ 47 of \cite{B68}), it is easy to see that \begin{eqnarray*}
\mathbb{P}\Big( \Big| L^{(n)}_{I}(u)-L^{(n)}_{I}(u_1)\Big|\geq \lambda \Big) &\leq& \frac{\lambda}{2} \int_{-2/\lambda}^{2/\lambda } \Bigg( 1- \mathbb{E}e^{i\theta \big(L^{(n)}_{I}(u)-L^{(n)}_{I}(u_1) \big)}\Bigg) d\theta \\
&=& \frac{\lambda}{2} \int_{-2/\lambda}^{2/\lambda } \Bigg( 1- \exp\Bigg\{- \sum_{k=\lfloor2^nu_1\rfloor+1}^{\lfloor2^nu\rfloor} \frac{1}{2^n} |\theta |^{\alpha_n(\frac{k}{2^n})} \Bigg\}\Bigg) d\theta \\
&\leq& \frac{\lambda}{2} \int_{-2/\lambda}^{2/\lambda } \sum_{k=\lfloor2^nu_1\rfloor+1}^{\lfloor2^nu\rfloor} \frac{1}{2^n} |\theta |^{\alpha_n(\frac{k}{2^n})} d\theta \\
&\leq&\sum_{k=\lfloor2^nu_1\rfloor+1}^{\lfloor2^nu\rfloor} \frac{1}{2^n} \frac{\lambda}{2} \int_{-2/\lambda}^{2/\lambda } \Big| \theta\Big|^{\gamma} \, d\theta \\ &\leq&\frac{C_1}{\lambda^{\gamma} } \Bigg[ \frac{\lfloor2^nu\rfloor- \lfloor2^nu_1\rfloor}{2^n} \Bigg], \end{eqnarray*} where $C_1$ is a constant depending only on $a$ and $b.$
Similarly, it holds \begin{eqnarray}
\mathbb{P}\Big( \Big|L^{(n)}_{I}(u_2)-L^{(n)}_{I}(u)\Big|\geq \lambda \Big) \leq \frac{C_2}{\lambda^{\gamma} }\Bigg[ \frac{\lfloor2^n u_2\rfloor- \lfloor2^n u\rfloor}{2^n} \Bigg] , \end{eqnarray} where $C_2$ is a constant depending only on $a$ and $b$. Using the inequality $xy \leq (x+y)^2/4$ for all $x, y \geq0,$ we deduce \begin{eqnarray*}
&& \mathbb{P}\Big( \Big| L^{(n)}_{I}(u)-L^{(n)}_{I}(u_1)\Big|\geq \lambda,\ \Big|L^{(n)}_{I}(u_2)-L^{(n)}_{I}(u)\Big|\geq \lambda \Big) \\ &\leq& \frac{C_1C_2}{\lambda^{ 2\gamma } } \Bigg[ \frac{\lfloor2^nu\rfloor- \lfloor2^nu_1\rfloor}{2^n} \Bigg]\Bigg[ \frac{\lfloor2^nu_2\rfloor- \lfloor2^nu\rfloor}{2^n} \Bigg] \\ & \leq& \frac{C_1C_2}{4} \frac{1}{\lambda^{ 2\gamma } } \Bigg[ \frac{\lfloor2^nu_2\rfloor- \lfloor2^nu_1\rfloor}{2^n} \Bigg]^2\\ & \leq& C_1C_2 \frac{1}{\lambda^{ 2\gamma } } \Big[ u_2- u_1 \Big]^2, \end{eqnarray*} where the last line follows from the fact that \begin{eqnarray*}
\frac{\lfloor2^nu_2\rfloor- \lfloor2^nu_1\rfloor}{2^n}
\ \leq \ \frac{2^nu_2- 2^nu_1+ 1}{2^n} \ \leq\ 2\Big[ u_2- u_1 \Big]. \end{eqnarray*} This completes the proof of (\ref{sfffdf}).
Third, we prove that if $\alpha$ satisfies condition (\ref{coalph}), then $L_{I}( u)$ is localisable at all times. Falconer and Liu (cf.\ Theorem 2.7 of \cite{FL12}) have proved that the process $L_{I}( u)$, defined by the joint characteristic function (\ref{hjdns}), is localisable at $u$ to L\'{e}vy motions $L_{\alpha (u)}(\cdot)$ with the stability index $\alpha (u)$. Here we give another proof to complete our argument. For any $(t_1, ..., t_d) \in [0,1]^d,$ from equality (\ref{hjdns}), it is easy to see that \begin{eqnarray} && \mathbb{E} \exp\left\{ i \sum_{j=1}^d \theta_j \left( \frac{L_{I}(u+rt_j)- L_{I}(u )}{ r^{1/\alpha(u)} } \right) \right\} \nonumber\\
&&\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ =\ \exp \left\{ - \int \Big| \sum_{j=1}^d \theta_jr^{-1/\alpha(u)} \mathbf{1}_{[u,\ u+rt_j] }(s)\Big|^{\alpha (s)} ds \right\}. \nonumber \end{eqnarray} Setting $s=u+rt$, we find that \begin{eqnarray} &&\mathbb{E} \exp\left\{ i \sum_{j=1}^d \theta_j \left( \frac{L_{I}(u+rt_j)- L_{I}(u )}{ r^{1/\alpha(u)} } \right) \right\} \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \nonumber\\
&&\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ =\ \exp \left\{ - \int \Big| \sum_{j=1}^d \theta_j \mathbf{1}_{[0,\ t_j] }(t)\Big|^{\alpha (u+rt)} r^{(\alpha (u)-\alpha (u+rt))/\alpha (u)} dt \right\}. \nonumber \end{eqnarray} By condition (\ref{coalph}), it follows that \begin{eqnarray}\label{sdfdsdf} \lim_{r\searrow 0} r^{(\alpha (u)-\alpha (u+rt))/\alpha (u)}=1 \ \ \ \ \ \textrm{and} \ \ \ \ \ \lim_{r\searrow 0} \alpha(u+rt)=\alpha(u). \end{eqnarray} Hence, using dominated convergence theorem, we have \begin{eqnarray}
\lim_{r\searrow 0}\mathbb{E} \exp\left\{ i \sum_{j=1}^d \theta_j \left( \frac{L_{I}(u+rt_j)- L_{I}(u )}{ r^{1/\alpha(u)} } \right) \right\}
&=& \exp \left\{ - \int \Big| \sum_{j=1}^d \theta_j \mathbf{1}_{[0, \ t_j] }(t)\Big|^{\alpha (u)} dt \right\} \nonumber \\ &=& \mathbb{E} \exp\left\{ i \sum_{j=1}^d \theta_jL_{\alpha (u)}(t_j ) \right\} , \nonumber \end{eqnarray} which means that $L_{I}( u)$ is localisable at $u$ to an $\alpha (u)-$stable L\'{e}vy motion $L_{\alpha (u)} (t).$ This completes the proof of the first claim of the theorem.
Next, we prove the second claim of the theorem. For any $u_1,u_2 \in [0,1]$ and $u_2> u_1,$ it is easy to see that, for any $\theta \in \mathbb{R},$ \begin{eqnarray} \lim_{n \rightarrow \infty} \mathbb{E} e^{i\theta (L^{(n)}_{R}( u_2) - L^{(n)}_{R}(u_1) ) }
&=& \lim_{n \rightarrow \infty} \mathbb{E} \exp\left\{ -\sum_{k= \Gamma_{\lfloor2^nu_1\rfloor}+1}^{\Gamma_{\lfloor2^nu_2\rfloor}} \frac{1}{\ 2^n} |\theta|^{\alpha_n(\frac{k }{2^n})} \right\} \nonumber\\
&=& \exp\left\{ - \int_ { u_1}^ {u_2 } |\theta|^{\alpha (s)} ds \right\}, \end{eqnarray} where the last line follows from the weak law of large numbers. Notice that $L^{(n)}_{R}(u) $ also has independent increments. The rest of the proof of the second claim is similar to the proof of the first one. For this reason, we shall not carry it out.
In the sequel, we prove the third claim by the following two steps.
First, we prove that $L^{(n)}_{C}(u) $ converges to $L_{I}(u)$ in finite dimensional distribution. It is worth noting that $L^{(n)}_{C}$ does not have independent increments. This property implies that we cannot use the previous method. For any $(u_1, ..., u_d) \in [0,1]^d$ and any $(\theta_1, ..., \theta_d) \in \mathbb{R}^d$ such that $0=u_0\leq u_1\leq u_2 \leq ...\leq u_d,$ we have \begin{eqnarray} \lim_{n \rightarrow \infty} \mathbb{E} e^{i \sum_{j=1}^d \theta_j L^{(n)}_{C}( u_j) } &=& \lim_{n \rightarrow \infty} \mathbb{E} \exp\left\{ i \sum_{l=1}^d \sum_{k= \lfloor 2^n u_{l-1}\rfloor}^{\lfloor 2^n u_l\rfloor} \sum_{j=l}^d \theta_j \Big(\frac{1}{\Gamma_{2^n}}\Big)^{1/\alpha_n(\frac{k}{2^n})} X(k,n)\right\} \nonumber\\
&=& \lim_{n \rightarrow \infty} \mathbb{E} \exp\left\{ -\sum_{l=1}^d \sum_{k= \lfloor 2^n u_{l-1}\rfloor}^{\lfloor 2^n u_l\rfloor} \Big| \sum_{j=l}^d \theta_j\Big|^{\alpha_n(\frac{k }{2^n})} \frac{1}{ 2^n} \frac{2^n}{\Gamma_{2^n}} \right\} \nonumber\\
&=& \exp\left\{ -\sum_{l=1}^d \int_ {u_{l-1}}^ {u_l } \Big|\sum_{j=l}^d \theta_j \Big|^{\alpha (s)} ds \right\}\nonumber\\
&=& \exp\left\{ - \int \Big| \sum_{j=1}^d \theta_j \mathbf{1}_{ [0, u_j) }(s) \Big|^{\alpha (s)} ds \right\},\nonumber \end{eqnarray} which gives the joint characteristic function of $L_I.$
Second, we prove that $L^{(n)}_{C}( u)$ converges to $L_{I}( u)$ in $(D[0,1],d_S)$. Again by Theorem 15.6 of Billingsley \cite{B68}, it suffices to show that \begin{eqnarray}\label{sfffds}
\mathbb{P}\Big( \Big| L^{(n)}_{C}(u)-L^{(n)}_{C}(u_1)\Big|\geq \lambda,\ \Big|L^{(n)}_{C}(u_2)-L^{(n)}_{C}(u)\Big|\geq \lambda \Big) \leq \frac{C}{\lambda^{2\gamma } } \Big[ u_2 - u_1 \Big]^2 \end{eqnarray} for $u_1 \leq u \leq u_2, \lambda> 0$ and $n\geq 1$, where $\gamma = a\mathbf{1}_{[2,\, \infty)} (\lambda)+ b\mathbf{1}_{(0,\, 2 )}(\lambda)$ and $C$ is a constant depending only on $a$ and $b.$ We need only consider the case of $u_2-u_1 \geq 1/2^n.$ Since $L^{(n)}_{C}(u)-L^{(n)}_{C}(u_1)$ and $L^{(n)}_{C}(u_2)-L^{(n)}_{C}(u)$ are conditionally independent given $\Gamma_{2^n}$, it follows that \begin{eqnarray}
&&\mathbb{P}\Big( \Big| L^{(n)}_{C}(u)-L^{(n)}_{C}(u_1)\Big|\geq \lambda,\ \Big|L^{(n)}_{C}(u_2)-L^{(n)}_{C}(u)\Big|\geq \lambda \ \Big|\ \Gamma_{2^n} \Big)=
\nonumber \\ && \ \ \ \ \ \ \ \ \ \ \ \mathbb{P}\Big( \Big| L^{(n)}_{C}(u)-L^{(n)}_{C}(u_1)\Big|\geq \lambda \ \Big|\ \Gamma_{2^n}\Big) \ \mathbb{P}\Big(\Big|L^{(n)}_{C}(u_2)-L^{(n)}_{C}(u)\Big|\geq \lambda \ \Big|\ \Gamma_{2^n} \Big). \label{ineq18} \end{eqnarray} It is easy to see that \begin{eqnarray*}
&&\mathbb{P}\Big( \Big| L^{(n)}_{C}(u)-L^{(n)}_{C}(u_1)\Big|\geq \lambda \ \Big|\ \Gamma_{2^n} \Big) \ \leq\ \frac{\lambda}{2} \int_{-2/\lambda}^{2/\lambda } \Bigg( 1- \mathbb{E} \Big[e^{i\theta \big(L^{(n)}_{C}(u)-L^{(n)}_{C}(u_1)\big)} \ \Big|\ \Gamma_{2^n}\Big] \Bigg) d\theta \\
&& \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ = \ \frac{\lambda}{2} \int_{-2/\lambda}^{2/\lambda } \Bigg( 1- \mathbb{E} \Big[ \exp\Big\{- \sum_{k=\lfloor2^nu_1\rfloor+1}^{\lfloor2^nu\rfloor} \frac{1}{\Gamma_{2^n}} |\theta |^{\alpha_n(\frac{k}{2^n})} \Big\}\ \Big|\ \Gamma_{2^n}\Big] \Bigg) d\theta \\
&& \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \leq \ \frac{\lambda}{2} \int_{-2/\lambda}^{2/\lambda }\sum_{k=\lfloor2^nu_1\rfloor+1}^{\lfloor2^nu\rfloor} \frac{1}{ \Gamma_{2^n}} |\theta |^{\alpha_n(\frac{k}{2^n})} d\theta \\ && \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \leq\ \frac{C_1}{\lambda^{\gamma} } \frac{2^n}{ \Gamma_{2^n}} \Bigg[ \frac{\lfloor2^nu\rfloor- \lfloor2^nu_1\rfloor}{2^n} \Bigg], \end{eqnarray*} where $C_1$ is a constant depending only on $a$ and $b.$
Similarly, it holds \begin{eqnarray}
\mathbb{P}\Big( \Big|L^{(n)}_{C}(u_2)-L^{(n)}_{C}(u)\Big|\geq \lambda \ \Big|\ \Gamma_{2^n}\Big) \leq \frac{C_2}{\lambda^{\gamma} } \frac{2^n}{ \Gamma_{2^n}}\Bigg[ \frac{\lfloor2^nu_2\rfloor- \lfloor2^nu\rfloor}{2^n} \Bigg] , \end{eqnarray} where $C_2$ is a constant depending only on $a$ and $b$. From (\ref{ineq18}), we find \begin{eqnarray*}
&&\mathbb{P}\Big( \Big| L^{(n)}_{C}(u)-L^{(n)}_{C}(u_1)\Big|\geq \lambda,\ \Big|L^{(n)}_{C}(u_2)-L^{(n)}_{C}(u)\Big|\geq \lambda \Big) \\ &\leq& \frac{C_1C_2}{\lambda^{ 2\gamma } } \Bigg[ \frac{\lfloor2^nu\rfloor- \lfloor2^nu_1\rfloor}{2^n} \Bigg]\Bigg[ \frac{\lfloor2^nu_2\rfloor- \lfloor2^nu\rfloor}{2^n} \Bigg] \, \mathbb{E}\Big[ \Big(\frac{2^n}{ \Gamma_{2^n}} \Big)^2 \Big]\\ &\leq& \frac{C }{\lambda^{ 2\gamma } } \Big[ u_2 - u_1 \Big]^2 , \end{eqnarray*} where $C$ is a constant depending only on $a$ and $b$. This completes the proof of (\ref{sfffds}).
\qed
\begin{remark} Let us comment on Theorem \ref{fhnk}. \begin{enumerate} \item We can define the independent-increments MsLM $\{L_{I}(x): \ x \in \mathbb{R} \}$ on the whole line as follows. Let $\alpha(x), \ x \in \mathbb{R},$ be a continuous function ranging in $[a,b] \subset (0,2]$, and satisfies condition (\ref{coalph}) uniformly for all $x$ in finite interval as $t \searrow 0$. Set the functions $\alpha_k(x)=\alpha(x+k)$ for all $k \geq 0$ and $x \in [0, 1].$ For any $\alpha_k(x)$, by Theorem \ref{fhnk}, we can construct MsLM $$L_{I_k }(x): [0,1] \rightarrow \mathbb{R},\ \ \ \ \ k\geq 0.$$ Taking a sequence of independent processes $L_{I_k }(x),\ x \in [0, 1],$ we define $\{L_{I }(x): x \geq 0 \}$ by gluing together the parts, more precisely by \begin{eqnarray}\label{hnkhjs} L_{I }(x) = L_{I_{\lfloor x \rfloor} }(x-\lfloor x\rfloor) + \sum_{k=0}^{\lfloor x\rfloor-1} L_{I_k }(1),\ \ \ \ \textrm{for all} \ x\geq 0. \end{eqnarray} Similarly, for $x< 0,$ we can define $L_{I }(x)=L_{I }(-x),$ since the function $\beta(x)=\alpha (-x)$ is defined on $[0, +\infty).$
\item Let $(\phi(n))_{n \in \mathbb{N}}$ be a sequence of numbers satisfying $\phi(n)\rightarrow \infty$ as $n\rightarrow \infty.$ Assume that $\alpha(u)$ is continuous in $[0,1].$ By an argument similar to the proof of Theorem \ref{fhnk}, the sequence of processes \begin{eqnarray}\label{fdvs} \widehat{L}_{I } ^{(n)} (u)= \sum_{k=1}^{\lfloor \phi(n) u \rfloor} \Big(\frac{1}{\phi(n)}\Big)^{1/\alpha (\frac{k}{\phi(n)})} X(k,n)\ , \ \ \ \ \ u \in [0,1], \end{eqnarray} tends in distribution to $L_I$ in $(D[0,1],d_S).$ Since $\alpha(u)$ is continuous, it is easy to see that $$\alpha \left(\frac{\lfloor \phi(n)u \rfloor}{\phi(n)} \right)\rightarrow\alpha (u)\ \ \ \ \textrm{as} \ n\rightarrow \infty. $$ By the fact that the summands of (\ref{fdvs}) verify $$\Big(\frac{1}{\phi(n)}\Big)^{1/\alpha\big(\frac{\lfloor \phi(n)u \rfloor}{\phi(n)}\big)} X(\lfloor \phi(n)u \rfloor,n)\sim S_{\alpha(\frac{\lfloor \phi(n)u \rfloor}{\phi(n)}) }\Bigg( \Big(\frac{1}{\phi(n)}\Big)^{1/\alpha\big(\frac{\lfloor \phi(n)u \rfloor}{\phi(n)}\big)}, 0 , 0 \Bigg),$$
equality (\ref{fdvs}) means that the increment at the point $u$ of an $\alpha(u)-$multistable process $L_{I} (u)$ behaves locally like an $\alpha (u)-$stable random variable, but with the stability index $\alpha (u)$ varying with $u$.
\item If $\alpha(u)\equiv\alpha$ for a constant $\alpha \in (0,2]$, then $L_{I} (u)$ is just the usual symmetric $\alpha-$stable L\'{e}vy motion $L_{\alpha} (u)$. Hence, equality (\ref{fdvs}) gives an equivalent definition of the symmetric $\alpha-$stable L\'{e}vy motions: there is a sequence of independent and identically distributed (i.i.d.) symmetric $\alpha-$stable random variables $(Y_{k})_{k \in \mathbb{N}}$ with unit scale parameter such that \begin{eqnarray} L_{\alpha}^{(n)} (u) = \sum_{k=1}^{\lfloor nu \rfloor} \frac{1}{n^{1/\alpha}} Y_{k} \ , \ \ \ \ \ u \in [0,1], \end{eqnarray} tends in distribution to $L_\alpha$ in $(D[0,1],d_S)$. This result is known as the stable functional central limit theorem.
\item A slightly different method to construct $L_{I}(u)$ can be stated as follows. Assume that $\big( X( \frac{k}{ 2^n}) \big)_{n \in \mathbb{N}, \ k=1,...,2^n} $ is a family of independent and symmetric $\alpha(\frac{k}{2^n})-$stable random variables with the unit scale parameter. Then it holds \begin{eqnarray}\label{scv} L_{I }(u) = \lim_ {n\rightarrow \infty } \sum_{k=1}^{\lfloor 2^n u \rfloor} \Big(\frac{1}{2^n}\Big)^{1/\alpha(\frac{k}{2^n})} X\Big( \frac{k}{ 2^n}\Big) , \ \ \ \ \ \ \ \ \ \ u \in [0,1], \end{eqnarray} where convergence is in $(D[0,1],d_S).$ To highlight the differences between the two methods (\ref{multilevyI}) and (\ref{scv}), note that $X( \frac{k}{ 2^n})=X( \frac{2k}{ 2^{n+1}})$, while $X( k, n )$ and $X( 2k, n+1)$ are two i.i.d. random variables.
\item Inspecting the construction of field-based MsLM in Falconer and L\'{e}vy V\'{e}hel \cite{FL09}, it seems that the sequence of processes \begin{eqnarray} L^{(n)}_{F}(u) {= } \sum_{k=1}^{\lfloor 2^n u \rfloor} \Big(\frac{1}{2^n}\Big)^{1/\alpha_n(u)} X(k,n) , \ \ \ \ \ \ \ \ \ \ u \in [0,1], \end{eqnarray} tends in distribution to $L_F(u)$ in $(D[0,1],d_S)$. Unfortunately, this is not true in general. We have the following counterexample.
\textbf{\emph{Example 1.}} Consider the case of $\alpha_n(u)=\alpha(u)=\frac b 2 \mathbf{1}_{\{0\leq u\leq \frac b2 \} } + u \mathbf{1}_{\{ \frac b2 < u \leq 1 \}}.$ The characteristic function of $L^{(n)}_{F}(u)$ is given by the following equality: for any $\theta \in \mathbb{R},$ \begin{eqnarray}
\mathbb{E} e^{i \theta L^{(n)}_{F}(u)}
&{= }& \prod_{k=1}^{\lfloor 2^n u \rfloor} \mathbb{E}\exp\left\{i \theta \Big(\frac{1}{2^n}\Big)^{1/\alpha (u)} X(k,n) \right\} \nonumber\\
&{= }& \exp\left\{-\sum_{k=1}^{ \lfloor2^n u\rfloor } |\theta|^{ \alpha( \frac{k}{ 2^n})} \Big(\frac{1}{2^n} \Big)^{ \alpha(\frac{k}{ 2^n})/\alpha(u)} \right\} ,\ \ \ \ \ u \in [0,1]. \end{eqnarray} Since, for all $u \in (\frac b 2, 1]$ and $\theta\neq 0$, \begin{eqnarray}
\sum_{k=1}^{ \lfloor2^n u\rfloor } |\theta|^{ \alpha(\frac{k}{ 2^n})} \Big(\frac{1}{2^n} \Big)^{ \alpha(\frac{k}{ 2^n})/\alpha(u)} \geq \sum_{k=1}^{ \lfloor2^n b/2\rfloor } |\theta|^{b/2}\Big(\frac{1}{2^n} \Big)^{ b/2u } \rightarrow \infty, \ \ \ \ \ n\rightarrow \infty, \end{eqnarray}
we have $\mathbb{E}\, e^{i\theta L^{(n)}_{F}(u)}\rightarrow 0$ for every $\theta\neq 0$ and all $u \in (\frac b 2, 1]$, so the sequence $\big(L^{(n)}_{F}(u)\big)_n$ is not tight. Thus $L^{(n)}_{F}(u)$ does not tend in distribution to $L_F(u)$ in $(D[0,1],d_S)$.
\end{enumerate}
\end{remark}
\section{Continuous Approximation of MsLM}\label{contapp} It is easy to see that when $\alpha(u)$ is a constant, the independent-increments MsLM reduce to $\alpha-$stable L\'{e}vy motions. It is well known that $\alpha-$stable L\'{e}vy motions are stochastic H\"{o}lder continuous but, for $\alpha<2$, their sample paths are not continuous. It is natural to ask whether there exists a continuous approximation of independent-increments MsLM; the answer is yes.
\subsection{A continuous stable process} First, we shall construct a continuous stable process. To this end, we shall make use of the following useful theorem. \begin{theorem}\label{theo1} If the i.i.d.\ random variables $(Z_{jk})_{j,k}$ follow an $\alpha-$stable law, then it holds, for all $c> 1/\alpha,$ \[
\mathbb{P}\left( \bigcup_{i=1}^{\infty} \bigcap_{j\geq i}^{\infty} \max_{ k=0,...,2^{j}-1 } |Z_{jk}| \leq 2^{jc}\right)=1. \] \end{theorem} \noindent\emph{Proof.} We only need to show that, for all $c> 1/\alpha$, \[
\mathbb{P}\left( \bigcap_{i=1}^{\infty} \bigcup_{j\geq i}^{\infty} \max_{ k=0,...,2^{j}-1 } |Z_{jk}| > 2^{jc}\right)=0. \] By Borel-Cantelli Lemma, it is sufficient to prove, for all $c> 1/\alpha$, \begin{equation}\label{ssd}
\sum_{j\geq 1} \mathbb{P}\left( \max_{ k=0,...,2^{j}-1 } |Z_{jk}| > 2^{jc}\right) < \infty. \end{equation} To prove (\ref{ssd}), we need the following technical lemma (cf.\ Property 1.2.15 of \mbox{Samorodnitsky} and Taqqu \cite{ST94} for details). \begin{lemma} \label{lem1} Let $Z\sim S_\alpha(\sigma,\beta,\mu)$ with $0 <\alpha < 2.$ Then \begin{displaymath} \left\{ \begin{array}{ll} \lim_{\lambda \rightarrow \infty} \lambda^{\alpha} \mathbb{P}( Z> \lambda) &=\ C_\alpha \frac{1+\beta}{2}\sigma^\alpha, \\ \\ \lim_{\lambda \rightarrow \infty} \lambda^{\alpha} \mathbb{P}( Z< - \lambda) & =\ C_\alpha \frac{1-\beta}{2}\sigma^\alpha. \end{array} \right. \end{displaymath} \end{lemma} Return to the proof of (\ref{ssd}). For all $c> 1/\alpha$ and all $j$ large enough, we have \begin{eqnarray}
\mathbb{P}\left( \max_{ k=0,...,2^{j}-1 } |Z_{jk}| > 2^{jc}\right) &=& 1- \mathbb{P}\left( |Z_{jk}| \leq 2^{jc}\ \textrm{for all}\ k=0,...,2^{j}-1 \right) \nonumber\\
&=& 1 - \prod_{k=0}^{2^{j}-1} \mathbb{P}\left( |Z_{jk}| \leq 2^{jc} \right). \label{sns2} \end{eqnarray} Then, by equality (\ref{sns2}) and Lemma \ref{lem1}, we deduce \begin{eqnarray*}
\mathbb{P}\left( \max_{ k=0,...,2^{j}-1 } |Z_{jk}| > 2^{jc}\right)
&=& 1 - \left( 1+ O\Big(\frac { 1} { 2^{ j\alpha c} } \Big) \right)^{2^{j}} \ \ \\ &=& O\Big(\frac{1}{ 2^{ j(\alpha c -1)} }\Big),\ \ \ j\rightarrow \infty. \end{eqnarray*} Thus we obtain (\ref{ssd}) for all $c> 1/\alpha$.\qed
In the following theorem, we give a construction of continuous stable process. First, we recall the definition of the ``triangle'' function: \begin{displaymath} \varphi(t) = \left\{ \begin{array}{ll} 2t & \textrm{\ \ for $t \in [0, 1/2)$}\\ 2-2t & \textrm{\ \ for $t \in [1/2, 1]$}\\ 0& \textrm{\ \ otherwise.} \end{array} \right. \end{displaymath} Define $\varphi_{jk}(t)=\varphi(2^{j}t-k),$ for $j=0,1,...,$ and $k=0,...,2^{j}-1.$ \begin{theorem}\label{th2} Assume the i.i.d.\ random variables $(Z_{jk})_{j,k}$ follow a symmetric $\alpha-$stable law with the unit scale parameter. Then, for all $d> 1/\alpha$, the process $$X(t) = \sum_{j=0}^{\infty}\sum_{k=0}^{2^{j}-1}2^{-jd}Z_{jk}\varphi_{jk}(t),\ \ \ \ t \in [0,1],$$
is a continuous and symmetric $\alpha$-stable process. When $d=1/\alpha$, the process $X(t)$ is also a symmetric, may not be continuous, $\alpha$-stable process in $L^p(\Omega\times [0,1])$ for any $0<p<\alpha$. \end{theorem} \emph{Proof.} Set $X_{-1}\equiv 0$ and define the sequence of processes $(X_j)_{j \in \mathbb{N}}$ by: \[ X_j(t)=X_{j-1}(t)+\sum_{k=0}^{2^{j}-1}2^{-jd}Z_{jk}\varphi_{jk}(t). \] First we show that the sequence of processes $(X_j)_{j \in \mathbb{N}}$ converges almost surely uniformly. Indeed, for all $t$, \[ X_j(t)-X_{j-1}(t)=\sum_{k=0}^{2^{j}-1}2^{-jd}Z_{jk}\varphi_{jk}(t). \]
Since, for each fixed $j$, the functions $(\varphi_{jk} )_{k}$ have disjoint supports and $|\varphi_{jk} |\leq 1$, it follows that \[
|| X_j(t)-X_{j-1}(t) ||_{\infty}=2^{-jd} \max_{k=0,...,2^{j-1}} |Z_{jk}|. \]
Theorem \ref{theo1} entails that $(X_j)_{j \in \mathbb{N}}$ converges almost surely in $C([0,1], || \cdot||_{\infty})$ to a continuous process $X$ for all $d> 1/\alpha.$ When $d=1/\alpha$, we show that the sequence $(X_j)_{j \in \mathbb{N}}$ converges to a random variable $X$ in $L^p(\Omega\times [0,1])$ for any $0<p<\alpha$. Indeed, for any $0<p<\alpha$, \begin{eqnarray}
\int_0^1 \mathbb{E}|X_j(t)-X_{j-1}(t)|^p dt &\leq& 2^{-jp/\alpha}\mathbb{E}|Z_{00}|^p \sum_{k=0}^{2^{j}-1}\int_0^1 \varphi_{jk}^p(t)dt \nonumber \\
&\leq& 2^{-jp/\alpha}\mathbb{E}|Z_{00}|^p\int_0^1 \varphi_{00}^p(t)dt \nonumber \\
&\leq& 2^{-jp/\alpha}\mathbb{E}|Z_{00}|^p , \end{eqnarray} which entails the convergence of $(X_j)_{j \in \mathbb{N}}$ in $L^p(\Omega\times [0,1])$, since $\sum_{j\geq 1}2^{-jp/\alpha}<\infty$.
Next, we prove that $X$ is a symmetric $\alpha-$stable process. By Theorem 3.1.2 of Samorodnitsky and Taqqu (1994), we only need to check that all linear combinations \[ \sum_{k=1}^d b_kX(t_k),\ \ d\geq 1,\ \ t_1,...,t_d \in [0,1]\ \textrm{and}\ b_1,...,b_d\ \textrm{real} \] are symmetric $\alpha-$stable. We distinguish two cases as follows. Define $$D_n =\left\{ \frac{ k}{2^n}
: 0 \leq k \leq 2^n \right\}$$ and $D=\bigcup_{n=0,1,...} D_n$.\\ \textbf{i)} If $t_k \in D, $ then all random variables $X(t_k), 1\leq k \leq d,$ are symmetric $\alpha-$stable. Thus all linear combinations $\sum_{k=1}^d b_kX(t_k)$ are symmetric and $\alpha-$stable.\\ \textbf{ii)} For $t_k \in [0,1], 1\leq k \leq d,$ we have $t_{kl} \in D$ such that $t_{kl} \rightarrow t_k, l\rightarrow \infty$. Since $X$ is continuous, we have $$\sum_{k=1}^d b_kX(t_k)= \lim_{l\rightarrow \infty} \sum_{k=1}^d b_kX(t_{kl}).$$ Its characteristic function has the following form: \begin{eqnarray*} \mathbb{E} \exp\Bigg\{i\theta \sum_{k=1}^d b_kX(t_k) \Bigg\} &=& \lim_{l\rightarrow \infty} \mathbb{E} \exp\Bigg\{i\theta \sum_{k=1}^d b_kX(t_{kl}) \Bigg\}. \end{eqnarray*} It is easy to see that the scale parameter of $\sum_{k=1}^d b_kX(t_{kl})$ is
$$ \sigma_{l}(\alpha)=\left( \sum_{j=0}^{\infty}\sum_{i=0}^{2^{j}-1} \Big( \sum_{k=1}^d |b_k| 2^{-jd} \varphi(2^jt_{kl}-i)\Big)^\alpha \right)^{1/\alpha} .$$
Since at most one summand of the sum $\sum_{i=0}^{2^{j}-1} 2^{-jd} \varphi(2^jt-i)$ is non-zero and $$ \sum_{i=0}^{2^{j}-1} \Big(\sum_{k=1}^d |b_k| 2^{-jd} \varphi(2^jt_{kl} -i)\Big)^\alpha \leq d\, b^\alpha 2^{-j\alpha d}, $$
where $b=\max\{|b_k|, 1\leq k \leq d \},$ then $\sigma(\alpha) = \lim_{l\rightarrow \infty}\sigma_l(\alpha)$ exists for $d\geq 1/\alpha$ and \begin{eqnarray*} \mathbb{E} \exp\Bigg\{i\theta \sum_{k=1}^d b_kX(t_k) \Bigg\}
&=& \lim_{l\rightarrow \infty} \exp\Big\{ - \sigma_l(\alpha)^{\alpha} |\theta|^{\alpha} \Big\} \ \ \\
&=& \exp\Big\{ - \sigma(\alpha)^{\alpha} |\theta|^{\alpha} \Big\}. \end{eqnarray*} This implies that all linear combinations $\sum_{k=1}^d b_kX(t_k)$ are symmetric $\alpha-$stable random variables. This completes the proof. \qed
One deduces the scale parameter $\sigma(t)$ of the process $X(t)$ is given as follows \[ \sigma^\alpha(t)= \sum_{j=0}^{\infty}\sum_{k=0}^{2^{j}-1} \Big( 2^{-jd} \varphi(2^jt -k)\Big)^\alpha. \] By noting that at most one $\varphi(2^jt -k)$ is non-zero for all $j$, we have the following estimation of the scale parameter \[
\varphi^{1/\alpha}( t ) \ \leq \ \sigma (t) \ \leq \ \Big(\frac{1}{1-2^{-\alpha d}} \Big)^{1/\alpha},\ \ \ \ t \in [0, 1]. \] It is worth noting that when $t\neq0,1,$ we have $\sigma (t)>0.$ This observation will be useful to establish
continuous approximations of MsLM in the next subsection.
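For the reader's convenience, the upper bound above is simply a geometric series estimate: since for each $j$ at most one summand is non-zero and $0\leq \varphi \leq 1$, \[ \sigma^{\alpha}(t)\ \leq\ \sum_{j=0}^{\infty} 2^{-j\alpha d}\ =\ \frac{1}{1-2^{-\alpha d}},\ \ \ \ t \in [0,1]. \]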
\subsection{Continuous approximations of MsLM} In Theorem \ref{fhnk}, we establish discrete approximations of the independent-increments MsLM. In this subsection, we shall give continuous approximations of the independent-increments MsLM. It is worth noting that one cannot make use of the method of Theorem \ref{th2} to establish continuous approximations of MsLM in general, since a sum of two stable random variables with different stability indices is not a stable random variable. To obtain continuous approximations of the independent-increments MsLM, our main method is to replace the summands in (\ref{multilevyI})
by a sequence of independent and continuous stable processes starting at $0$, for instance the stable processes established in Theorem \ref{th2}. \begin{theorem}\label{cosa} Let $\alpha(u)$ be a continuous function ranging in $[a,b] \subset (0,2]$. Assume that $\left(X_{\alpha(\frac{k}{2^n})}( t) \right)_{ n \in \mathbb{N}, \, k=0,...,2^n-1}$
is a family of independent and continuous $\alpha(\frac{k}{2^n})-$stable random processes. Assume $X_{\alpha(\frac{k}{2^n})} ( 0)=0$ and $\sigma_{\alpha(\frac{k}{2^n})}(t)>0$ for all $t\in (0, 1]$ and all $n \in \mathbb{N}, \, k=0,...,2^n-1$, where $\sigma_{\alpha(\frac{k}{2^n})}(t)$ is the scale parameter of $X_{\alpha(\frac{k}{2^n})} (t)$. Define \begin{eqnarray}\label{clevyc} S_n(u) & = & \Big(\frac{1}{2^n}\Big)^{\alpha(\frac{\lfloor 2^n u \rfloor}{2^n})} \frac{1}{\sigma_{\alpha(\frac{\lfloor 2^n u \rfloor}{2^n})}(\frac{1}{2^n})} X_{\alpha(\frac{\lfloor 2^n u \rfloor}{2^n})} \Bigg(u- \frac{\lfloor 2^n u \rfloor}{2^n} \Bigg) \nonumber\\ && \ \ \ \ + \sum_{k=0}^{\lfloor 2^n u \rfloor -1} \Big(\frac{1}{2^n}\Big)^{\alpha(\frac{k}{2^n})} \frac{1}{\sigma_{\alpha(\frac{k}{2^n})}(\frac{1}{2^n})} X_{ \alpha(\frac{k}{2^n})} \Bigg( \frac{1}{2^n} \Bigg), \ \ \ \ \ u \in [0,1]. \end{eqnarray} Then $(S_n)_{n \in \mathbb{N}}$ is a sequence of continuous processes and the process $S_n(u), u \in [0,1],$ tends in distribution to $L_I(u)$ in $(D[0,1],d_S).$ \end{theorem}
By the definition of $S_n(u)$ in (\ref{clevyc}), it seems that the process $S_n(u)$ restores more and more details of $L_{I }(u)$ as $n$ increases.
It is worth noting that when $\alpha(u)\equiv \alpha$ for a constant $\alpha \in (0, 2]$, Theorem \ref{cosa} gives continuous approximations to the usual symmetric $\alpha-$stable L\'{e}vy motion $L_\alpha(u).$
\noindent\emph{Proof.} It is easy to see that the first item in the right hand side of (\ref{clevyc}) converges to zero in distribution as $n\rightarrow \infty$, i.e., \begin{eqnarray} \lim_{n\rightarrow \infty}\left( \Big(\frac{1}{2^n}\Big)^{\alpha(\frac{\lfloor 2^n u \rfloor}{2^n})} \frac{1}{\sigma_{\alpha(\frac{\lfloor 2^n u \rfloor}{2^n})}(\frac{1}{2^n})} X_{\alpha(\frac{\lfloor 2^n u \rfloor}{2^n})} \Bigg(u- \frac{\lfloor 2^n u \rfloor}{2^n} \Bigg) \right) = 0 \end{eqnarray} in distribution. Notice that the summands \begin{eqnarray}
\frac{1}{\sigma_{\alpha(\frac{k}{2^n})}(\frac{1}{2^n})} X_{ \alpha(\frac{k}{2^n})} \Bigg( \frac{1}{2^n} \Bigg) \end{eqnarray} in the right hand side of (\ref{clevyc}) are independent $\alpha(\frac{k}{2^n})-$stable random variables with the unit scale parameter. Using Theorem \ref{fhnk}, we find that the process $S_n(u), u \in [0,1],$ tends in distribution to $L_I(u)$ in $(D[0,1],d_S).$ \qed
\section{Integrals of Multistable L\'{e}vy Measure}\label{endsection} Let $\alpha=\alpha(u), u \in [0,1],$ be a c\`{a}dl\`{a}g function ranging in $[a,b] \subset (0,2].$ Denote by \[
\mathcal{L}_{\alpha }[0,1] =\Big\{ f: f \textrm{ is measurable with } ||f||_{\alpha } < \infty \Big\}, \] where \[
||f||_{\alpha }
:= \inf \left\{ \lambda > 0: \int_{0}^{1}\Big|\frac{f(x)}{\lambda} \Big|^{ \alpha(x)} dx =1 \right\} \ \ \ \ \ \textrm{and} \ \ \ \ \ ||0||_{\alpha}=0. \]
Note that $||\cdot||_{\alpha }$ is a quasinorm; see Falconer and Liu \cite{FL12} and Ayache \cite{AA13}. Using the Kolmogorov consistency conditions and the L\'{e}vy continuity theorem, Falconer and Liu \cite{FL12} (see also Falconer \cite{F09}) proved that the characteristic function, for all $(\theta_1,...,\theta_d)\in \mathbb{R}^d,$ \begin{eqnarray}\label{fli1}
\mathbb{E} \exp\left\{ i \Bigg(\sum_{j=1}^d \theta_j I(f_j) \Bigg) \right\} =\exp \left\{- \int \Big| \sum_{j=1}^d \theta_j f_j(x) \Big|^{\alpha(x)} dx \right\} \ \ \ \end{eqnarray} well defines a consistent probability distribution of the random vector $(I(f_1), I(f_2),...,I(f_d))\in \mathbb{R}^d$ on the functions $f_j \in \mathcal{L}_{\alpha }[0,1],$ where $I(f)=\int f(x) M_\alpha(d x).$ They called $M_\alpha$ the multistable L\'{e}vy measure and $I(f)=\int f(x) M_\alpha(d x)$ the integral with respect to $M_\alpha.$ Moreover, they also showed that the integrals of functions with disjoint supports are independent. In particular, it holds $$L_I(u)=\int \mathbf{1}_{[0,\ u]}(x)M_\alpha ( dx),\ \ \ \ \ u \in [0,1].$$
In the following theorem, we give an alternative definition of the integrals based on the weighted sums of independent random variables. \begin{theorem} Let $\big( X(k,n) \big)_{n \in \mathbb{N}, \ k=1,...,2^n} $ be defined by Theorem \ref{fhnk}. Then, for any $f \in \mathcal{L}_{\alpha }[0,1],$ it holds \begin{eqnarray} \int_0^1 f(x) M_\alpha(d x) = \lim_ {n\rightarrow \infty } \sum_{k=1}^{ 2^n } \Big(\frac{1}{2^n}\Big)^{1/\alpha(\frac{k}{2^n})} f\Big(\frac{k}{2^n}\Big) X(k,n) \end{eqnarray} in distribution. \end{theorem} \emph{Proof.} Denote by \[
S(k,n)=\Big(\frac{1}{2^n}\Big)^{1/\alpha(\frac{k}{2^n})} f\Big(\frac{k}{2^n}\Big) X(k,n) \ \ \ \ \textrm{and} \ \ \ \ X_n = \sum_{k=1}^{ 2^n } S(k,n) . \] It is easy to see that, for any $\theta \in \mathbb{R},$ \begin{eqnarray*} \mathbb{E} e^{i\, \theta X_n } &=& \prod_{k=1}^{ 2^n } \mathbb{E} e^{i \theta S(k,n) }
= \exp\Bigg\{- \sum_{k=1}^{ 2^n } \Big| \theta f\Big(\frac{k}{2^n}\Big)\Big|^{\alpha(k/2^n)} \frac{1}{2^n} \Bigg\} . \end{eqnarray*} Hence, we have \begin{eqnarray*}
\lim_{n\rightarrow \infty}\mathbb{E} e^{i\, \theta X_n } &=& \exp \left\{- \int_0^1 \Big| \theta f (x) \Big|^{\alpha(x)} dx \right\}, \end{eqnarray*} which means $ \lim_ {n\rightarrow \infty } X_n =\int_0^1 f(x) M_\alpha(d x)$ in distribution by the definition of the multistable integrals with respect to the multistable L\'{e}vy measure $M_\alpha.$ \qed
The following theorem relates the convergence of a sequence of $\alpha(u)-$multistable integrals to the convergence of the sequence of integrands.
\begin{theorem}\label{scxfd} Assume $X_j=\int_0^1 f_j(x) M_\alpha(d x)$ and $X=\int_0^1 f(x) M_\alpha(d x),$ for $f_j, j=1,2,..., f \in \mathcal{L}_{\alpha }[0,1].$ Then \[ \lim_{j\rightarrow \infty} X_j = X \] in probability, or \[ \lim_{j\rightarrow \infty} ( X_j -X)= 0 \] in distribution, if and only if \[
\lim_{j\rightarrow \infty} || f_j - f ||_{\alpha } = 0. \] \end{theorem} \emph{Proof.} The convergence $\lim_{j\rightarrow \infty} X_j = X $ in probability is equivalent to $\lim_{j\rightarrow \infty} (X_j -X)= 0 $ in probability and hence to the convergence in distribution to zero of the sequence $(X_j -X)_{j=1,2,...}.$ If $X_j-X$ converges in distribution to $0,$ then, for any $\theta \in \mathbb{R},$ \begin{eqnarray}
1= \lim_{j\rightarrow \infty}\mathbb{E} e^{i\, \theta (X_j -X) } = \lim_{j\rightarrow \infty} \exp \left\{- \int_0^1 \Big| \theta \Big(f_j (x) - f (x)\Big) \Big|^{\alpha(x)} dx \right\}, \end{eqnarray} which is equivalent to, for any $\lambda>0,$ \[
\lim_{j\rightarrow \infty} \int_0^1 \Big| \frac{ f_j (x) - f (x)}{\lambda } \Big|^{\alpha(x)} dx = 0. \]
This equality means $\lim_{j\rightarrow \infty} || f_j - f ||_{\alpha } = 0.$ \qed
The last theorem shows that convergence in probability of multistable integrals coincides with convergence in quasinorm $||\cdot||_{\alpha }$.
The convergence $\lim_{j\rightarrow \infty} X_j = X $ almost surely implies the convergence $\lim_{j\rightarrow \infty} X_j = X $ in probability. Thus the following corollary is obvious. \begin{corollary} Assume that $X_j, j=1,2,..$ and $X$ are defined by Theorem \ref{scxfd}. If \[ \lim_{j\rightarrow \infty} X_j =X \]
almost surely, then \[
\lim_{j\rightarrow \infty} || f_j - f ||_{\alpha } = 0. \] \end{corollary}
\subsection{Independence} Independence of two multistable integrals imposes a stronger restriction on the integrands: they must almost surely have disjoint supports with respect to Lebesgue measure $\mathcal{L}$. Indeed, \begin{theorem}\label{THinde} Let $X_1= \int_0^1 f_1(x)M_\alpha(dx)$ and $X_2= \int_0^1 f_2(x)M_\alpha(dx)$ be two multistable integrals, where $f_j\in \mathcal{L}_{\alpha }[0,1], j=1,2.$ Assume either
$$[a,b] \subset (0, 2)$$ or
\begin{eqnarray} \label{tonghao} f_1(x)f_2(x) \geq 0\ \ \ \ \ \mathcal{L}-a.s.\textrm{ on }[0, \, 1]. \end{eqnarray} Then $X_1$ and $X_2$ are independent if and only if \begin{eqnarray}\label{dssdf} f_1(x)f_2(x) \equiv 0\ \ \ \ \ \mathcal{L}-a.s.\textrm{ on }[0, \, 1]. \end{eqnarray} \end{theorem} \emph{Proof.} Two multistable integrals $X_1$ and $X_2$ are independent if and only if, for any $(\theta_1, \theta_2) \in \mathbb{R}^2,$ \begin{eqnarray}\label{dfds} \mathbb{E} \exp\Big\{i(\theta_1 X_1+ \theta_2 X_2 ) \Big\}
&=& \mathbb{E} \exp\Big\{i\theta_1 X_1\Big\} \ \mathbb{E} \exp\Big\{i\theta_2 X_2\Big\}. \end{eqnarray} Notice that \begin{eqnarray*}
\mathbb{E} \exp\Big\{i(\theta_1 X_1+ \theta_2 X_2 ) \Big\} = \exp \left\{- \int_0^1 \Big| \sum_{j=1}^2 \theta_j f_j(x) \Big|^{\alpha(x)} dx \right\}, \ \ \ \end{eqnarray*} and that \begin{eqnarray*}
\mathbb{E} \exp\Big\{i\theta_1 X_1\Big\} \ \mathbb{E} \exp\Big\{i\theta_2 X_2\Big\} =\exp \left\{- \sum_{j=1}^2\int_0^1 \Big| \theta_j f_j(x) \Big|^{\alpha(x)} dx \right\}. \end{eqnarray*} Equating the moduli of (\ref{dfds}) gives \begin{eqnarray}\label{sddvkf}
\int_0^1 \Big| \sum_{j=1}^2 \theta_j f_j(x) \Big|^{\alpha(x)} dx= \sum_{j=1}^2\int_0^1 \Big| \theta_j f_j(x) \Big|^{\alpha(x)} dx . \end{eqnarray} Notice that (\ref{sddvkf}) implies that \begin{eqnarray}
\int_0^1 \Big| f_1(x) - f_2(x) \Big|^{\alpha(x)} dx &=& \int_0^1 \Big| f_1(x) \Big|^{\alpha(x)} dx + \int_0^1 \Big| f_2(x) \Big|^{\alpha(x)} dx \label{sddhnlg2}\\
&=&\int_0^1 \Big| f_1(x) + f_2(x) \Big|^{\alpha(x)} dx. \label{sddhnlg1} \end{eqnarray}
Assume $[a,b] \subset (0, 2).$ We argue as in Lemma 2.7.14 of Samorodnitsky and Taqqu \cite{ST94}. When $\alpha \in (0, 2),$ the function $r_\alpha(u)=u^{\alpha/2}, u\geq 0,$ is strictly concave. Therefore, for fixed $x \in [0, 1],$ \begin{eqnarray}
&& |f_1(x)+f_2(x)|^{\alpha(x)}+ |f_1(x)-f_2(x)|^{\alpha(x)} \nonumber \\
&& \ \ \ \ \ \ \ \ \ \ \ \ = 2 \, \frac{r_{\alpha(x)}(|f_1(x)+f_2(x)|^2) + r_{\alpha(x)}(|f_1(x)-f_2(x)|^2)}{2} \nonumber\\
&& \ \ \ \ \ \ \ \ \ \ \ \ \leq 2 \, r_{\alpha(x)} \Big( \frac{|f_1(x)+f_2(x)|^2 + |f_1(x)-f_2(x)|^2}{2} \Big) \nonumber\\ && \ \ \ \ \ \ \ \ \ \ \ \ = 2 \, r_{\alpha(x)} \big( f_1(x)^2 + f_2(x)^2 \big) \nonumber\\
&& \ \ \ \ \ \ \ \ \ \ \ \ \leq 2 \, \big( |f_1(x)|^{\alpha(x)}+ |f_2(x)|^{\alpha(x)} \big) \label{fdd1ss} \end{eqnarray} with equality in the preceding relations being equivalent to $f_1(x)f_2(x)=0.$ Equalities (\ref{sddhnlg2}) and (\ref{sddhnlg1}) imply that \begin{eqnarray}
&& \int_0^1 \Big| f_1(x) - f_2(x) \Big|^{\alpha(x)} dx + \int_0^1 \Big| f_1(x) + f_2(x) \Big|^{\alpha(x)} dx \ \ \ \ \ \ \ \ \ \ \ \ \nonumber \\
& & \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ = 2 \Bigg( \int_0^1 \Big| f_1(x) \Big|^{\alpha(x)} dx + \int_0^1 \Big| f_2(x) \Big|^{\alpha(x)} dx \Bigg). \end{eqnarray} Now (\ref{fdd1ss}) implies that the left-hand side of the last equality is always less than or equal to its right-hand side and, if they are equal, then necessarily (\ref{dssdf}) holds.
Assume (\ref{tonghao}). Then it holds $\big| f_1(x) - f_2(x) \big| \leq \big| f_1(x) + f_2(x) \big| \ \mathcal{L}-a.s.\textrm{ on }[0, \, 1].$ When $\alpha \in (0, 2],$ the function $r_\alpha(u)=u^{\alpha/2}$ is increasing in $u \in [0, \infty).$ Hence (\ref{sddhnlg1}) holds if and only if (\ref{dssdf}) holds.
This proves that (\ref{dssdf}) is a necessary condition for the independence of $X_1$ and $X_2.$ It is also sufficient because if (\ref{dssdf}) holds, then (\ref{sddvkf}) also holds. \qed
The preceding result is very useful and will often be used in the sequel. \begin{theorem}\label{Tdsde} Assume $f_j\in \mathcal{L}_{\alpha }[0,1],$ $j=1,...,d.$ Assume either $[a,b] \subset (0, 2)$ or $f_i(x)f_k(x) \geq 0$ $\ \mathcal{L}-a.s.\textrm{ on }[0, \, 1]$ for any subset $\{i, k\}$ of $\{1,2,..., d\}.$ The multistable integrals $X_j= \int_0^1 f_j(x)M_\alpha(dx),$ $j=1,...,d,$ are independent if and only if they are pairwise independent, i.e., if and only if
\begin{eqnarray}\label{gbjs} f_i(x)f_k(x)\equiv 0 \ \ \ \mathcal{L}-a.s.\textrm{ on }[0, \, 1] \end{eqnarray} for any subset $\{i, k\}$ of $\{1,2,..., d\}.$ \end{theorem} \emph{Proof.} Independence clearly implies pairwise independence. By Theorem \ref{THinde}, pairwise independence implies (\ref{gbjs}). If (\ref{gbjs}) holds, then it holds, for any $(\theta_1, ..., \theta_d) \in \mathbb{R}^d,$ \begin{eqnarray}
\int_0^1 \Big| \sum_{j=1}^d \theta_j f_j(x) \Big|^{\alpha(x)} dx= \sum_{j=1}^d \int_0^1 \Big| \theta_j f_j(x) \Big|^{\alpha(x)} dx . \end{eqnarray} Thus the joint characteristic function of $X_1,..., X_d$ factorizes \begin{eqnarray*} \mathbb{E} \exp\Big\{i \sum_{j=1}^d \theta_j X_j \Big\}
&=& \exp \left\{- \int_0^1 \Big| \sum_{j=1}^d \theta_j f_j(x) \Big|^{\alpha(x)} dx \right\} \\
&=& \exp \left\{- \sum_{j=1}^d\int_0^1 \Big| \theta_j f_j(x) \Big|^{\alpha(x)} dx \right\} \\
&=& \prod_{j=1}^d \mathbb{E} \exp\Big\{i \theta_j X_j \Big\}. \end{eqnarray*} This proves that $X_1,..., X_d$ are independent. \qed
\subsection{Stochastic H\"{o}lder continuity} We say that a random process $X(u),\, u \in I,$ is \emph{stochastic H\"{o}lder continuous} of exponent $\beta \in (0, 1],$ if it holds \[
\limsup_{ u,r \in I,\ |u-r| \rightarrow 0 } \mathbb{P}\big(|X(u)-X(r)|\geq C |u-r|^\beta \big)=0 \] for a positive constant $C.$ It is obvious that if $X(u)$ is stochastic H\"{o}lder continuous of exponent $\beta_1 \in (0, 1],$ then $X(u)$ is stochastic H\"{o}lder continuous of exponent $\beta_2 \in (0, \beta_1].$
\noindent\textbf{Example 2.} \emph{Assume that a random process $X(u),\, u \in I,$ satisfies the following condition: there exist three strictly positive constants $\gamma, c, \rho$ such that \[
\mathbb{E} |X(u)-X(r)|^\gamma \leq c \, |u-r|^\rho,\ \ \ \ \ \ \ u,r \in I. \] Then $X(u), u \in I,$ is stochastic H\"{o}lder continuous of exponent $\beta \in (0, \min\{1, \rho/\gamma\} ).$ Indeed, it is easy to see that for all $u, r \in I,$ \begin{eqnarray*}
\mathbb{P}\Big( |X(u)-X(r)|\geq C |u-r|^\beta \Big) &\leq& \frac{\mathbb{E} |X(u)-X(r)|^\gamma }{C^\gamma |u-r|^{\beta \gamma}} \\
&\leq& \frac{c}{C^\gamma} |u-r|^{\rho-\beta \gamma}, \end{eqnarray*} which implies our claim. }
The following theorem gives a sufficient condition such that the integrals with respect to multistable L\'{e}vy measure $M_\alpha$ are stochastic H\"{o}lder continuous. \begin{theorem}\label{endthm} Assume that $X(t)= \int_0^1 f(t,x)M_\alpha(dx)$ is a multistable integral, where $f(t,x)$ is jointly measurable and $f(t,x) \in \mathcal{L}_{\alpha }[0,1]$ for all $t\in I.$ If there exist two constants $\eta > 0 $ and $C>0$ such that \begin{eqnarray}\label{fine37}
\int_0^1 \Big|
f(t,s)-f(v,s) \Big|^{\alpha(s)} ds
&\leq& C \, \Big| t - v \Big|^\eta, \ \ \ \ \ \ t, v \in I. \end{eqnarray} Then it holds \begin{eqnarray}\label{finein34}
\mathbb{P}( |X (t)-X (v)| \geq | t - v |^\beta) \leq C_{a,b}\ | t - v |^{ \eta- b\beta},\ \ \ \ \ \ t, v \in I, \end{eqnarray} where $C_{a,b}$ is a constant depending on $a, b$ and $C.$ In particular, it implies that $X(t)$ is stochastic H\"{o}lder continuous of exponent $\beta \in (0, \min\{1, \eta/b\})$.
\end{theorem}
\noindent\emph{Proof.} By the Billingsley inequality (cf. p.\ 47 of \cite{B68}) and (\ref{fine37}), it is easy to see that, for all $t, v \in I $ and all $x > 0,$ \begin{eqnarray*}
\mathbb{P}( |X (t)-X (v)| \geq x)
&\leq& \frac x2 \int_{-2/x}^{\,2/x} \left( 1 - \mathbb{E}e^ {i \theta \big(X (t)-X (v) \big)}\right)\, d\theta\\
&=& \frac x2 \int_{-2/x}^{\,2/x} \left( 1 - \exp \left\{- \int_0^1 \Big| \theta \Big( f(t,z)-f(v,z) \Big)\Big|^{\alpha(z)} dz \right\} \right)\, d\theta\\
&\leq& \frac x2 \int_{-2/x}^{\,2/x} \int_0^1 \Big| \theta \Big( f(t,z)-f(v,z) \Big)\Big|^{\alpha(z)} dz \, d\theta\\
&\leq& \frac x2 \Bigg[\int_{|\theta|< 1}\Big| \theta\Big|^{a} \, d\theta +\int_{1\leq |\theta| \leq 2/x} \Big| \theta \Big|^{b} d\theta \Bigg] \ C \Big| t - v \Big|^\eta\\
&\leq& C \left( \frac{x\, }{a+1} + \frac{2^{b+1} } {b+1 } \frac{1} {x^b } \right) \Big| t - v \Big|^\eta. \end{eqnarray*}
Taking $x= | t - v |^\beta,$ we obtain (\ref{finein34}). This implies that $X(t)$ is stochastic H\"{o}lder continuous of exponent $\beta \in (0, \min\{1, \eta/b\})$.
\qed
As an example to illustrate Theorem \ref{endthm}, consider the weighted MsLM introduced by \mbox{Falconer} and Liu \cite{FL12}. The following theorem shows that the weighted MsLM are stochastic H\"{o}lder continuous of exponent $\beta \in (0, \min\{1, 1/b\})$. \begin{theorem} \label{lemma2} Let
\[ Y(t)=\int_0^1 w(x)\mathbf{1}_{[0,\ t]}(x)M_\alpha ( dx),\ \ \ t \in [0,1], \]
be a weighted multistable L\'{e}vy motion, where the function $ w(x), x \in [0, 1],$ is c\`{a}dl\`{a}g. Then $Y(t)$ is stochastic H\"{o}lder continuous of exponent $\beta \in (0, \min\{1, 1/b\}).$ Moreover,
it holds \begin{eqnarray} \label{fskfdf}
\mathbb{P}( |
Y (t)-Y (v)| \geq | t - v |^\beta) \leq C_{a,b}\ | t - v |^{ 1- b\beta},\ \ \ \ \ \ t, v \in [0,1], \end{eqnarray} where $C_{a,b}$ is a constant depending on $a, b, \alpha(\cdot)$ and $w(\cdot).$ In particular, it implies that $L_{I } (u), u \in [0,1],$ is stochastic H\"{o}lder continuous of exponent $\beta \in (0, \min\{1, 1/b\})$. \end{theorem}
\noindent\emph{Proof.} Set $f(t,x)=w(x)\mathbf{1}_{[0, t]}(x), \ t,x \in [0, 1].$ It is easy to see that, for all $v, t \in [0,1]$ such that $v \leq t,$ \begin{eqnarray*}
\int_0^1 \Big|
f(t,s)-f(v,s) \Big|^{\alpha(s)} ds&\leq& \int_0^1 \Big|
w(s)\mathbf{1}_{[v,\ t]}(s) \Big|^{\alpha(s)} ds\\ &\leq& C_\omega \, \int_0^1
\mathbf{1}_{[v,\ t]}(s) \, ds \\ &\leq& C_\omega \, (t-v), \end{eqnarray*}
where $C_\omega = \sup_{z \in [0, 1]}|w(z)|^{\alpha(z)}.$ By Theorem \ref{endthm}, we get (\ref{fskfdf}). This completes the proof of Theorem \ref{lemma2}. \qed
\subsection{Strong localisability}
When the function $\alpha(x) \in [a, b], x \in [0, 1], $ is continuous, some sufficient conditions such that the multistable integrals
are localisable (or strongly localisable) have been obtained by Falconer and Liu. In the following theorem, we give some new conditions under which localisability can be strengthened to strong localisability. \begin{theorem}\label{scrend} Assume that $f(t,x)$ and $h(t, x)$ are jointly measurable; and that $f(t,x), h(t, x) \in \mathcal{L}_{\alpha }[0,1]$ for any $t\in [0, 1]$. Assume that $X(t)= \int_0^1 f(t,x)M_\alpha(dx)$ and $X_x'(t)=\int_0^1 h(t,x) M_\alpha(d x)$ are two multistable integrals and have versions in $D[0, 1]$. Suppose that $X(t)$ is $1/\alpha(x)-$localisable at $x$ with local form $X_x'(t).$ If there exist two constants $\eta > 1 $ and $C>0$ such that \begin{eqnarray}\label{vbsds}
\int_0^1 \Bigg| \frac{
f(x+rt,s)-f(x+rv,s) }{ r^{1/\alpha(x)} } \Bigg|^{\alpha(s)} ds
&\leq& C \Big| t - v \Big|^\eta,\ \ \ \ \ \ \ t, v \in [0, 1], \end{eqnarray} for all sufficiently small $r>0,$ then $X(t)$ is strongly localisable at all $x \in [0, 1].$ Moreover, if $X(t)$ has independent increments and $(\ref{vbsds})$ holds for a constant $\eta > 1/2$, then the conclusion still holds. \end{theorem}
Notice that condition (\ref{vbsds}) is slightly more general than the condition of Falconer and Liu (cf.\ Theorem 3.2 of \cite{FL12}): there exist two constants $\eta > 1/a $ and $C>0$ such that \begin{eqnarray}\label{vbsds1}
\Bigg|\Bigg| \frac{
f(x+rt,\cdot)-f(x+rv,\cdot) }{ r^{1/\alpha(x)} } \Bigg|\Bigg|_{\alpha }
&\leq& C \Big| t - v \Big|^\eta,\ \ \ \ \ \ \ t, v \in [0, 1], \end{eqnarray} for all sufficiently small $r>0.$
\noindent\emph{Proof.} For any $x \in [0, \, 1),$ define $$X_r(u)=\frac{X(x+ru )-X(x )}{ r^{1/\alpha(x)} } \,,\ \ \ \ \ \ r, u \in (0, \, 1].$$ By Theorem 15.6 of \mbox{Billingsley \cite{B68}}, it suffices to show that, for some $\beta> 1$ and $\tau\geq 0$, \begin{eqnarray}\label{ineq38}
\mathbb{P}\Big( \Big| X_r(u)-X_r(u_1)\Big|\geq \lambda,\ \Big|X_r(u_2)-X_r(u)\Big|\geq \lambda \Big) \leq \frac{C}{\lambda^{ \tau } } \Big[ u_2 - u_1 \Big]^{ \beta} \end{eqnarray} for $u_1 \leq u \leq u_2, \lambda> 0$ and $ r \in (0, 1]$, where $C$ is a positive constant. Since $X_r(u)-X_r(u_1)$ and $X_r(u_2)-X_r(u)$
are symmetric, it follows that \begin{eqnarray*}
&& \mathbb{P}\Bigg( \Big| X_r(u)-X_r(u_1)\Big|\geq \lambda,\ \Big|X_r(u_2)-X_r(u)\Big|\geq \lambda \Bigg) \\ &\leq&4 \, \mathbb{P}\Bigg( X_r(u)-X_r(u_1) + \Big( X_r(u_2)-X_r(u) \Big)\geq 2\lambda \Bigg) \\ &=&4 \, \mathbb{P}\Big( X_r(u_2)-X_r(u_1) \geq 2\lambda \Big). \end{eqnarray*} By the Billingsley inequality (cf. p.\ 47 of \cite{B68}) and (\ref{vbsds}), we have \begin{eqnarray} \mathbb{P}\Big( X_r(u_2)-X_r(u_1) \geq 2\lambda \Big) &\leq& \lambda \int_{-1/\lambda}^{1/\lambda } \Bigg( 1- \mathbb{E}e^{i\theta \big(X_r(u_2)-X_r(u_1)\big)}\Bigg) d\theta \nonumber \\
&=& \lambda \int_{-1/\lambda}^{1/\lambda } \Bigg( 1- e^{- \int_0^1 \big|\theta \frac{
f(x+ru_2,s)-f(x+ru_1,s) }{ r^{1/\alpha(x)} } \big|^{\alpha(s)} ds }\Bigg) d\theta \nonumber \\
&\leq& \lambda \int_{-1/\lambda}^{1/\lambda } \int_0^1 \Bigg|\theta \frac{
f(x+ru_2,s)-f(x+ru_1,s) }{ r^{1/\alpha(x)} } \Bigg|^{\alpha(s)} ds \, d\theta \nonumber \\
&\leq& \lambda \int_{-1/\lambda}^{1/\lambda } |\theta|^{\mu} \int_0^1 \Bigg| \frac{
f(x+ru_2,s)-f(x+ru_1,s) }{ r^{1/\alpha(x)} } \Bigg|^{\alpha(s)} ds \, d\theta \nonumber \\ &\leq&\frac{C_1}{\lambda^{\gamma} } \Big[ u_2 - u_1 \Big]^\eta, \label{sfnmbs} \end{eqnarray} where $\mu=a\mathbf{1}_{(0,\, 1 )}(|\theta|)+ b\mathbf{1}_{[1,\, \infty)} (|\theta|), \gamma = a\mathbf{1}_{[1,\, \infty)} (\lambda)+ b\mathbf{1}_{(0,\, 1 )}(\lambda)$ and $C_1$ is a positive constant depending only on $a, b$ and $C.$ Thus \begin{eqnarray*}
\mathbb{P}\Big( \Big| X_r(u)-X_r(u_1)\Big|\geq \lambda,\ \Big|X_r(u_2)-X_r(u)\Big|\geq \lambda \Big) &\leq&\frac{4C_1}{\lambda^{\gamma} } \Big[ u_2 - u_1 \Big]^\eta. \end{eqnarray*} Hence, by (\ref{ineq38}), if $\eta>1$, then $X(t)$ is $1/\alpha(x)-$strongly localisable at $x$ with strong local form $X_x'(t)$.
If $X(t)$ has independent increments, then \begin{eqnarray}
&& \mathbb{P}\Big( \Big| X_r(u)-X_r(u_1)\Big|\geq \lambda,\ \Big|X_r(u_2)-X_r(u)\Big|\geq \lambda \Big) \nonumber\\
&&\ \ \ \ \ \ \ \ \ \ \ =\ \mathbb{P}\Big( \Big| X_r(u)-X_r(u_1)\Big|\geq \lambda \Big) \, \mathbb{P}\Big( \Big|X_r(u_2)-X_r(u)\Big|\geq \lambda \Big). \end{eqnarray} By an argument similar to (\ref{sfnmbs}), it follows that \begin{eqnarray*}
&& \mathbb{P}\Big( \Big| X_r(u)-X_r(u_1)\Big|\geq \lambda \Big)\ \leq \ \frac{4C_1 }{\lambda^{\gamma} } \Big[ u - u_1 \Big]^\eta \end{eqnarray*} and \begin{eqnarray*}
&& \mathbb{P}\Big( \Big|X_r(u_2)-X_r(u)\Big|\geq \lambda \Big) \ \leq \ \frac{4C_1 }{\lambda^{\gamma} } \Big[ u_2 - u \Big]^\eta . \end{eqnarray*} Using the inequality $xy \leq (x+y)^2/4,$ $x, y \geq0,$ we have \begin{eqnarray}
\mathbb{P}\Big( \Big| X_r(u)-X_r(u_1)\Big|\geq \lambda,\ \Big|X_r(u_2)-X_r(u)\Big|\geq \lambda \Big) \leq \frac{16C_1^2}{\lambda^{2 \gamma } } \Big[ u_2 - u_1 \Big]^{2\eta}. \end{eqnarray} Thus, if $2\eta> 1,$ then, by (\ref{ineq38}), $X(t)$ is $1/\alpha(x)-$strongly localisable at $x$. This completes the proof of the theorem.\qed
As an example to illustrate Theorem \ref{scrend}, consider the weighted MsLM. Falconer and Liu have proved that the weighted MsLM are localisable. The following theorem shows that the weighted MsLM are not only localisable but also strongly localisable. In particular, it shows that the independent-increments MsLM are strongly localisable.
\begin{theorem}\label{themni} Assume that the function $\alpha(u), u \in [0,1],$ satisfies condition
(\ref{coalph}). Let
\[ Y(t)=\int_0^1 w(x)\mathbf{1}_{[0,\ t]}(x)M_\alpha ( dx), \ \ \ \ \ t \in [0, 1], \]
be a weighted multistable L\'{e}vy motion, where the function $ w(x), x \in [0, 1],$ is continuous. Then $Y(t)$ is $1/\alpha(x)-$strongly localisable at all $x\in [0, 1]$ with strong local form $w(x)L_{\alpha (x) }(\cdot)$. In particular, this implies that $L_{I }(t)$ is $1/\alpha(x)-$strongly localisable at all $x\in [0, 1]$ with strong local form $L_{\alpha (x) }(\cdot)$, an $\alpha (x)-$stable L\'{e}vy motion. \end{theorem} \noindent\emph{Proof.} It is known that $Y(t)$ is $1/\alpha(x)-$localisable at all $x$ with local form $w(x)L_{\alpha (x) }(\cdot)$; see Falconer and Liu \cite{FL12}. Set $f(t,x)=w(x)\mathbf{1}_{[0,\ t]}(x), \ t,x \in [0, 1].$ By (\ref{sdfdsdf}), the integrand of $Y(t)$ satisfies, for all $t, v \in [0, 1]$ such that $v \leq t,$ \begin{eqnarray}
\int_0^1 \Bigg| \frac{f(x+rt,s) - f(x+rv,s) }{ r^{1/\alpha(x)} } \Bigg|^{\alpha(s)} ds
&=&\int_0^1 \Bigg| \frac{w(s)\mathbf{1}_{[x+rv, \, x+rt]}(s) }{ r^{1/\alpha(x)} } \Bigg|^{\alpha(s)} ds \nonumber \\
&=&\int \Big| w(x+rz) \mathbf{1}_{[v,\ t ] }(z)\Big|^{\alpha (x+rz)} r^{(\alpha (x)-\alpha (x+rz))/\alpha (x)} dz \nonumber \\ &\leq&C_w\int \mathbf{1}_{[v,\ t ] }(z)\, r^{(\alpha (x)-\alpha (x+rz))/\alpha (x)} dz \nonumber \\ &\leq& 2C_w \ (t- v), \nonumber \end{eqnarray} for all sufficiently small $r>0,$
where $s=x+rz$ and $C_w = \sup_{z \in [0, 1]}|w(z)|^{\alpha(z)}. $ By the fact that the integrals of functions with disjoint supports are independent, it is easy to see that $Y(t)$ has independent increments. Since the above estimate shows that condition (\ref{vbsds}) holds with $\eta=1>1/2$, the first claim of the theorem follows from Theorem \ref{scrend}. In particular, since $L_{I }(t) =\int_0^1 \mathbf{1}_{[0,\ t]}(x)M_\alpha (dx),$ the first claim of the theorem implies the second one with $w(x)=1, x \in [0, 1].$ \qed \begin{remark} By inspecting the proof of Falconer and Liu \cite{FL12}, we can see that $Y(t)$ is also $1/\alpha(x)-$localisable at all $x$ with local form $w(x)L_{\alpha (x) }(t)$ when the function $ w(x), x \in [0, 1],$ is c\`{a}dl\`{a}g. Hence, Theorem \ref{themni} holds true when the function $ w(x), x \in [0, 1],$ is c\`{a}dl\`{a}g.
\end{remark}
\end{document} | arXiv |
# The Fourier transform and its applications
The Fourier transform is defined as:
$$F(\omega) = \int_{-\infty}^{\infty} f(t) e^{-j\omega t} dt$$
where $f(t)$ is the input signal, $F(\omega)$ is the Fourier transform of the signal, and $\omega$ is the frequency.
Consider a rectangular pulse signal with a width of 1 second and a period of 2 seconds. Calculate the Fourier transform of this signal.
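The answer depends on whether the pulse is treated as a single pulse or as a periodic signal. As a sketch, take one period's worth of the signal to be a single pulse equal to 1 on $[0, 1)$ and 0 elsewhere; its Fourier transform is

$$F(\omega) = \int_{0}^{1} e^{-j\omega t} dt = \frac{1 - e^{-j\omega}}{j\omega} = e^{-j\omega/2} \frac{\sin(\omega/2)}{\omega/2}$$

For the periodic signal with period 2 seconds, the spectrum instead becomes a train of impulses at integer multiples of $\omega_0 = \pi$ rad/s, weighted by samples of this sinc-shaped function.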
## Exercise
Calculate the Fourier transform of the following signal:
$$f(t) = \begin{cases} 1, & 0 \leq t < 1 \\ 0, & otherwise \end{cases}$$
The Fourier transform has many applications in signal processing, including filtering, modulation, and spectral analysis. We will explore these applications in detail in the following sections.
# Convolution and cross-correlation
Convolution is defined as:
$$(f * g)(t) = \int_{-\infty}^{\infty} f(\tau) g(t - \tau) d\tau$$
where $f(\tau)$ and $g(t - \tau)$ are the input signals.
Consider two signals $f(t) = \delta(t)$ and $g(t) = \delta(t - 1)$. Calculate the convolution of these signals.
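Using the sifting property of the delta function, this convolution can be evaluated directly:

$$(f * g)(t) = \int_{-\infty}^{\infty} \delta(\tau) \delta(t - \tau - 1) d\tau = \delta(t - 1)$$

Convolving a signal with a shifted impulse simply shifts the signal by the same amount.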
## Exercise
Calculate the convolution of the following signals:
$$f(t) = \begin{cases} 1, & 0 \leq t < 1 \\ 0, & otherwise \end{cases}$$
$$g(t) = \begin{cases} 1, & 1 \leq t < 2 \\ 0, & otherwise \end{cases}$$
Cross-correlation is defined as:
$$(f \star g)(t) = \int_{-\infty}^{\infty} f(\tau) g(t + \tau) d\tau$$
where $f(\tau)$ and $g(t + \tau)$ are the input signals.
Consider two signals $f(t) = \delta(t)$ and $g(t) = \delta(t - 1)$. Calculate the cross-correlation of these signals.
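By the same sifting argument used for convolution:

$$(f \star g)(t) = \int_{-\infty}^{\infty} \delta(\tau) \delta(t + \tau - 1) d\tau = \delta(t - 1)$$

For these particular signals the cross-correlation coincides with the convolution; in general the two differ because cross-correlation does not time-reverse the second signal.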
## Exercise
Calculate the cross-correlation of the following signals:
$$f(t) = \begin{cases} 1, & 0 \leq t < 1 \\ 0, & otherwise \end{cases}$$
$$g(t) = \begin{cases} 1, & 1 \leq t < 2 \\ 0, & otherwise \end{cases}$$
# The role of filters in signal processing
There are two main types of filters:
1. Low-pass filters: These filters allow low-frequency signals to pass and attenuate high-frequency signals.
2. High-pass filters: These filters allow high-frequency signals to pass and attenuate low-frequency signals.
Consider a low-pass filter with a cutoff frequency of 1 Hz. Calculate the frequency response of this filter.
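The example does not pin down the filter's order or implementation, so as a sketch assume a first-order (RC-type) low-pass filter with cutoff $\omega_c = 2\pi \cdot 1$ rad/s:

$$H(\omega) = \frac{1}{1 + j\omega/\omega_c}, \qquad |H(\omega)| = \frac{1}{\sqrt{1 + (\omega/\omega_c)^2}}$$

At the cutoff frequency the magnitude is $1/\sqrt{2}$, the familiar $-3$ dB point.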
## Exercise
Design a low-pass filter with a cutoff frequency of 1 Hz and a roll-off rate of 20 dB/decade. Calculate the frequency response of this filter.
# Time-frequency analysis and the sampling theorem
The sampling theorem states that a band-limited continuous-time signal can be perfectly reconstructed from its samples if the sampling rate is at least twice the highest frequency component of the signal (the Nyquist rate).
Consider a signal with a highest frequency component of 10 Hz. Calculate the minimum sampling rate required to accurately reconstruct the signal.
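Applying the sampling theorem directly:

$$f_s \geq 2 f_{max} = 2 \times 10 \text{ Hz} = 20 \text{ Hz}$$

so the signal must be sampled at 20 Hz or faster; in practice a margin above this minimum rate is used.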
## Exercise
Design a sampling strategy for a signal with a highest frequency component of 10 Hz. Calculate the sampling rate and the sampling period.
# Noise reduction techniques
Filtering is a common noise reduction technique that involves passing the signal through a filter with a specific frequency response. This can be achieved using low-pass, high-pass, or band-pass filters.
Consider a signal with Gaussian white noise added. Calculate the power spectral density of the noisy signal.
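A sketch of the answer, assuming the noise $n(t)$ is zero-mean, wide-sense stationary, independent of the signal $x(t)$, and white with two-sided power spectral density $N_0/2$:

$$S_y(f) = S_x(f) + \frac{N_0}{2}$$

that is, white noise adds a flat noise floor on top of the signal's own spectrum.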
## Exercise
Design a filter with a band-pass frequency range of 1 Hz to 10 Hz. Calculate the frequency response of this filter.
# Spectral analysis and frequency domain analysis
Spectral analysis involves analyzing the signal in the frequency domain using the Fourier transform. This can be used to identify the frequency components of a signal and their amplitudes.
Consider a rectangular pulse signal with a width of 1 second and a period of 2 seconds. Calculate the frequency spectrum of this signal.
## Exercise
Calculate the frequency spectrum of the following signal:
$$f(t) = \begin{cases} 1, & 0 \leq t < 1 \\ 0, & otherwise \end{cases}$$
# Applications of signal processing and analysis techniques
Applications of signal processing and analysis techniques include:
- Communication systems: These techniques are used to transmit and receive signals over noisy channels, such as wireless communication systems.
- Image processing: These techniques are used to analyze and process digital images, such as compression, filtering, and feature extraction.
- Audio signal processing: These techniques are used to analyze and process audio signals, such as speech recognition and music analysis.
# Real-world examples and case studies
Examples include:
- Medical imaging: Signal processing and analysis techniques are used to analyze medical images, such as MRI and CT scans, to diagnose diseases and treat patients.
- Speech recognition: Signal processing and analysis techniques are used to analyze and recognize speech signals, such as voice commands and speech-to-text systems.
- Natural language processing: Signal processing and analysis techniques are used to analyze and process natural language data, such as sentiment analysis and machine translation.
## Exercise
Discuss a real-world example or case study that demonstrates the application of signal processing and analysis techniques in a specific field. | Textbooks |
Simulating Correlated Binary and Multinomial Responses with SimCorMultRes
Anestis Touloumis
2 Areas of Applications
3 Simulation Methods
3.1 Correlated nominal responses
3.2 Correlated ordinal responses
3.2.1 Marginal cumulative link model
3.2.2 Marginal continuation-ratio model
3.2.3 Marginal adjacent-category logit model
3.3 Correlated binary responses
3.4 No marginal model specification
4 How to Cite
The R package SimCorMultRes is suitable for simulation of correlated binary responses (exactly two response categories) and of correlated nominal or ordinal multinomial responses (three or more response categories) conditional on a regression model specification for the marginal probabilities of the response categories. This vignette briefly describes the simulation methods proposed by Touloumis (2016) and illustrates the use of the core functions of SimCorMultRes. A more detailed description of SimCorMultRes can be found in Touloumis (2016).
This package was created to facilitate the task of carrying out simulation studies and evaluating the performance of statistical methods for estimating the regression parameters in a marginal model with clustered binary and multinomial responses. Examples of such statistical methods include maximum likelihood methods, copula approaches, quasi-least squares approaches, generalized quasi-likelihood methods and generalized estimating equations (GEE) approaches among others (see references in Touloumis 2016).
In addition, SimCorMultRes can generate correlated binary and multinomial random variables conditional on a desired dependence structure and known marginal probabilities even if these are not determined by a regression model (see third example in Touloumis 2016) or to explore approximations of association measures for discrete variables that arise as realizations of an underlying continuum (see second example in Touloumis 2016).
Let \(Y_{it}\) be the binary or multinomial response for subject \(i\) (\(i=1,\ldots,N\)) at measurement occasion \(t\) (\(t=1,\ldots,T\)), and let \(\mathbf {x}_{it}\) be the associated covariates vector. We assume that \(Y_{it} \in \{0,1\}\) for binary responses and \(Y_{it} \in \{1,2,\ldots,J\geq 3\}\) for multinomial responses.
The function rmult.bcl simulates nominal responses under the marginal baseline-category logit model \[\begin{equation} \log \left[\frac{\Pr(Y_{it}=j |\mathbf {x}_{it})}{\Pr(Y_{it}=J |\mathbf {x}_{it})}\right]=(\beta_{tj0}-\beta_{tJ0})+(\boldsymbol {\beta}_{tj}-\boldsymbol{\beta}_{tJ})^{\prime} \mathbf {x}_{it}=\beta^{\ast}_{tj0}+\boldsymbol{\beta}^{\ast\prime}_{tj}\mathbf {x}_{it}, \tag{3.1} \end{equation}\] where \(\beta_{tj0}\) is the \(j\)-th category-specific intercept at measurement occasion \(t\) and \(\boldsymbol{\beta}_{tj}\) is the \(j\)-th category-specific parameter vector associated with the covariates at measurement occasion \(t\). The popular identifiability constraints \(\beta_{tJ0}=0\) and \(\boldsymbol{\beta}_{tJ}=\mathbf {0}\) for all \(t\), imply that \(\beta^{\ast}_{tj0}=\beta_{tj0}\) and \(\boldsymbol {\beta}^{\ast}_{tj}=\boldsymbol{\beta}_{tj}\) for all \(t=1,\ldots,T\) and \(j=1,\ldots,J-1\). The threshold \[Y_{it}=j \Leftrightarrow U^{NO}_{itj}=\max \{U^{NO}_{it1},\ldots,U^{NO}_{itJ}\}\] generates clustered nominal responses that satisfy the marginal baseline-category logit model (3.1), where \[U^{NO}_{itj}=\beta_{tj0}+\boldsymbol{\beta}_{tj}^{\prime} \mathbf {x}_{it}+e^{NO}_{itj},\] and where the random variables \(\{e^{NO}_{itj}:i=1,\ldots,N \text{, } t=1,\ldots,T \text{ and } j=1,\ldots,J\}\) satisfy the following conditions:
\(e^{NO}_{itj}\) follows the standard extreme value distribution for all \(i\), \(t\) and \(j\) (mean \(=\gamma \approx 0.5772\), where \(\gamma\) is Euler's constant, and variance \(=\pi^2/6\)).
\(e^{NO}_{i_1t_1j_1}\) and \(e^{NO}_{i_2t_2j_2}\) are independent random variables provided that \(i_1 \neq i_2\).
\(e^{NO}_{itj_1}\) and \(e^{NO}_{itj_2}\) are independent random variables provided that \(j_1\neq j_2\).
For each subject \(i\), the association structure among the clustered nominal responses \(\{Y_{it}:t=1,\ldots,T\}\) depends on the joint distribution and correlation matrix of \(\{e^{NO}_{itj}:t=1,\ldots,T \text{ and } j=1,\ldots,J\}\). If the random variables \(\{e^{NO}_{itj}:t=1,\ldots,T \text{ and } j=1,\ldots,J\}\) are independent then so are \(\{Y_{it}:t=1,\ldots,T\}\).
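It may help to recall why this threshold reproduces model (3.1): for independent standard extreme value errors across categories (conditions 1 and 3 above), the maximum-utility representation has the well-known closed form \[ \Pr(Y_{it}=j \mid \mathbf{x}_{it})=\Pr\Big(U^{NO}_{itj}=\max_{1\leq k\leq J}U^{NO}_{itk}\Big)=\frac{\exp(\beta_{tj0}+\boldsymbol{\beta}^{\prime}_{tj}\mathbf{x}_{it})}{\sum_{k=1}^{J}\exp(\beta_{tk0}+\boldsymbol{\beta}^{\prime}_{tk}\mathbf{x}_{it})}, \] which is exactly the marginal baseline-category logit model (3.1).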
Example 3.1 (Simulation of clustered nominal responses using the NORTA method) Suppose the aim is to simulate nominal responses from the marginal baseline-category logit model \[\begin{equation*} \log \left[\frac{\Pr(Y_{it}=j |\mathbf {x}_{it})}{\Pr(Y_{it}=4 |\mathbf {x}_{it})}\right]=\beta_{j0}+ \beta_{j1} {x}_{i1}+ \beta_{j2} {x}_{it2} \end{equation*}\] where \(N=500\), \(T=3\), \((\beta_{10},\beta_{11},\beta_{12},\beta_{20},\beta_{21},\beta_{22},\beta_{30},\beta_{31},\beta_{32})=(1, 3, 2, 1.25, 3.25, 1.75, 0.75, 2.75, 2.25)\) and \(\mathbf {x}_{it}=(x_{i1},x_{it2})^{\prime}\) for all \(i\) and \(t\), with \(x_{i1}\overset{iid}{\sim} N(0,1)\) and \(x_{it2}\overset{iid}{\sim} N(0,1)\). For the dependence structure, suppose that the correlation matrix \(\mathbf{R}\) in the NORTA method has elements \[ \mathbf{R}_{t_1j_1,t_2j_2}=\begin{cases} 1 & \text{if } t_1=t_2 \text{ and } j_1=j_2\\ 0.95 & \text{if } t_1 \neq t_2 \text{ and } j_1=j_2\\ 0 & \text{otherwise }\\ \end{cases} \] for all \(i=1,\ldots,500\).
# parameter vector
betas <- c(1, 3, 2, 1.25, 3.25, 1.75, 0.75, 2.75, 2.25, 0, 0, 0)
# sample size
sample_size <- 500
# number of nominal response categories
categories_no <- 4
# cluster size
cluster_size <- 3
set.seed(1)
# time-stationary covariate x_{i1}
x1 <- rep(rnorm(sample_size), each = cluster_size)
# time-varying covariate x_{it2}
x2 <- rnorm(sample_size * cluster_size)
# create covariates dataframe
xdata <- data.frame(x1, x2)
set.seed(321)
library("SimCorMultRes")
# latent correlation matrix for the NORTA method
equicorrelation_matrix <- toeplitz(c(1, rep(0.95, cluster_size - 1)))
identity_matrix <- diag(categories_no)
latent_correlation_matrix <- kronecker(equicorrelation_matrix, identity_matrix)
# simulation of clustered nominal responses
simulated_nominal_dataset <- rmult.bcl(clsize = cluster_size, ncategories = categories_no,
betas = betas, xformula = ~x1 + x2, xdata = xdata, cor.matrix = latent_correlation_matrix)
suppressPackageStartupMessages(library("multgee"))
# fitting a GEE model
nominal_gee_model <- nomLORgee(y ~ x1 + x2, data = simulated_nominal_dataset$simdata,
id = id, repeated = time, LORstr = "time.exch")
# checking regression coefficients
round(coef(nominal_gee_model), 2)
#> beta10 x1:1 x2:1 beta20 x1:2 x2:2 beta30 x1:3 x2:3
#> 1.07 3.18 1.99 1.35 3.40 1.70 0.89 3.06 2.22
Simulation of clustered ordinal responses is feasible under a marginal cumulative link model, a marginal continuation-ratio model or a marginal adjacent-category logit model.
The function rmult.clm simulates ordinal responses under the marginal cumulative link model \[\begin{equation} \Pr(Y_{it}\le j |\mathbf {x}_{it})=F(\beta_{tj0} +\boldsymbol {\beta}^{\prime}_t \mathbf {x}_{it}) \tag{3.2} \end{equation}\] where \(F\) is a cumulative distribution function (cdf), \(\beta_{tj0}\) is the \(j\)-th category-specific intercept at measurement occasion \(t\) and \(\boldsymbol \beta_t\) is the regression parameter vector associated with the covariates at measurement occasion \(t\). The category-specific intercepts at each measurement occasion \(t\) are assumed to be monotone increasing, that is \[-\infty=\beta_{t00} <\beta_{t10} < \beta_{t20} < \cdots < \beta_{t(J-1)0}< \beta_{tJ0}=\infty\] for all \(t\). Using the threshold \[Y_{it}=j \Leftrightarrow \beta_{t(j-1)0} < U^{O1}_{it} \leq \beta_{tj0}\] clustered ordinal responses that satisfy the marginal cumulative link model (3.2) are generated, where \[U^{O1}_{it}=-\boldsymbol{\beta}^{\prime}_t \mathbf {x}_{it}+e^{O1}_{it},\] and where \(\{e^{O1}_{it}:i=1,\ldots,N \text{ and } t=1,\ldots,T\}\) are random variables such that:
\(e^{O1}_{it} \sim F\) for all \(i\) and \(t\).
\(e^{O1}_{i_1t_1}\) and \(e^{O1}_{i_2t_2}\) are independent random variables provided that \(i_1 \neq i_2\).
For each subject \(i\), the association structure among the clustered ordinal responses \(\{Y_{it}:t=1,\ldots,T\}\) depends on the pairwise bivariate distributions and correlation matrix of \(\{e^{O1}_{it}:t=1,\ldots,T\}\). If the random variables \(\{e^{O1}_{it}:t=1,\ldots,T\}\) are independent then so are \(\{Y_{it}:t=1,\ldots,T\}\).
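As a concrete toy illustration of this threshold (not part of the vignette code), the following lines generate a single ordinal response for one subject at one measurement occasion with \(\mathbf{x}_{it}=0\) under the probit link, using the intercepts of Example 3.2 below:

# toy illustration: one subject, one occasion, x = 0, probit link
beta_intercepts <- c(-1.5, -0.5, 0.5, 1.5)
latent_u <- rnorm(1)  # U = -beta' x + e with e ~ N(0, 1) and x = 0
# Y = j <=> beta_{(j-1)0} < U <= beta_{j0}, with beta_{00} = -Inf and beta_{J0} = Inf
y <- cut(latent_u, breaks = c(-Inf, beta_intercepts, Inf), labels = FALSE)
y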
Example 3.2 (Simulation of clustered ordinal responses conditional on a marginal cumulative probit model with time-varying regression parameters) Suppose the goal is to simulate correlated ordinal responses from the marginal cumulative probit model \[\begin{equation*} \Pr(Y_{it}\le j |\mathbf x_{it})=\Phi(\beta_{j0} + \beta_{t1} {x}_{i}) \end{equation*}\] where \(\Phi\) denotes the cdf of the standard normal distribution (mean \(=0\) and variance \(=1\)), \(N=500\), \(T=4\), \((\beta_{10},\beta_{20},\beta_{30},\beta_{40})=(-1.5,-0.5,0.5,1.5)\), \((\beta_{11},\beta_{21},\beta_{31},\beta_{41})=(1,2,3,4)\) and \(\mathbf x_{it}=x_{i}\overset{iid}{\sim} N(0,1)\) for all \(i\) and \(t\). For the dependence structure, assume that \(\mathbf{e}_i^{O1}=(e^{O1}_{i1},e^{O1}_{i2},e^{O1}_{i3},e^{O1}_{i4})^{\prime}\) are iid random vectors from a tetra-variate normal distribution with mean vector the zero vector and covariance matrix the correlation matrix \[\left( {\begin{array}{*{20}c} 1.00 & 0.85 & 0.50 & 0.15 \\ 0.85 & 1.00 & 0.85 & 0.50 \\ 0.50 & 0.85 & 1.00 & 0.85 \\ 0.15 & 0.50 & 0.85 & 1.00 \end{array} } \right).\]
set.seed(12345)
# cluster size
cluster_size <- 4
# category-specific intercepts
beta_intercepts <- c(-1.5, -0.5, 0.5, 1.5)
# time-varying regression parameters associated with covariates
beta_coefficients <- matrix(c(1, 2, 3, 4), 4, 1)
# time-stationary covariate
x <- rep(rnorm(sample_size), each = cluster_size)
latent_correlation_matrix <- toeplitz(c(1, 0.85, 0.5, 0.15))
# simulation of ordinal responses
simulated_ordinal_dataset <- rmult.clm(clsize = cluster_size, intercepts = beta_intercepts,
betas = beta_coefficients, xformula = ~x, cor.matrix = latent_correlation_matrix,
link = "probit")
# first eight rows of the simulated dataframe
head(simulated_ordinal_dataset$simdata, n = 8)
#> y x id time
#> 1 1 0.5855288 1 1
The function rmult.crm simulates clustered ordinal responses under the marginal continuation-ratio model \[\begin{equation} \Pr(Y_{it}=j |Y_{it} \ge j,\mathbf {x}_{it})=F(\beta_{tj0} +\boldsymbol {\beta}^{'}_t \mathbf {x}_{it}) \tag{3.3} \end{equation}\] where \(\beta_{tj0}\) is the \(j\)-th category-specific intercept at measurement occasion \(t\), \(\boldsymbol \beta_t\) is the regression parameter vector associated with the covariates at measurement occasion \(t\) and \(F\) is a cdf. This is accomplished by utilizing the threshold \[Y_{it}=j, \text{ given } Y_{it} \geq j \Leftrightarrow U^{O2}_{itj} \leq \beta_{tj0}\] where \[U^{O2}_{itj}=-\boldsymbol {\beta}^{\prime}_t \mathbf {x}_{it}+e^{O2}_{itj},\] and where \(\{e^{O2}_{itj}:i=1,\ldots,N \text{ , } t=1,\ldots,T \text{ and } j=1,\ldots,J-1\}\) satisfy the following three conditions:
\(e^{O2}_{itj} \sim F\) for all \(i\), \(t\) and \(j\).
\(e^{O2}_{i_1t_1j_1}\) and \(e^{O2}_{i_2t_2j_2}\) are independent random variables provided that \(i_1 \neq i_2\).
\(e^{O2}_{itj_1}\) and \(e^{O2}_{itj_2}\) are independent random variables provided that \(j_1\neq j_2\).
For each subject \(i\), the association structure among the clustered ordinal responses \(\{Y_{it}:t=1,\ldots,T\}\) depends on the joint distribution and correlation matrix of \(\{e^{O2}_{itj}:j=1,\ldots,J \text{ and } t=1,\ldots,T\}\). If the random variables \(\{e^{O2}_{itj}:j=1,\ldots,J \text{ and } t=1,\ldots,T\}\) are independent then so are \(\{Y_{it}:t=1,\ldots,T\}\).
Example 3.3 (Simulation of clustered ordinal responses conditional on a marginal continuation-ratio probit model) Suppose simulation of clustered ordinal responses under the marginal continuation-ratio probit model \[\begin{equation*} \Pr(Y_{it}=j |Y_{it} \ge j,\mathbf{x}_{it})=\Phi(\beta_{j0} + \beta {x}_{it}) \end{equation*}\] with \(N=500\), \(T=4\), \((\beta_{10},\beta_{20},\beta_{30},\beta_{40},\beta)=(-1.5,-0.5,0.5,1.5,1)\) and \(\mathbf{x}_{it}=x_{it}\overset{iid}{\sim} N(0,1)\) for all \(i\) and \(t\) is desired. For the dependence structure, assume that \(\left\{\mathbf{e}_i^{O2}=\left(e^{O2}_{i11},\ldots,e^{O1}_{i44}\right)^{\prime}:i=1,\ldots,N\right\}\) are iid random vectors from a multivariate normal distribution with mean vector the zero vector and covariance matrix the \(16 \times 16\) correlation matrix with elements\[ \text{corr}(e^{O2}_{it_1j_1},e^{O2}_{it_2j_2}) = \begin{cases} 1 & \text{for } j_1 = j_2 \text{ and } t_1 = t_2\\ 0.24 & \text{for } t_1 \neq t_2\\ 0 & \text{otherwise.}\\ \end{cases} \]
# regression parameters associated with covariates
beta_coefficients <- 1
# time-varying covariate
x <- rnorm(sample_size * cluster_size)
# number of ordinal response categories
categories_no <- 5
# correlation matrix for the NORTA method
latent_correlation_matrix <- diag(1, (categories_no - 1) * cluster_size) + kronecker(toeplitz(c(0,
rep(0.24, categories_no - 2))), matrix(1, cluster_size, cluster_size))
simulated_ordinal_dataset <- rmult.crm(clsize = cluster_size, intercepts = beta_intercepts,
    betas = beta_coefficients, xformula = ~x, cor.matrix = latent_correlation_matrix,
    link = "probit")
# first six clusters with ordinal responses
head(simulated_ordinal_dataset$Ysim)
#> t=1 t=2 t=3 t=4
#> i=1 2 1 3 1
The function rmult.acl simulates clustered ordinal responses under the marginal adjacent-category logit model \[\begin{equation} \log\left[\frac{\Pr(Y_{it}=j |\mathbf {x}_{it})}{\Pr(Y_{it}=j+1 |\mathbf {x}_{it})}\right]=\beta_{tj0} +\boldsymbol {\beta}^{'}_t \mathbf {x}_{it} \tag{3.4} \end{equation}\] where \(\beta_{tj0}\) is the \(j\)-th category-specific intercept at measurement occasion \(t\), \(\boldsymbol \beta_t\) is the regression parameter vector associated with the covariates at measurement occasion \(t\).
Generation of clustered ordinal responses relies upon utilizing the connection between baseline-category logit models and adjacent-category logit models. In particular, the threshold \[Y_{it}=j \Leftrightarrow U^{O3}_{itj}=\max \{U^{O3}_{it1},\ldots,U^{O3}_{itJ}\}\] generates clustered nominal responses that satisfy the marginal adjacent-category logit model (3.4), where \[U^{O3}_{itj}=\sum_{k=j}^J\beta_{tk0}+(J-j)\boldsymbol{\beta}_{t}^{\prime} \mathbf {x}_{it}+e^{O3}_{itj},\] and where the random variables \(\{e^{O3}_{itj}:i=1,\ldots,N \text{, } t=1,\ldots,T \text{ and } j=1,\ldots,J\}\) satisfy the following conditions:
\(e^{O3}_{itj}\) follows the standard extreme value distribution for all \(i\), \(t\) and \(j\).
For each subject \(i\), the association structure among the clustered ordinal responses \(\{Y_{it}:t=1,\ldots,T\}\) depends on the joint distribution and correlation matrix of \(\{e^{O3}_{itj}:t=1,\ldots,T \text{ and } j=1,\ldots,J\}\). If the random variables \(\{e^{O3}_{itj}:t=1,\ldots,T \text{ and } j=1,\ldots,J\}\) are independent then so are \(\{Y_{it}:t=1,\ldots,T\}\).
Example 3.4 (Simulation of clustered ordinal responses conditional on a marginal adjacent-category logit model using the NORTA method) Suppose the aim is to simulate ordinal responses from the marginal adjacent-category logit model \[\begin{equation*} \log \left[\frac{\Pr(Y_{it}=j |\mathbf {x}_{it})}{\Pr(Y_{it}=j+1 |\mathbf {x}_{it})}\right]=\beta_{j0}+ \beta_{1} {x}_{i1}+ \beta_{2} {x}_{it2} \end{equation*}\] where \(N=500\), \(T=3\), \((\beta_{10},\beta_{20},\beta_{30})=(3, 2, 1)\), \((\beta_{1},\beta_{2})=(1, 1)\) and \(\mathbf {x}_{it}=(x_{i1},x_{it2})^{\prime}\) for all \(i\) and \(t\), with \(x_{i1}\overset{iid}{\sim} N(0,1)\) and \(x_{it2}\overset{iid}{\sim} N(0,1)\). For the dependence structure, suppose that the correlation matrix \(\mathbf{R}\) in the NORTA method has elements \[ \mathbf{R}_{t_1j_1,t_2j_2}=\begin{cases} 1 & \text{if } t_1=t_2 \text{ and } j_1=j_2\\ 0.95 & \text{if } t_1 \neq t_2 \text{ and } j_1=j_2\\ 0 & \text{otherwise }\\ \end{cases} \] for all \(i=1,\ldots,500\).
# intercepts
beta_intercepts <- c(3, 2, 1)
beta_coefficients <- c(1, 1)
# cluster size
cluster_size <- 3
# latent correlation matrix for the NORTA method
equicorrelation_matrix <- toeplitz(c(1, rep(0.95, cluster_size - 1)))
identity_matrix <- diag(4)
latent_correlation_matrix <- kronecker(equicorrelation_matrix, identity_matrix)
# simulation of clustered ordinal responses
simulated_ordinal_dataset <- rmult.acl(clsize = cluster_size, intercepts = beta_intercepts,
betas = beta_coefficients, xformula = ~x1 + x2, xdata = xdata, cor.matrix = latent_correlation_matrix)
ordinal_gee_model <- ordLORgee(y ~ x1 + x2, data = simulated_ordinal_dataset$simdata,
id = id, repeated = time, LORstr = "time.exch", link = "acl")
round(coef(ordinal_gee_model), 2)
#> beta10 beta20 beta30 x1 x2
#> 2.95 1.97 1.67 1.14 1.00
The function rbin simulates binary responses under the marginal model specification \[\begin{equation} \Pr(Y_{it}=1 |\mathbf {x}_{it})=F(\beta_{t0} +\boldsymbol {\beta}^{\prime}_{t} \mathbf {x}_{it}) \tag{3.5} \end{equation}\] where \(\beta_{t0}\) is the intercept at measurement occasion \(t\), \(\boldsymbol \beta_t\) is the regression parameter vector associated with the covariates at measurement occasion \(t\) and \(F\) is a cdf. The threshold \[Y_{it}=1 \Leftrightarrow U^{B}_{it} \leq \beta_{t0} + 2 \boldsymbol {\beta}^{\prime}_t \mathbf {x}_{it},\] generates clustered binary responses that satisfy the marginal model (3.5), where \[\begin{equation} U^{B}_{it}=\boldsymbol {\beta}^{\prime}_t \mathbf {x}_{it}+e^{B}_{it}, \tag{3.6} \end{equation}\] and where \(\{e^{B}_{it}:i=1,\ldots,N \text{ and } t=1,\ldots,T\}\) are random variables such that:
\(e^{B}_{it} \sim F\) for all \(i\) and \(t\).
\(e^{B}_{i_1t_1}\) and \(e^{B}_{i_2t_2}\) are independent random variables provided that \(i_1 \neq i_2\).
For each subject \(i\), the association structure among the clustered binary responses \(\{Y_{it}:t=1,\ldots,T\}\) depends on the pairwise bivariate distributions and correlation matrix of \(\{e^{B}_{it}:t=1,\ldots,T\}\). If the random variables \(\{e^{B}_{it}:t=1,\ldots,T\}\) are independent then so are \(\{Y_{it}:t=1,\ldots,T\}\).
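To see where the factor 2 in this threshold comes from, substitute (3.6) into it: \[ Y_{it}=1 \Leftrightarrow \boldsymbol{\beta}^{\prime}_t\mathbf{x}_{it}+e^{B}_{it}\leq \beta_{t0}+2\boldsymbol{\beta}^{\prime}_t\mathbf{x}_{it} \Leftrightarrow e^{B}_{it}\leq \beta_{t0}+\boldsymbol{\beta}^{\prime}_t\mathbf{x}_{it}, \] so that \(\Pr(Y_{it}=1|\mathbf{x}_{it})=F(\beta_{t0}+\boldsymbol{\beta}^{\prime}_t\mathbf{x}_{it})\), which is the marginal model (3.5).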
Example 3.5 (Simulation of clustered binary responses conditional on a marginal probit model using NORTA method) Suppose the goal is to simulate clustered binary responses from the marginal probit model \[\Pr(Y_{it}=1 |\mathbf{x}_{it})=\Phi(0.2x_i)\] where \(N=100\), \(T=4\) and \(\mathbf{x}_{it}=x_i\overset{iid}{\sim} N(0,1)\) for all \(i\) and \(t\). For the association structure, assume that the random variables \(\mathbf{e}_i^{B}=(e^{B}_{i1},e^{B}_{i2},e^{B}_{i3},e^{B}_{i4})^{\prime}\) in (3.6) are iid random vectors from the tetra-variate normal distribution with mean vector the zero vector and covariance matrix the correlation matrix \(\mathbf{R}\) given by \[\begin{equation} \mathbf{R}=\left( {\begin{array}{*{20}c} 1.00 & 0.90 & 0.90 & 0.90 \\ 0.90 & 1.00 & 0.90 & 0.90 \\ 0.90 & 0.90 & 1.00 & 0.90 \\ 0.90 & 0.90 & 0.90 & 1.00 \end{array} } \right). \tag{3.7} \end{equation}\] This association configuration defines an exchangeable correlation matrix for the clustered binary responses, i.e. \(\text{corr}(Y_{it_1},Y_{it_2})=\rho_i\) for all \(i\) and \(t\). The strength of the correlation (\(\rho_i\)) is decreasing as the absolute value of the time-stationary covariate \(x_i\) increases. For example, \(\rho_i=0.7128\) when \(x_{i}=0\) and \(\rho_i=0.7\) when \(x_i=3\) or \(x_i=-3\). Therefore, a strong exchangeable correlation pattern for each subject that does not differ much across subjects is implied with this configuration.
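As a quick numerical side check (not part of the vignette code, and assuming the mvtnorm package is available), the value \(\rho_i=0.7128\) at \(x_i=0\) can be reproduced from the latent correlation 0.9:

library(mvtnorm)
# P(Y_{t1} = 1, Y_{t2} = 1 | x = 0): both latent errors below 0, latent correlation 0.9
p11 <- pmvnorm(upper = c(0, 0), corr = matrix(c(1, 0.9, 0.9, 1), 2, 2))
# at x_i = 0, Pr(Y_t = 1) = 0.5, so corr(Y_{t1}, Y_{t2}) = (p11 - 0.25) / 0.25
as.numeric(p11 - 0.25) / 0.25  # approximately 0.7128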
# sample size
sample_size <- 100
# cluster size
cluster_size <- 4
# intercept
beta_intercepts <- 0
# regression parameter associated with the covariate
beta_coefficients <- 0.2
# time-stationary covariate
x <- rep(rnorm(sample_size), each = cluster_size)
latent_correlation_matrix <- toeplitz(c(1, 0.9, 0.9, 0.9))
# simulation of clustered binary responses
simulated_binary_dataset <- rbin(clsize = cluster_size, intercepts = beta_intercepts,
    betas = beta_coefficients, xformula = ~x, cor.matrix = latent_correlation_matrix,
    link = "probit")
library("gee")
binary_gee_model <- gee(y ~ x, family = binomial("probit"), id = id, data = simulated_binary_dataset$simdata)
#> Beginning Cgee S-function, @(#) geeformula.q 4.13 98/01/27
#> running glm to get initial regression estimate
#> (Intercept) x
#> 0.1315121 0.2826005
# checking the estimated coefficients
summary(binary_gee_model)$coefficients
#> Estimate Naive S.E. Naive z Robust S.E. Robust z
#> (Intercept) 0.1315121 0.06399465 2.055048 0.1106696 1.188331
#> x 0.2826006 0.07191931 3.929412 0.1270285 2.224703
Example 3.6 (Simulation of clustered binary responses under a conditional marginal logit model without utilizing the NORTA method) Consider now simulation of correlated binary responses from the marginal logit model \[\begin{equation*} \Pr(Y_{it}=1 |\mathbf{x}_{it})=F(0.2x_i) \end{equation*}\] where \(F\) is the cdf of the standard logistic distribution (mean \(=0\) and variance \(=\pi^2/3\)), \(N=100\), \(T=4\) and \(\mathbf{x}_{it}=x_i\overset{iid}{\sim} N(0,1)\) for all \(i\) and \(t\). This is similar to the marginal model configuration in Example 3.5 except from the link function. For the dependence structure, assume that the correlation matrix of \(\mathbf{e}_i^{B}=(e^{B}_{i1},e^{B}_{i2},e^{B}_{i3},e^{B}_{i4})^{\prime}\) in (3.6) is equal to the correlation matrix \(\mathbf{R}\) defined in (3.7). To simulate \(\mathbf{e}_i^{B}\) without utilizing the NORTA method, one can employ the tetra-variate extreme value distribution (Gumbel 1958). In particular, this is accomplished by setting \(\mathbf{e}_i^{B}=\mathbf{U}_i-\mathbf{V}_i\) for all \(i\), where \(\mathbf{U}_i\) and \(\mathbf{V}_i\) are independent random vectors from the tetra-variate extreme value distribution with dependence parameter equal to \(0.9\), that is \[\Pr\left(U_{i1}\leq u_{i1},U_{i2}\leq u_{i2},U_{i3}\leq u_{i3},U_{i4}\leq u_{i4}\right)=\exp\left\{-\left[\sum_{t=1}^4 \exp{\left(-\frac{u_{it}}{0.9}\right)}\right]^{0.9}\right\}\] and \[\Pr\left(V_{i1}\leq v_{i1},V_{i2}\leq v_{i2},V_{i3}\leq v_{i3},V_{i4}\leq v_{i4}\right)=\exp\left\{-\left[\sum_{t=1}^4 \exp{\left(-\frac{v_{it}}{0.9}\right)}\right]^{0.9}\right\}.\] It follows that \(e_{it}^{B}\sim F\) for all \(i\) and \(t\) and \(\textrm{corr}(\mathbf{e}_i^{B})=\mathbf{R}\) for all \(i\).
# simulation of epsilon variables
library(evd)
simulated_latent_variables1 <- rmvevd(sample_size, dep = sqrt(1 - 0.9), model = "log",
    d = cluster_size)
simulated_latent_variables2 <- rmvevd(sample_size, dep = sqrt(1 - 0.9), model = "log",
    d = cluster_size)
simulated_latent_variables <- simulated_latent_variables1 - simulated_latent_variables2
# simulation of clustered binary responses
simulated_binary_dataset <- rbin(clsize = cluster_size, intercepts = beta_intercepts,
    betas = beta_coefficients, xformula = ~x, rlatent = simulated_latent_variables)
binary_gee_model <- gee(y ~ x, family = binomial("logit"), id = id, data = simulated_binary_dataset$simdata)
#> (Intercept)           x
#>  0.04146261  0.09562709
# checking the estimated coefficients
summary(binary_gee_model)$coefficients
#> Estimate Naive S.E. Naive z Robust S.E. Robust z
#> (Intercept) 0.04146261 0.1008516 0.4111249 0.1790511 0.2315686
#> x 0.09562709 0.1107159 0.8637160 0.1949327 0.4905647
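As an optional sanity check (added; not part of the original example), one can verify empirically that the simulated latent residuals in Example 3.6 have approximately standard logistic margins and an exchangeable correlation close to \(0.9\):
# empirical correlation matrix of the latent residuals (target: 0.9 off-diagonal)
round(cor(simulated_latent_variables), 2)
# empirical variances rescaled by 3/pi^2 (target: 1 for standard logistic margins)
round(apply(simulated_latent_variables, 2, var) * 3/pi^2, 2)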
To achieve simulation of clustered binary, ordinal and nominal responses under no marginal model specification, perform the following steps:
Based on the marginal probabilities calculate the intercept of a marginal probit model for binary responses (see Example 3.7) or the category-specific intercepts of a cumulative probit model (see third example in Touloumis 2016) or of a baseline-category logit model for multinomial responses (see Example 3.8).
Create a pseudo-covariate, say x, of length equal to the cluster size (clsize) times the desired number of clusters of simulated responses (say R), that is, of length clsize * R. This step is required in order to specify the desired number of clustered responses.
Set betas = 0 in the core functions rbin (see Example 3.7) or rmult.clm, or set 0 all values of the beta argument that correspond to the category-specific parameters in the core function rmult.bcl (see Example 3.8).
Set xformula = ~x.
Run the core function to obtain realizations of the simulated clustered responses.
Example 3.7 (Simulation of clustered binary responses without covariates) Suppose the goal is to simulate \(5000\) clustered binary responses with \(\Pr(Y_{t}=1)=0.8\) for all \(t=1,\ldots,4\). For simplicity, assume that the clustered binary responses are independent.
sample_size <- 5000
cluster_size <- 4
beta_intercepts <- qnorm(0.8)
# pseudo-covariate
x <- rep(0, each = cluster_size * sample_size)
latent_correlation_matrix <- diag(cluster_size)
# simulation of clustered binary responses
simulated_binary_dataset <- rbin(clsize = cluster_size, intercepts = beta_intercepts,
    betas = 0, xformula = ~x, cor.matrix = latent_correlation_matrix, link = "probit")
# simulated marginal probabilities
colMeans(simulated_binary_dataset$Ysim)
#> t=1 t=2 t=3 t=4
#> 0.8024 0.7972 0.7948 0.8088
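The ordinal case proceeds analogously via the core function rmult.clm. The following sketch is not part of the original text; it reuses sample_size and cluster_size from Example 3.7 and simulates independent clustered ordinal responses with marginal category probabilities \(0.1\), \(0.2\), \(0.3\) and \(0.4\) under a marginal cumulative probit model, so the category-specific intercepts are the standard normal quantiles of the cumulative probabilities \(0.1\), \(0.3\) and \(0.6\).
# category-specific intercepts of the marginal cumulative probit model
ordinal_intercepts <- qnorm(c(0.1, 0.3, 0.6))
# pseudo-covariate
x <- rep(0, each = cluster_size * sample_size)
simulated_ordinal_dataset <- rmult.clm(clsize = cluster_size, intercepts = ordinal_intercepts,
    betas = 0, xformula = ~x, cor.matrix = diag(cluster_size), link = "probit")
# simulated marginal probabilities
apply(simulated_ordinal_dataset$Ysim, 2, table)/sample_size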
Example 3.8 (Simulation of clustered nominal responses without covariates) Suppose the aim is to simulate \(N=5000\) clustered nominal responses with \(\Pr(Y_{t}=1)=0.1\), \(\Pr(Y_{t}=2)=0.2\), \(\Pr(Y_{t}=3)=0.3\) and \(\Pr(Y_{t}=4)=0.4\), for all \(i\) and \(t=1,\ldots,3\). For the sake of simplicity, we assume that the clustered responses are independent.
categories_no <- 4
cluster_size <- 3
betas <- c(log(0.1/0.4), 0, log(0.2/0.4), 0, log(0.3/0.4), 0, 0, 0)
x <- rep(0, each = cluster_size * sample_size)
latent_correlation_matrix <- kronecker(diag(cluster_size), diag(categories_no))
simulated_nominal_dataset <- rmult.bcl(clsize = cluster_size, ncategories = categories_no,
    betas = betas, xformula = ~x, cor.matrix = latent_correlation_matrix)
apply(simulated_nominal_dataset$Ysim, 2, table)/sample_size
#> t=1 t=2 t=3
#> 1 0.1000 0.0996 0.1036
citation("SimCorMultRes")
To cite R package SimCorMultRes in publications, please use:
Touloumis, A. (2016). Simulating Correlated Binary and Multinomial
Responses under Marginal Model Specification: The SimCorMultRes
Package. The R Journal 8:2, 79-91.
A BibTeX entry for LaTeX users is
@Article{,
title = {Simulating Correlated Binary and Multinomial Responses under
Marginal Model Specification: The SimCorMultRes Package},
author = {Anestis Touloumis},
journal = {The R Journal},
pages = {79-91},
url = {https://journal.r-project.org/archive/2016/RJ-2016-034/index.html},
year = {2016},
volume = {8},
number = {2},
}
Gumbel, E. J. 1958. Statistics of Extremes. Columbia University Press, New York.
Touloumis, A. 2016. "Simulating Correlated Binary and Multinomial Responses under Marginal Model Specification: The SimCorMultRes Package." The R Journal 8 (2): 79–91. https://journal.r-project.org/archive/2016/RJ-2016-034/index.html. | CommonCrawl |
\begin{definition}[Definition:RSA 129]
'''RSA $129$''' is the name given to the semiprime:
:$114 \, 381 \, 625 \, 757 \, 888 \, 867 \, 669 \, 235 \, 779 \, 976 \, 146 \, 612 \, 010 \, 218 \, 296 \, 721 \, 242 \, 362 \, 562 \, 561 \, 842 \, 935 \, 706 \, 935 \, 245 \, 733 \, 897 \, 830 \, 597 \, 123 \, 563 \, 958 \, 705 \, 058 \, 989 \, 075 \, 147 \, 599 \, 290 \, 026 \, 879 \, 543 \, 541$
Its factors are:
:$3 \, 490 \, 529 \, 510 \, 847 \, 650 \, 949 \, 147 \, 849 \, 619 \, 903 \, 898 \, 133 \, 417 \, 764 \, 638 \, 493 \, 387 \, 843 \, 990 \, 820 \, 577$
and:
:$32 \, 769 \, 132 \, 993 \, 266 \, 709 \, 549 \, 961 \, 988 \, 190 \, 834 \, 461 \, 413 \, 177 \, 642 \, 967 \, 992 \, 942 \, 539 \, 798 \, 288 \, 533$
\end{definition} | ProofWiki |
Dirichlet's divisor problem via Lambert series
In Über die Bestimmung asymptotischer Gesetze in der Zahlentheorie, Dirichlet proved his theorem on the asymptotic behaviour of the divisor function using a Lambert series: let $d_n = d(n)$ denote the number of the divisors of $n$; then Lambert (actually this is due to Euler) observed that $$ f(z) = \sum_{n=1}^\infty d_n z^n = \sum_{n=1}^\infty \frac{z^n}{1-z^n} . $$ This series converges for $|z| < 1$, and diverges for $z = 1$.
Setting $z = e^{-t}$ we obtain $$ g(t) = \sum_{n=1}^\infty \frac{e^{-nt}}{1-e^{-nt}} = \sum_{n=1}^\infty \frac{1}{e^{nt}-1} . $$
Dirichlet writes that "expressing this series by a definite integral one easily finds" that $$ g(t) \sim \frac1t \log \frac1t + \frac{\gamma}t $$ as $t \to 0$, where $\gamma$ is Euler's constant.
Dirichlet then claims that the asymptotic behaviour of $g(t)$ would imply that $d_n$ is, on the average, equal to $\log n + 2 \gamma$, which in turn implies that $d_1 + d_2 + \ldots + d_n \approx (n + \frac12) \log n + (2\gamma-1)n.$ He mentions that he has used the integral expressions for $\Gamma(k)$ and its derivative $\Gamma'(k)$ for deriving the first property.
Knopp (Über Lambertsche Reihen, J. Reine Angew. Math. 142) claims that Dirichlet's proof was "heuristic". I find that hard to believe, and I am convinced that Dirichlet's sketch can be turned into a valid proof by someone who knows the tools of the trade. So here are my questions:
How did Dirichlet express "this series by a definite integral" and derive the asymptotic expression for $g(t)$?
Let me remark that Endres and Steiner (A new proof of the Voronoi summation formula) use Voronoi summation for proving the sharper estimate $$ g(t) \sim \frac1t \log \frac1t + \frac{\gamma}t + \frac14 + O(t) $$ as $t \to 0$. But this is not "easily found".
How did Dirichlet transform his knowledge about the asymptotic behaviour of $\sum d_n e^{-nt}$ as $t \to 0$ into an average behaviour of $d_n$? This smells like a Tauberian result, but I'm not fluent enough in analytic number theory to see how easy this is.
nt.number-theory analytic-number-theory
Franz Lemmermeyer
For part 1 of the question, he would most likely have used the Euler-Maclaurin summation formula
$$ \sum_{n=1}^{\infty}\frac{1}{e^{nt} - 1} = \int_{1}^{\infty}\frac{dx}{e^{xt} - 1} + \frac{1}{2}\frac{1}{e^t - 1} + \int_{1}^{\infty}S(x)\left(\frac{d}{dx}\frac{1}{e^{xt} - 1}\right)dx $$
with $S(x)$ the sawtooth function. It is easy to obtain the leading term, because it comes from the first integral
$$ \int_{1}^{\infty}\frac{dx}{e^{xt} - 1} = \frac{1}{t}\int_{t}^{\infty}\frac{du}{e^u - 1} $$
by the change of variable $u = xt$. We have
$$ \int_{t}^{\infty}\frac{du}{e^u - 1} = \int_{t}^{1}\frac{du}{e^u - 1} + \int_{1}^{\infty}\frac{du}{e^u - 1}, $$ and $$ \frac{1}{e^u - 1} = \frac{1}{u} + \left(\frac{1}{e^u - 1} - \frac{1}{u}\right) $$ on $0 \leq u \leq 1$, so that $$ g(t) = \frac{1}{t}\log\left(\frac{1}{t}\right) + O\left(\frac{1}{t}\right). $$ But to get the second term looks harder, for the integral with the sawtooth function contributes to that term. To go further, one can integrate by parts in that integral, which is the standard approach, or write it as a sum of integrals over the intervals from $n$ to $n+1$. Also the sawtooth function has a simple Fourier expansion, which may help. I should remark that the integral with the sawtooth function is $O(1/t)$ as one sees when bounding it by passing the absolute value under the integral sign and using $|S(x)| \leq 1/2$. Anyway, I am pretty sure that part 1 is doable with some work.
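(Added remark, not part of the original answer: the claimed main terms are easy to check numerically, e.g. in R, using that digamma(1) equals $-\gamma$.)

g <- function(t, nmax = 10^5) sum(1/(exp((1:nmax) * t) - 1))
t <- 0.01
g(t)
log(1/t)/t - digamma(1)/t          # (1/t) log(1/t) + gamma/t
log(1/t)/t - digamma(1)/t + 1/4    # including the 1/4 term mentioned in the question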
Part 2 looks trickier. The Lambert series expansion
$$ \sum_{n=1}^{\infty}(1 + \mu(n))e^{-nt} = \frac{e^{-t}}{1 - e^{-t}} + e^{-t} = \frac{1}{t} + \frac{1}{2} + O(|t|) $$
is a little nicer than the one for the divisor function; not only are the coefficients nonnegative, but they are also bounded. Supposing that we have a Tauberian theorem strong enough to yield
$$ \sum_{n \leq x}(1 + \mu(n)) \sim x, $$
we would then have proved the Prime Number Theorem from the Lambert series. It seems a little unlikely that Dirichlet had such a strong Tauberian theorem; would he not have proved the Prime Number Theorem if he had? Of course, this argument by analogy is not conclusive, since the two situations differ by a factor of $\log(x)$.
We shall never know what argument Dirichlet had, and he may have found an approach that did not use a Tauberian theorem, perhaps exploiting special properties of the divisor function. It is worth noting that Voronoi's first proof of the error term $O(x^{1/3}\log(x))$ for the divisor problem was based on the Euler-Maclaurin summation formula.
Marius Overholt
$\begingroup$ In his articles, Dirichlet claimed to have proved the prime number theorem. $\endgroup$ – Franz Lemmermeyer Nov 9 '10 at 5:58
$\begingroup$ @Franz Lemmermeyer may you give an explicit reference of Dirichlet claim, please. $\endgroup$ – juan Jul 23 '16 at 21:32
$\begingroup$ @Marius Overholt the equality $$\sum_{n=1}^\infty (1+\mu(n))e^{-nt}=\frac{e^{-t}}{1-e^{-t}}+e^{-t}$$ is not true. In what sense do you mean this? $\endgroup$ – juan Jul 28 '16 at 7:59
I'm 99% sure this is what Dirichlet did :
$$g(t) = \sum_{n=1}^\infty d_n e^{-nt}$$ with $\Gamma(s) n^{-s} = \int_0^\infty t^{s-1} e^{-nt} dt$ we get $$\Gamma(s)\zeta(s)^2 = \int_0^\infty t^{s-1} g(t) dt$$
the LHS has a dominating pole at $s = 1$ where $\Gamma(s) \sim 1-\gamma\,(s-1)$ and $\zeta(s) \sim \frac{1}{s-1}+\gamma$ and$$\Gamma(s)\zeta(s)^2 \sim \frac{1}{(s-1)^2} + \frac{\gamma}{s-1}$$
so that $$\Gamma(s)\zeta(s)^2-\frac{1}{(s-1)^2} - \frac{\gamma}{s-1}=\int_0^\infty t^{s-1} (g(t)+\log t\frac{1_{t < 1} }{t}- \gamma\frac{ 1_{t < 1}}{t} )dt$$ is holomorphic on $Re(s) \ge 1$, i.e. as $t \to 0^+$ :
$$g(t) = \frac{- \log t}{t}+\frac{ \gamma}{t} + o(t^{-1})$$
reuns
$\begingroup$ the last one needing a Tauberian theorem (but Dirichlet wasn't very sure about those, probably because of his lack of complex analysis, he even claimed to have proved the prime number theorem, making the same "mistake") $\endgroup$ – reuns Sep 27 '16 at 20:32
\begin{document}
\listoftodos
\todo[inline,color=green]{ToDos for last round of corrections: (done=lightgray)} \todo[inline,color=lightgray]{Are the symbols consistent? subgraphs/subsets $\checkmark$, isomorphy $\checkmark$, neighbourhood $\checkmark$} \todo[inline,color=lightgray]{$k$-convergent $\to$ clique convergent, same with divergent. $\checkmark$} \todo[inline,color=lightgray]{Use the ``i.\,e.,'' command everywhere $\checkmark$} \todo[inline,color=lightgray]{Which spelling do we use? - \asays{British with Oxford commas}} \todo[inline,color=lightgray]{Do Sections and Theorems have the appropriate prerequisites? - e.\,g.\ locally cyclic, connected, triangularly simply connected} \todo[inline,color=lightgray]{Is information actually where we say it is, for example "in the next section, ..."} \todo[inline,color=lightgray]{Make neighbourhoods sets and always use $\subseteq$, never $\le$ $\checkmark$} \todo[inline,color=lightgray]{Check usage: use for cliques: $Q$ (and $R$); use for triangular-shaped graphs : $S$ and $T$. $\checkmark$} \todo[inline,color=lightgray]{inline fracs to $a/b$ $\checkmark$} \todo[inline,color=lightgray]{paths in simplicial complexes $\to$ (simplicial) walks $\checkmark$} \todo[inline,color=lightgray]{are all definitions bold $\checkmark$} \todo[inline,color=lightgray]{Make path notation consistent: $P_1\ldots P_h$ instead of $P_1,\ldots,P_h$.$\checkmark$} \todo[inline,color=lightgray]{Check consistent use of ... and $\ldots$ $\checkmark$} \todo[inline,color=lightgray]{replace $p:G\to H$ by $p\colon G\to H$ $\checkmark$} \todo[inline,color=lightgray]{Double check whether Thm A and B are phrased optimally} \todo[inline,color=lightgray]{Check whether Thm A and B are the same in both versions.} \todo[inline,color=lightgray]{Check in which places the definition of triangularly simply connected needs to be referenced $\checkmark$} \todo[inline,color=lightgray]{add MSC Classification} \todo[inline,color=lightgray]{Change all titles to use capital letters $\checkmark$} \todo[inline,color=lightgray]{Formating, formulas should not include linebreaks etc} \todo[inline,color=lightgray]{ldots to ...} \todo[inline,color=lightgray]{``walk equivalence'' or ``walk homotopy''?} \todo[inline,color=lightgray]{Replace ``compact surface'' by ``closed surface''} \todo[inline,color=lightgray]{Replace ``cycle'' by ``circle''}
\title{Characterising Clique Convergence for Locally Cyclic Graphs of Minimum Degree $\mathemph{\delta\ge 6}$}
\begin{abstract} The clique graph $kG$ of a graph $G$ has as its vertices the cliques (maximal complete subgraphs) of $G$, two of which are adjacent in $kG$ if they have non-empty intersection in $G$. We say that $G$ is clique convergent\ if $k^nG\cong k^m G$ for some $n\not= m$, and that $G$ is clique divergent\ otherwise.
We completely characterise the clique convergent\ graphs in the class of (not necessarily finite) locally cyclic graphs of minimum degree $\delta\ge 6$, showing that for such graphs clique divergence is a global phenomenon, dependent on the existence of large substructures. More precisely, we establish that such a graph is clique divergent\ if and only if its universal triangular cover contains arbitrarily large members from the family of so-called ``\triangularshaped\ graphs''.
\end{abstract}
\blfootnote{\textbf{Keywords:} iterated clique graphs, clique divergence, clique dynamics, locally cyclic graphs, hexagonal lattice, triangular graph covers, infinite graphs, triangulated surfaces.} \blfootnote{\textbf{2010 Mathematics Subject Classification:} 05C69, 57Q15, 57M10, 37E25.}
\section{Introduction}
Given a (not necessarily finite) simple graph $G$, a \textbf{clique} $Q\subseteq G$ is an inclusion~maxi\-mal complete subgraph. The \textbf{clique graph} $\boldsymbol{kG}$ has as its vertices the cliques~of~$G$,\nolinebreak\space two~of which are adjacent in $kG$ if they have a non-empty intersection in $G$. The \mbox{operator~$\boldsymbol k$} is~known as the \textbf{clique graph operator} and the behaviour of the sequence~$G$, $kG$, $k^2G,\ldots$ is the \textbf{clique dynamics} of $G$. The graph is \textbf{clique convergent} if the clique dynamics cycles eventually and it is \textbf{clique divergent} otherwise. It is an ongoing~endeavour to understand which graph properties lead to convergence and di\-vergence, respectively. However, since clique convergence is known to be undecidable in general \cite{https://doi.org/10.1002/jgt.22622}, this investigation often restricts to certain graph classes, such as graphs of low degree \cite{villarroel2022clique}, circular arc graphs \cite{lin2010clique}, or locally $H$ graphs (e.\,g.\ locally cyclic graphs \cite{larrion2000locally} or shoal graphs \cite{LARRION201686}).
The focus of the present article is on \textbf{locally cyclic} graphs, that is, graphs for which the neighbourhood of each vertex induces a cycle.
Such graphs can be interpreted~as~tri\-angulations of surfaces (always to be understood as ``without boundary''), and it was~recognized early that the study of their clique dyna\-mics can be informed by topological considerations.
So it is known~that each closed surface (i.\,e.,\ compact and without boundary) has a clique divergent triangulation \cite{larrion2006graph}, but that convergent triangulations exist on all closed surfaces of negative Euler characteristic \cite{larrion2003clique}. It has furthermore been conjectured that there are no convergent triangulations on closed surfaces of non-negative Euler characteristic (for a precise statement one requires minimum degree $\delta\ge 4$; see \cref{conj:non_neg_Euler_diverges}). For example, the 4-regular \cite{escalante1973iterierte} and 5-regular \cite{pizana2003icosahedron} triangulations of the sphere (i.\,e.,\ the octahedral and icosahedral graph) are clique divergent; as is any 6-regular triangulation of the torus or Klein bottle \cite{larrion1999clique,larrion2000locally}.\nolinebreak\space On the other hand, a triangulation of minimum degree $\delta\ge 7$ (necessarily of a closed sur\-face of higher genus)
is clique convergent\ \cite{larrion2002whitney}. Triangulations that mix degrees above and below six are still badly understood.
Baumeister \& Limbach \cite{BAUMEISTER2022112873} broadened these investigations to triangulations of non-compact surfaces, that is, to infinite locally cyclic graphs.
They gave an explicit description of $k^n G$ in terms of so-called \textbf{\triangularshaped\ subgraphs} of $G$ (see \cref{Fig_Delta_m}), where $G$ is a triangulation of minimum degree $\delta\ge 6$ of a (not necessarily compact)~simply connected surface
(see \cref{sec:recall_old_paper} for details).
\begin{figure}
\caption{The \triangularshaped\ graphs\ $\Delta_m$ for $m\in \{0,\ldots,4\}$.}
\label{Fig_Delta_m}
\end{figure}
The goal of this article is to bring the investigation of \cite{BAUMEISTER2022112873} to a satisfying conclusion: we apply their explicit construction of $k^n G$ to completely characterise the clique convergent\ triangulations in the class of (not necessarily finite) locally cyclic graphs of minimum degree $\delta\ge 6$.\nolinebreak\space
We thereby answer the open questions from Section 9 of \cite{BAUMEISTER2022112873}.
Our first main result concerns locally cyclic graphs that are \textbf{triangularly simply~connected}, that is, they correspond to triangulations of simply connected surfaces (see~\cref{sec:proof_of_B_covers} for a rigorous definition).
We identify the clique divergence of these graphs as a consequence of the existence of arbitrarily large \triangularshaped\ subgraphs.
\begin{theoremX}{A}[Characterisation theorem for triangularly simply connected graphs]
\label{thm:A}
\hypertarget{thm:A} A triangularly simply connected locally cyclic graph of minimum degree $\delta\geq 6$ is clique divergent\ if and only if it contains arbitrarily large \triangularshaped\ subgraphs. \end{theoremX}
The difficulty in proving \thmA\ lies in establishing divergence for a sequence~of \textit{infinite} graphs.
Divergence is usually shown by observing the~divergence of some~numerical graph parameter, such as the vertex count or graph diameter. As our graphs~are potentially infinite, this fails since the straightforward quantities might be infinite to~begin with. The quest then lies mainly in identifying an often more contrived graph~invar\-iant which is still finite yet unbounded.
As a consequence of \thmA\ we find that the 6-regular triangulation of the Euclidean plane (aka.\ the hexagonal lattice) is clique divergent.
By applying \thmA\ to the universal triangular cover (see \cref{sec:proof_of_B_covers}), we obtain the following more general result.
\begin{theoremX}{B}[General characterisation theorem]
\label{thm:B}
\hypertarget{thm:B} A (not necessarily finite) connected locally cyclic graph of minimum degree $\delta\geq 6$ is clique divergent\ if and only if its universal triangular cover contains arbitrarily large \triangularshaped\ subgraphs. \end{theoremX}
The ``only if'' direction of \thmB\ was supposedly proven in \cite{BAUMEISTER2022112873}, but the proof~contains a gap, which we close in \cref{sec:proof_of_B}.
As a consequence of \thmB, a triangulation of minimum degree $\delta\ge 6$ of a closed surface is clique divergent if and only if it is 6-regular (cf.\ \cite[Lemma 8.10]{BAUMEISTER2022112873}).
We mention two further recent results on clique dynamics that are in a similar spirit. In 2017, Larrión, Piza\~na, and Villarroel-Flores \cite{larrion2017strong} showed that the clique operator preserves (finite) triangular graph bundles, which are a generalisation of finite triangular covering maps. Also, just recently in 2022, Villarroel-Flores \cite{villarroel2022clique} showed that among the (finite) connected graphs with maximum degree at most four, the octahedral graph is the only one that is clique divergent.
{\color{lightgray}
\iffalse
\hrulefill
The regular spherical cases were studied early: the octahedral and icosahedral graph have a divergent clique dynamics \cite{escalante???,pizana2003icosahedron}.
More formally, a graph is locally cyclic if the neighbourhood of each vertex induces a cycle. The \textbf{triangular complex} $\K G$ of a graph $G$ is the simplicial complex obtained from $G$ by filling each 3-cycle with a 2-simplex. For locally cyclic graphs $\K G$ forms an unbounded surface and the vertex degrees of $G$ dictate the classification of $\K G$ into spherical (degrees $\le 5$), Euclidean (degree $=6$) and hyperbolic (degree $\ge 7$).
Locally cyclic graphs of minimum degree $\delta\ge 4$ can be described as Whitney triangulations of surfaces and where investigated in \cite{larrion2003clique,larrion2002whitney,larrion2006graph}. The two regular locally cyclic graphs of degree $\delta=4$ \todo{cite} (the octahedral graph) and $\delta=5$ \cite{pizana2003icosahedron} (the icosahedral graph) were recognized as clique divergent. In 1999, Larrión and Neumann-Lara showed that some $6$-regular triangulations of the torus are clique divergent \cite{larrion1999clique} and, in 2000, they generalised this result to all $6$-regular locally cyclic graph \cite{larrion2000locally}. Furthermore, Larrión, Neumann-Lara, and Piza\~na \cite{larrion2002whitney} showed that graphs in which every open neighbourhood of a vertex has a girth of at least $7$ are clique convergent. Thus, locally cyclic graphs of minimum degree $\delta$ of at least $7$ are clique convergent.
\hrulefill
Locally cyclic graphs with a minimum degree $\delta$ of at least 4 can be described as Whitney triangulations of surfaces, which were investigated for example in \cite{larrion2003clique}, \cite{larrion2002whitney}, and \cite{larrion2006graph}. In 1999, Larrión and Neumann-Lara showed that some \mbox{$6$-regular} triangulations of the torus are clique divergent\enspace \cite{larrion1999clique} and, in 2000, they generalised this result to every \mbox{$6$-regular} locally cyclic graph \cite{larrion2000locally}. Furthermore, Larrión, Neumann-Lara, and Piza\~na \cite{larrion2002whitney} showed that graphs in which every open neighbourhood of a vertex has a girth of at least $7$ are clique convergent. Thus, locally cyclic graphs of minimum degree $\delta$ of at least $7$ are clique convergent.
In their 2022 paper \cite{BAUMEISTER2022112873}, Baumeister and Limbach explicitly constructed the sequence of iterated clique graphs of a triangularly simply connected locally cyclic graph with minimum degree $\delta\geq 6$. In the same paper, they argued that clique convergence of the universal triangular cover of a locally cyclic graph of minimum degree $\delta\geq 6$ implies the clique convergence of the graph itself. Unfortunately, this part of the paper (precisely the proof of Lemma 8.8) contains a gap, which we close as a part of this paper.
\fi
\iffalse
The clique graph $kG$ of a graph $G$ consists of the maximal complete subgraphs of $G$, called \textbf{cliques}, as its vertex set and the non-trivial intersections of those as its edge set, i.\,e.,\ $Q_1$ and $Q_2$ are adjacent in $kG$ if $Q_1\cap Q_2\neq \emptyset$. The operator $\mathemph k$ mapping $G$ to $kG$ is called the \textbf{clique graph operator}.
We study \textbf{locally cyclic graphs}, which means that the open neighbourhood of a vertex $v$ always induces a cycle.
\fi
}
\subsection{Structure of the Paper}
In \cref{sec:notation_background}, we recall the fundamental concepts and notations used throughout the~paper. In particular, in \cref{sec:recall_old_paper} we recall the geometric clique graph $G_n$ and the relevant statements of \cite{BAUMEISTER2022112873} that established the explicit description of $k^n G$ in terms of $G_n$.
In \cref{sec:proof_of_A}, we prove \thmA. To show that a sequence of infinite graphs~is~divergent, we identify a finite yet unbounded graph invariant $D(H)$ (see \eqref{eq:invariant}) based on the distribution of vertices of degree 26.
In \cref{sec:proof_of_B}, we prove \thmB. We extend the divergence results of \cref{sec:proof_of_A} to graphs that are not necessarily triangularly simply connected by exploiting that covering relations interact well with the clique operator and the geometric clique graph.
\cref{sec:conclusions} summarises the results and lists related open questions.
We also include an appendix which recalls helpful background theory for \cref{sec:proof_of_B}. \cref{appendix_a} gives a proof that triangular simple connectivity is preserved under the clique operator while \cref{appendix_b} focuses on the existence and uniqueness of a triangularly simply connected triangular cover for any connected graph.
\section{Notation and Background} \label{sec:notation_background}
\subsection{Basic Notation}
All graphs in this article are simple, non-empty and potentially infinite. If not stated otherwise, they are connected and locally finite. For a graph $G$ we write $V(G)$ and $E(G)$ to denote its vertex set and edge set, respectively. The adjacency relation is denoted by $\sim$.
We define the closed and the open neighbourhood of a set $U\subseteq V(G)$ of vertices as \begin{align*} N_G[U]&\coloneqq \{v\in V(G)\mid \text{$v\in U$ or $v\sim w$ for some $w\in U$}\}\text{ and}\\ N_G(U)&\coloneqq \{v\in V(G)\mid \text{$v\not\in U$ and $v\sim w$ for some $w\in U$}\}, \end{align*} respectively.
For $v\in V(G)$, we write $N_G[v]$ instead of $N_G[\{v\}]$ and $N_G(v)$ instead of $N_G(\{v\})$.
We write $\deg_G(v)\coloneqq|N_G(v)|$ for the degree of $v$, and $\dist_G(v,w)$ for the graph-theoretic distance between two vertices $v,w\in V(G)$. For $v\in V(G)$ and $U,U'\subseteq V(G)$ we write
$$\dist_G(v,U)\coloneqq \min_{w\in U} \dist_G(v,w)\quad \text{and}\quad \dist_G(U,U')\coloneqq \min_{\mathclap{v\in U,w\in U'}}\dist_G(v,w).$$
We write $G$-degree, $G$-neighbourhood, or $G$-distance to emphasize the graph with respect to which these quantities are computed. Finally, we use $\cong$ to denote isomorphy between graphs.
We write $\menge{N}\coloneqq \{1,2,3,\ldots\}$ and $\menge{N}_0\coloneqq \menge{N}\cup\{0\}$ for the sets of natural numbers without and with zero. We write $k\menge{N}$ and $k\menge{N}_0$ to denote multiples of $k$.
\subsection{Cliques, Clique Graphs, and Clique Dynamics}
A \textbf{clique} in $G$ is an inclusion maximal complete subgraph. The \textbf{clique graph} $kG$ has vertex set $V(kG)\coloneqq \{\text{cliques of $G$}\}$, and distinct cliques $Q,Q'\in V(kG)$ are adjacent in $kG$ if they have vertices in common. We consider $k$ as an operator, the \textbf{clique graph operator}, mapping a graph to its clique graph. By $k^n$, we denote its $n$-th iterate.
A sequence $G^0,G^1,G^2,\ldots$ of graphs is said to be \textbf{convergent} if it is eventually periodic, that is, if for some $r\in\menge{N}$ and all sufficiently large $n\in\menge{N}$ we have $G^n\cong G^{n+r}$.
The sequence is said to be \textbf{di\-vergent} otherwise. A graph $G$ is said to be \textbf{clique convergent} if the sequence $k^0 G,$ $k^1 G,$ $k^2 G,\ldots$ is convergent, and is called \textbf{clique divergent}~otherwise.
\subsection{Locally Cyclic Graphs, Triangular-Shaped Subgraphs, and the Geometric Clique Graph} \label{Sect_Hexagonal} \label{sec:recall_old_paper}
A graph $G$ is \textbf{locally cyclic} if the (open) neighbourhood of each vertex induces a cycle. In particular, a locally cyclic graph is locally finite. Such graphs can also be interpreted as triangulations of surfaces. We shall however use this geometric perspective~only~informally, and work with the purely graph theoretic definition given above. A fundamental example of a locally cyclic graph is the hexagonal triangulation of the Euclidean plane.
We use the class of \textbf{triangular-shaped graphs} $\mathemph {\Delta_m}$ from \cite{BAUMEISTER2022112873}, which are subgraphs of the hexagonal lattice, and the smallest five of which are depicted in \cref{Fig_Delta_m}. The parameter $m$ is called the \textbf{side length} of $\Delta_m$, and the boundary $\mathemph{\partial \Delta_m}$ is the subgraph of $\Delta_m$ that consists of the vertices of degree less than six and the edges that lie in only a single triangle (\cref{fig:pyramid_boundary}).
\begin{figure}
\caption{The \triangularshaped\ graph\ $\Delta_4$ and its boundary $\partial\Delta_4$.}
\label{fig:pyramid_boundary}
\end{figure}
In \cite{BAUMEISTER2022112873}, it was shown that the $n$-th iterated clique graph $k^n G$ of a triangularly simply connected locally cyclic graph $G$ of minimum degree $\delta\geq 6$ (also called ``pika'' in \cite{BAUMEISTER2022112873}) can be explicitly constructed based on \triangularshaped\ subgraphs\ of $G$ (see \cref{Def_theCliqueGraph} and \cref{res:structure_theorem} below). Hereby ``triangularly simply connected'' means ``triangulation of a simply connected surface'', but a precise definition is postponed until \cref{sec:universal_covers} (or see \cite{BAUMEISTER2022112873}). For now it suffices to use this term as a black box, merely to apply \cref{res:structure_theorem}. Note however that such a graph is in particular connected.
The explicit construction of $k^n G$ is captured by the following definition:
\begin{defi}[{\cite[Definition 4.1]{BAUMEISTER2022112873}}]\label{Def_theCliqueGraph}
Given a triangularly simply connected locally cyclic graph $G$ of minimum degree $\delta \ge 6$, its
\textbf{$\mathemph{n}$-th geometric clique graph} $\mathemph{G_n}$ ($n\ge 0$) has the following form:
\begin{enumerate}[label=(\roman*)]
\item the vertices of $G_n$ are the \triangularshaped\ subgraphs\ of $G$ of side length $m\le n$ with $m\equiv n\pmod 2$.
\item two distinct \triangularshaped\ subgraphs\ $S_1\cong \Delta_m$ and $S_2\cong \Delta_{m+s}$ with $s\ge 0$ are adjacent in $G_n$ if and only if any of the following applies:
\begin{enumerate}[label=\alph*.]
\item $s=0$ and $S_1 \subset \neig{G}{S_2}$ (or equivalently, $S_2 \subset \neig{G}{S_1}$).
\item $s=2$ and $S_1 \subset S_2$.
\item $s=4$ and $S_1\subset S_2\setminus\partial S_2$.
\item $s=6$ and $S_1=S_2\setminus N_G[\partial S_2]$.
\end{enumerate}
\end{enumerate} \end{defi}
Note that $G_0\cong G$. We then have
\begin{theo}[{\cite[Theorem 6.8 + Corollary 7.8]{BAUMEISTER2022112873}}] \label{res:structure_theorem} If $G$ is locally cyclic, triangularly simply connected and of minimum degree $\delta\ge 6$, then $G_n\cong k^n G$ for all $n\in\menge{N}_0$. \end{theo}
We refer to the four types of adjacencies listed in \cref{Def_theCliqueGraph} as adjacencies of~type $\pm0,\pm2,\pm4$ and $\pm 6$ respectively. For a \triangularshaped\ graph\ $S\in V(G_n)$ of side length $m$, we refer to a neighbour $T\in N_{G_n}(S)$ of side length $m+s$ as being of type $s\in\{-6,$ $-4,-2,\pm 0,+2,+4,+6\}$.
Some visualisations for the various configurations of \triangularshaped\ graphs\ that correspond to adjacency in $G_n$ can be seen in \cref{fig:large_enough_cases,fig:small_exceptions,fig:m0_cases} in the next section.
The following example demonstrates how \cref{res:structure_theorem} can be used to establish clique convergence in non-trivial cases:
\begin{ex} A locally cyclic and triangularly simply connected graph $G$ of minimum degree $\delta\ge 7$ does not contain any \triangularshaped\ graphs\ of side length $\ge 3$ (because such have vertices of degree six).
Hence, $k^n G\cong G_n= G_{n+2}\cong k^{n+2} G$ whenever $n\ge 1$. Such a graph $G$ is therefore clique convergent. \end{ex}
\iffalse
\begin{rem} \label{rem:aligned_twisted} For inclusions between two \triangularshaped\ graphs\ we can further
distinguish between \textbf{aligned} and \textbf{twisted} inclusion. This distinction is best understood informally as visualized in \cref{fig:twisted_vs_aligned}, but a formal definition can be given as follows: if $S\subseteq S'$ are triangular-shaped graphs of size $m$ and $m'$ respectively, we say that $S$ is aligned in $S'$ if there is an inclusion chain $S=S_1\subsetneq\cdots\subsetneq S_r=S'$ of triangular-shaped graphs $S_i\cong\Delta_{m_i}$ with $m_{i+1} = m_i+1$, and if $m_i=1$, then $S_i$ shares a boundary edge with $S_{i+1}$. Otherwise, we say that $S$ is twisted in $S'$. \end{rem}
\begin{figure}
\caption{A triangular-shaped subgraph in a triangular-shaped graph can be either aligned (left) or twisted (right).}
\label{fig:twisted_vs_aligned}
\end{figure}
\todo{Say that this was done in the first paper with the maps.}
\fi
\section{Proof of Theorem A} \label{sec:proof_of_A}
Throughout this section, we assume that $G$ is a locally cyclic graph that is triangularly simply connected and has minimum degree $\delta\geq 6$.
We can then apply \cref{res:structure_theorem} and investigate the dynamics of the sequence of geometric clique graphs $G_n$ in place of $k^n G$.
One direction of \thmA\ follows immediately from the definition of the geometric clique graph (\cref{Def_theCliqueGraph}): if all \triangularshaped\ subgraphs\ of $G$ are of side length $\le m\in 2\menge{N}$, then $G_m\cong G_{m+2}$, and the sequence cycles.
The remainder of this section is devoted to proving the other direction of \thmA: if $G$ contains arbitrarily large \triangularshaped\ subgraphs, then $G$ is clique divergent. For this, we identify a graph invariant that is both finite and unbounded for the sequence $G_n$ as $n\to\infty$, as long as $G$ contains arbitrarily large \triangularshaped\ subgraphs.
It turns out that a suitable graph invariant can be built from measuring distances between vertices of certain degrees. Curiously, the degree 26 plays a special role, and the following notation comes in handy:
\begin{align*} \mathemph{\textsc{\textbf{deg}}_{26}(H)}&\coloneqq \{v\in V(H)\mid \deg_H(v)=26\},\\ \mathemph{\overline{\textsc{\textbf{deg}}}_{26}(H)}&\coloneqq \{v\in V(H)\mid \deg_H(v)\not=26\}. \end{align*}
The corresponding graph invariant is the following:
\begin{equation}
\label{eq:invariant} \mathemph{D(H)}\coloneqq \max_{\mathclap{\substack{\\v\in V(H)}}}\, \dist_{H}\!\big(v,\irr{H}\big).
\end{equation}
The significance of the number 26 stems from the observation that most vertices of $G_n$ have $G_n$-degree $\le 26$; and have $G_n$-degree \textit{exactly} 26 only in very special circumstances that can be expressed as the existence of certain \triangularshaped\ subgraphs\ in $G$. This is proven in \cref{res:m_ge_6_degrees_26_iff} and \cref{res:m_eq_0_degrees_26_if}. Finitude and divergence of $D(G_n)$ as $n\to\infty$ are proven afterwards in \cref{lem_distanceupperbound} and \cref{lem_distancelowerbound}.
In the following, we generally consider $G_n$ only for even $n\in 2\menge{N}$, as this cuts down on the cases we need to investigate, and is still sufficient to show that $D(G_n)$ is unbounded. Note that each $S\in V(G_n)$ is then of even side length $m\in\{0,2,4,6,\ldots\}$.
\begin{lem} \label{res:m_ge_6_degrees_26_iff} Let $S\in V(G_n)$ be a \triangularshaped\ graph\ of side length $m\ge 6$. Then $\deg_{G_n}(S)\le 26$, with equality if and only if $S$ has a neighbour of type $+6$. \end{lem}
\Cref{res:m_ge_6_degrees_26_iff} actually holds unchanged for $m\ge 2$. Since we do not need these cases to prove \thmA, and since verifying them requires a distinct case analysis (because of ``twisted adjacencies'', cf.\ \cref{fig:small_exceptions}), we do not include them here.
\begin{proof}[Proof of \cref{res:m_ge_6_degrees_26_iff}]
\Cref{fig:large_enough_cases} shows all potential configurations of $S$ and a $G_n$-neighbour of $S$ according to \cref{Def_theCliqueGraph} (here we need $m\ge 6$, as there are exceptional ``twisted adjacencies'' for smaller $m$, see \cref{fig:small_exceptions}). In total this amounts to a degree of at most 26. In particular, if just one of the neighbours is missing, say the neighbour of type $+6$, then $S$ must have a $G_n$-degree of less than 26.
Conversely, one can verify that if $S$ has a neighbour of type $+6$, say $T\in N_{G_n}(S)$, then all other neighbours of types $-6,-4,-2,0,+2$, and $+4$ can be found as subgraphs of $T$. Therefore, all 26 neighbours are present and the degree is 26. \end{proof}
\begin{figure}
\caption{The 26 possible ways in which a \triangularshaped\ graph\ $S\in V(G_n)$ of side length $m\ge 6$ can be $G_n$-adjacent to another \triangularshaped\ graph\ $T\in V(G_n)$ of side length $m+s$, where $s\in\{-6,-4,-2,0,+2,+4,+6\}$. Two configurations may differ merely by a symmetry (one of the six ``reflections'' and ``rotations'' of a \triangularshaped\ graph), and we always show only a single configuration with the multiplication factor next to it indicating the number of equivalent configurations related by symmetry. Note that for the types $\pm2$, $\pm4$ and $\pm6$, the configurations must be accounted for twice in the $G_n$-degree of $S$: once with $S$ being the larger graph (in grey), and once with $S$ being the smaller graph (in black). Then $26=6+2\cdot(3+3+3+1)$.}
\label{fig:large_enough_cases}
\end{figure}
\begin{figure}
\caption{For $m\in\{2,4\}$ also exist the following ``twisted adjacencies''.}
\label{fig:small_exceptions}
\end{figure}
For $m=0$ only one direction holds, which is also sufficient for our purpose.
\begin{lem} \label{res:m_eq_0_degrees_26_if} Let $n\in 2\menge{N}$ and $s\in V(G_n)$ be a \triangularshaped\ graph\ of side length $m=0$ (that is, $s$ is a vertex of $G$). If $s$ has no $G_n$-neighbour of type $+6$, then $\deg_{G_n}(s)\not=26$. \end{lem}
\begin{proof} Clearly, $s$ has no neighbours of type $-6,-4$ or $-2$. The $G_n$-neighbours of type $0$ are exactly the vertices that are also adjacent to $s$ in $G$, that is, there are \textit{exactly} $\deg_G(s)$ many. The potential neighbours of type $+4$ and $+6$ are shown in \cref{fig:m0_cases},\nolinebreak\space which~amount to \textit{at most} eight neighbours of these types. Note that these can exist only if $\deg_G(s)=6$.
\begin{figure}
\caption{The eight possible neighbours of a \triangularshaped\ graph\ of side length $m=0$ of type $+4$ and $+6$. See the caption of \cref{fig:large_enough_cases} for an explanation of the multiplicities.}
\label{fig:m0_cases}
\end{figure}
It remains to count the neighbours of type $+2$, which will turn out to be \textit{exactly} $2\deg_G(s)$, independent of the specifics of $G$. Observe first that there can be two types of neighbours $T\in N_{G_n}(s)$ of type $+2$ distinguished by the $T$-degree of $s$, which is either two or four (cf.\ \cref{fig:m0_2_cases}). We shall say that these neighbours are of type $+2_2$ and $+2_4$ respectively.
In the following, an \textit{$r$-chain}
is an inclusion chain $s\subset \Delta\subset T$, where $\Delta$ is an $s$-incident triangle in $G$, and $T$ is a neighbour of $s$ of type $+2_r$. The following information can be read from \cref{fig:m0_2_cases}: a neighbour of $s$ of type $+2_r$ can be extended to an $r$-chain in exactly $n_r$ ways (where $n_2=1$ and $n_4=3$). Likewise, an $s$-incident triangle can be extended to an $r$-chain in exactly $n_r$ ways as well. By double counting, we find that $1/n_r$ times the number of $r$-chains equals both the number of $s$-incident triangles (which is exactly $\deg_G(s)$) and the number of neighbours of $s$ of type $+2_r$. In conclusion, the number of neighbours of $s$ of type $+2$ is \textit{exactly} $2\deg_G(s)$.
\begin{figure}
\caption{Row $+2_r$ shows the ways in which an inclusion $s\subset T$ (left; $T$ being a $G_n$-neighbour of $s$ of type $+2_r$) or an inclusion $s\subset \Delta$ (right; $\Delta$ being a triangle in $G$) extends to an $r$-chain in $n_r=r-1$ ways. }
\label{fig:m0_2_cases}
\end{figure}
Taking together all of the above, we count
$$\deg_{G_n}(s) \begin{cases}
= \deg_G(s) + 2\deg_G(s) = 3\deg_G(s) & \text{if $\deg_G(s)\not=6$} \\
\le 6+2\cdot 6+8=26 & \text{if $\deg_G(s)=6$} \end{cases}.$$
Since $26\not\equiv 0\pmod3$, if $\deg_G(s)\not=6$ we obtain $\deg_{G_n}(s)\not=26$ right away. If $\deg_G(s)=6$ and if there is no $G_n$-neighbour of type $+6$, then the maximal amount of 26 neighbours cannot have been attained, and $\deg_{G_n}(s)\not=26$ as well. \end{proof}
\iffalse
\subsection{Degrees in the geometric clique graph} \label{sec:proof_of_A_degrees}
Throughout this subsection fix $n\in 2\menge{N}$ and $S\in G_n$ with $S\cong \Delta_m$. Recall that~a~neighbour $T\in N_{G_n}(S)$ is of the form $T\cong \Delta_{m+s}$ for some $s\in\{-6,-4,-2,\pm 0,+2,+4,$ $+6\}$.
The imme\-diate goal is to verify \Cref{tab:degrees_in_Gn}, which gives exact values and upper bounds for the number of such neighbours of $S$ for a given combination of $m$ and $s$.
\begin{table}[ht!] \centering
\begin{tabular}{c||c|c|c|c} $s$ & $m=0$ & $m=2$ & $m=4$ & $m\ge 6$ \\ \hline \hline $\overset{\phantom.}{-6}$ & & & & 1 \\ \cline{4-4} $-4$ & & & 3 & 3 \\ \cline{3-3} $-2$ & & 6 & 7 & 6 \\ \cline{2-2} $\phantom+ 0$ & $\deg$ & $\le 9$ & $\le 6$ & $\le 6$ \\ $+2$ & $2\deg$ & $\le 7$ & $\le 6$ & $\le 6$ \\ $+4$ & $6^* \,/\, 0^{**}$ & $\le 3$ & $\le 3$ & $\le 3$ \\ $+6$ & $2^* \,/\, 0^{**}$ & $\le 1$ & $\le 1$ & $\le 1$ \\[0.3ex] \hline \hline $\overset{\phantom.}{\text{sum}}$ & $26^*$ & $\le 26$ & $\le 26$ & $\le 26$ \\ & $3\deg^{**}$ & & & \end{tabular}
\noindent {\small ${}^{\phantom**}$ only if $\deg=6$ \qquad\qquad\qquad\qquad\qquad\qquad \qquad
${}^{**}$ only if $\deg>6$ \qquad\qquad\qquad\qquad\qquad\qquad }
\caption{ The cell in column ``$m=\cdot$'' and row ``$s$'' shows an exact value or an upper bound for the number of neighbours of $S\in G_n,S\cong \Delta_m$ that are of the form $T\cong\Delta_{m+s}$. If $m=0$ then $S$ is a vertex in $G$ and ``$\deg$'' refers to its \textit{degree in $G$}. The last row shows the sum of the preceding rows, hence an exact value or upper bound for the degree of $S$. } \label{tab:degrees_in_Gn} \end{table}
Inspecting the last row of \cref{tab:degrees_in_Gn} reveals that 26 is an upper bound on the $G_n$-degree of $S$ independent of $m$. The relevant information to be carried over to the next section is when this bound is attained with equality. We summarize this in the following remark:
\begin{rem}\quad \label{rem:degrees} \begin{enumerate}[label=(\roman*)]
\label{rem:chartwentysix}
\item if $m=0$ and $\deg_G(S)>6$, then the degree of $S$ in $ G_n$ is \textit{exactly} $3\deg$, in particular, is divisible by three and therefore $\not=26$.
\item if $m=0$ and $\deg_G(S)=6$, then the degree of $S$ in $G_n$ is exactly $26$.
\item if $S$ has a neighbour $T$ of type $+6$, then any other neighbour of $S$ is contained in $T$, in particular, all possible neighbours of all types exist, and the $G_n$-degree of $S$ is $26$.
\end{enumerate} \end{rem}
In the remainder of this section we verify \cref{tab:degrees_in_Gn}. We found that the most direct~way to do this is by visualisation: the following subsections contain depictions of the various types of adjacencies in $G_n$, which are easily verified to be exhaustive, and which then allow verification of \cref{tab:degrees_in_Gn}.
\todo{don't use "we" to refer to the authors\msays{What else?}}
\subsubsection{General remarks to the figures} \label{ssec:figure_remarks}
Verification of \cref{tab:degrees_in_Gn} is split into three cases associated to three figures: the ``downwards adjacencies'' ($s<0$) in \cref{fig:pyramid_neighbors_strictly_downwards}, the ``same level adjacencies'' ($s=0$) in \cref{fig:pyramid_neighbors_same_level}, and the ``upwards adjacencies'' ($s>0$) in \cref{fig:pyramid_neighbors_strictly_upwards}.
For visualisation reasons, in each case the larger triangular-shaped graph is depicted in grey and the smaller one in black.
In \cref{fig:pyramid_neighbors_strictly_downwards} and \cref{fig:pyramid_neighbors_same_level}, the grey triangular-shaped graphs depict $S$, whereas the black ones depict $T$.
In \cref{fig:pyramid_neighbors_strictly_upwards}, the black triangular-shaped graphs depicts $S$ whereas the grey ones depict $T$. If $m=0$ then $S$ is depicted as a single point with incident edges. Likewise, if $m=-s$ and thus $T\cong \Delta_0$, then $T$ is depicted as a single point, but without incident edges.
Often there are several adjacencies that are related by an obvious symmetry (``rotation'' or ``reflection'' of a triangular-shaped graph). We depict only one of them and~denote the number of related configurations next to it, e.\,g.\ $\times 1$ (if it is unique) or $\times 3$.
Note that we depict \textit{potential} configurations; we do not claim that they are actually all present in $G_n$ (except for $m=0$, $s\in\{0,+2\}$, see \cref{ssec:same_level_adjacencies} and \cref{ssec:upwards_adjacencies}).
\subsubsection{Adjacencies for $\boldsymbol{s\in\{-6,-4,-2\}}$} \label{ssec:downwards_adjacencies}
\begin{figure}
\caption{ $S$ shown in grey, and its potential neighbour $T$ show in black. The distinction in the case $m\ge 6$ and $s=-6$ is for visualisation only. }
\label{fig:pyramid_neighbors_strictly_downwards}
\end{figure}
The reader can verify that the configurations shown in \cref{fig:pyramid_neighbors_strictly_downwards} are exhaustive and that their number agrees with the respective entries in \cref{tab:degrees_in_Gn}.
\subsubsection{Adjacencies for $\boldsymbol{s=0}$} \label{ssec:same_level_adjacencies}
\begin{figure}
\caption{ $S$ shown in grey, and its potential neighbour $T$ show in black. }
\label{fig:pyramid_neighbors_same_level}
\end{figure}
Again, the reader can verify that the configurations shown in \cref{fig:pyramid_neighbors_strictly_downwards} are exhaustive and that their number agrees with the respective entries in \cref{tab:degrees_in_Gn}.
For $m=s=0$ both $S$ and $T$ are vertices in $G$, and $S$ and $T$ are adjacent in $G_n$ if and only if they are adjacent in $G$. This case therefore comes down to enumerating the neighbours of $S$ in $G$ and it yields an exact number rather than just an upper bound.
\subsubsection{Adjacencies for $\boldsymbol{s\in\{+2,+4,+6\}}$} \label{ssec:upwards_adjacencies}
\begin{figure}
\caption{ $S\cong\Delta_m$ shown in black, and each potential $G_n$-neighbour $T\cong\Delta_{m+s}$ show in grey for the upwards adjacencies $s\in\{+2,+4,+6\}$. }
\label{fig:pyramid_neighbors_strictly_upwards}
\end{figure}
Once again, the reader can verify that the configurations shown in \cref{fig:pyramid_neighbors_strictly_downwards} are exhaustive and that their number agrees with the respective entries in \cref{tab:degrees_in_Gn}.
For $m=0$ and $s=+2$, the reader should verify that for each vertex of $G$, there are indeed triangular-shaped graphs $\cong\Delta_2$ as shown in \cref{fig:pyramid_neighbors_strictly_upwards} (recall that the minimum degree of $G$ is $\ge 6$), and that there are exactly as many copies of $\Delta_2$ as the vertex has neighbours This case too therefore does not only yield an upper bound, but an exact number.
Below (in \cref{fig:hyperbolic_7}) we exemplary show their existence in the 7-regular triangulation.
Note further that the cases $m=0$, $s\in\{+4,+6\}$ can only happen if the degree of $S$ in $G$ is \textit{exactly} six, because then $S$ is an inner vertex of the triangular-shaped graph $T$.\todo{For $m\geq 1$, if all neighbours exist, they are subgraphs of the largest neighbour (I need this for \cref{lem_distancelowerbound})}
\begin{figure}
\caption{ The two types of triangular-shaped graphs $\cong\Delta_2$ that exist around every vertex of $G$. }
\label{fig:hyperbolic_7}
\end{figure}
\fi
It remains to show that if $G$ contains arbitrarily large \triangularshaped\ subgraphs, then the graph invariant $D(G_n)$ is both finite and unbounded as $n\to\infty$.\nolinebreak\space We first prove finitude of $D(G_n)$ if $n\in 2\menge{N}$ (in particular, $n\ge 2$, as $D(G_0)=D(G)$ might be infinite).
\begin{lem}\label{lem_distanceupperbound}
If $n\in 2\menge{N}$, then each $S\in V(G_n)$ has a distance to $\irr{G_n}$ of at most $n/6+1$. That is, $D(G_n)\le n/6+1$. \end{lem}
\begin{proof} Suppose $S\cong \Delta_m$ with $m\in 2\menge{N}$. We distinguish two cases.
\ul{Case 1:} there is a $T\in V(G_n)$ of side length $\mu\ge 6$ and $\dist_{G_n}(S,T)\le 2$. We~then fix a maximally long path $T_0T_1\dots T_\ell$ in $G_n$ with $T_0\coloneqq T$ and $T_i\cong \Delta_{\mu+6i}$ (i.\,e.,\ $T_i$~and $T_{i+1}$ are adjacent of type $\pm 6$; see \cref{fig:increasing_pyramids}).
Since the path is maximal, $T_\ell$ has no~$G_n$-neighbour of type $+6$, and since $T_\ell$ is of side length $\mu+6\ell\ge \mu \ge 6$, we have~$T_\ell\in$ $\irr{G_n}$ by \cref{res:m_ge_6_degrees_26_iff}.
As a vertex of $G_n$, $T_\ell$ is of side length at most $n$, and hence $\mu+6\ell\leq n\Longrightarrow \ell\leq n/6-\mu/6\le n/6-1$.
We conclude
\begin{align*}
\dist_{G_n}(S,\irr{G_n}) &\le \dist_{G_n}(S,T)+\dist_{G_n}(T,\irr{G_n}) \\ &\le 2+ (n/6 -1) =n/6+1.
\end{align*}
\begin{figure}
\caption{Initial segment $T_0T_1T_2\ldots$ of an increasing path of \triangularshaped\ subgraphs\ of $G$ where $T_i$ and $T_{i+1}$ are adjacent of type $\pm 6$.}
\label{fig:increasing_pyramids}
\end{figure}
\ul{Case 2:} there is \textit{no} $T\in V(G_n)$ of side length $\mu\ge 6$ and $\dist_{G_n}(S,T)\le 2$. Then we can conclude two things: first, $m< 6$ (otherwise, choose~$T\coloneqq S$) and so there is an $s\in N_{G_n}(S)$ of side length zero. Second, $s$ has no neighbour of~type $+6$ (otherwise, set $T$ to be this neighbour). But then $s$ cannot have degree~26~by~\cref{res:m_eq_0_degrees_26_if}, and therefore $$\dist_{G_n}(S,\irr{G_n})\le\dist_{G_n}(S,s)= 1\le n/6+1.$$ \end{proof}
Finally, we show that $D(G_n)$ is unbounded as $n\to\infty$, assuming that there are~arbitrarily large \triangularshaped\ subgraphs\ of $G$.
\begin{lem}\label{lem_distancelowerbound}
If $G$ contains a \triangularshaped\ subgraph\ of side length $n\in 48\menge{N}$, then there exists an $S'\in V(G_n)$ with distance to $\irr{G_n}$ of more than $n/48$.
That is,\nolinebreak\space $D(G_n)> n/48$.
\end{lem}
\begin{proof}
Choose a \triangularshaped\ graph\ $S\in V(G_n)$ of side length $n\in 48\menge{N}$.
Roughly, the idea is to define a set $\mids S\subseteq \degr{G_n}$ that contains ``deep vertices'', i.\,e.,\ vertices that have no ``short'' $G_n$-paths that lead out of $\mids S$.
We claim that the following set has all the necessary properties:
$$
\mids{S} \coloneqq \Bigg\{
T\in V(G_n)\;\Bigg\vert
\begin{array}{l}
T\subseteq S, \\
\text{$T$ has side length $m\ge 6$ and} \\
\dist_G(T,\partial S)\ge 4
\end{array}
\Bigg\}.
$$
The following observation will be used repeatedly and we shall abbreviate it by $(*)$:\nolinebreak\space if $T\in V(G_n)$ is of side length $m\ge 6$ (e.\,g.\ if $T\in \mids{S}$) and if $T'\in N_{G_n}(T)$ is some $G_n$-neighbour, then $\dist_G(T,v)\le 4$ for all $v\in T'$.
This can be verified by considering the configurations shown in \cref{fig:large_enough_cases}. The bound $\le 4$ is best possible as seen in \cref{fig:dist_4}.
\begin{figure}
\caption{The ``corner vertex'' $v$ of $T\in V(G_n)$ (light grey) has $G$-distance four to the neighbour $T'\in N_{G_n}(T)$ of type $-6$ (dark grey).}
\label{fig:dist_4}
\end{figure}
We first verify $\mids{S}\subseteq \degr{G_n}$.
Fix $T\in \mids{S}$ and consider an embedding of $S$ into the hexagonal lattice.
In this embedding, $T\subseteq S$ has a neighbour $T'$ of type $+6$ that, for all we know, might partially lie outside of $S$; though we now show that actually $T'\subseteq S$:
in fact, for all $v\in V(T')$ holds
$$\dist_G(v,\partial S)\ge\dist_G(T,\partial S)-\dist_G(T,v) \ge 4-4 = 0,$$
where we used both $(*)$ and $T\in\mids{S}$ in the second inequality.
Thus $T'\subseteq S$ and $T'$ also exists in $G$.
Note that this argument shows that all $G_n$-neighbours of $T$ are contained in $S$.
We denote the latter fact by $(**)$ as we reuse it below.
For now we conclude that since $T$ has a $G_n$-neighbour of type $+6$, we have $T\in\degr{G_n}$ by \cref{res:m_ge_6_degrees_26_iff}.
Next we identify a ``deep vertex'' in $\mids S$, that is, a vertex with distance to $V(G_n)\setminus \mids S$ of more than $n/48$.
We claim that we can choose for this the ``central'' \triangularshaped\ subgraph\ $S'\cong \Delta_{n/2}$.
By that we mean the \triangularshaped\ graph\ obtained~from $S$ by repeatedly deleting the boundary $n/6$ times.
The resulting \triangularshaped\ subgraph\ has side length $n/2$ and $\dist_G(S',\partial S)=n/6$.
Since $n\ge 48$, we have both $m_0\coloneqq n/2\ge 6$ and $\dist_G(S',\partial S)=n/6\ge 4$, and therefore $S'\in\mids{S}$.
It remains to show that we have $\ell\coloneqq \dist_{G_n}(S',V(G_n)\setminus \mids{S})> n/48$.
Let $S_0'\ldots S_\ell'$ be a path in $G_n$ from $S_0'\coloneqq S'$ to some $S'_\ell\not\in \mids{S}$.
Let $m_i\in \menge{N}_0$ be the side length of $S'_i$.
Since $S_{\ell-1}'\in \mids S$, by $(**)$ we have $S_\ell'\subseteq S$.
Thus, for $S_\ell'$ to be not in $\mids S$, only two reasons are left, and we verify that either implies $\ell> n/48$:
\begin{itemize}
\item \ul{Case 1:} $m_\ell<6$.
Since $S'_{i-1}$ and $S'_i$ are adjacent in $G_n$ they can differ in side~length by at most six (via an adjacency of type $\pm 6$). That is, $m_{i-1}-m_i\le 6$, and thus
$$6\ell\ge m_0-m_\ell> n/2-6 \;\implies\; \ell> n/12-1\ge n/48.$$
\item \ul{Case 2:} $\dist_G(S'_\ell,\partial S) < 4$. Note first that for all $i\in\{1,\ldots,\ell\}$ holds
$$ \dist_G(S'_{i-1},\partial S)- \dist_G(S'_{i},\partial S) \le \dist_G(S'_{i-1},S'_{i}) \overset{\smash{(*)}}\le 4. $$
It then follows
$$4\ell \ge \dist_G(S'_0,\partial S)-\dist_G(S'_\ell,\partial S)> n/6-4 \;\implies\; \ell> n/24-1\ge n/48.$$
\end{itemize}
In both cases, the right-most inequality was obtained using $n\ge 48$. \end{proof}
Since in our setting we have $G_n\cong k^n G$, and since $D(\,\cdot\,)$ is a graph invariant, we have $D(k^n G)=D(G_n)$. We can then conclude
\begin{cor}\label{cor_divergenceofdistance}
If $G$ contains $\Delta_n$ as a subgraph for $n\in 48\menge{N}$, then
$$D(k^n G)\in\big( \tfrac n{48}, \tfrac n6+1\big],$$
where $D(\,\cdot\,)$ is the graph invariant defined in \eqref{eq:invariant}.
In particular, if $G$ contains arbitrari\-ly large \triangularshaped\ subgraphs, then $D(k^n G)$ is unbounded as $n\to\infty$, and $G$~is~therefore clique divergent. \end{cor}
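For a concrete sense of scale (a simple arithmetic illustration that is not needed later): if $G$ contains $\Delta_{480}$ as a subgraph, then \cref{cor_divergenceofdistance} applied with $n=480$ yields
$$D(k^{480} G)\in\big(\tfrac{480}{48},\tfrac{480}{6}+1\big]=(10,\,81],$$
so $k^{480}G$ cannot be isomorphic to any iterated clique graph $k^m G$ with $D(k^m G)\le 10$.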
Together with \cite[Theorem 7.9]{BAUMEISTER2022112873} we conclude the characterisation of clique convergent triangularly simply connected locally cyclic graphs of minimum degree $\delta\geq 6$.
\begin{theoremX}{A}[Characterisation theorem for triangularly simply connected graphs]\label{cor_classificationpika}
A triangularly simply connected locally cyclic graph of minimum degree $\delta\geq 6$ is clique divergent\ if and only if it contains arbitrarily large \triangularshaped\ subgraphs.
\end{theoremX}
\section{Proof of Theorem B} \label{sec:proof_of_B}
In this section we prove \thmB. We need to recall basic facts about group actions and graph coverings, which we do in \cref{sec:proof_of_B_actions} and \cref{sec:proof_of_B_covers} below.
\subsection{Group Actions, \texorpdfstring{$\mathemph{\Gamma}$}{Gamma}-Isomorphisms, and Quotient Graphs} \label{sec:proof_of_B_actions}
We say that a group $\Gamma$ \textbf{acts} on a graph $G$ if we have a group homomorphism $\sigma:\Gamma\to\Aut(G)$. For every $\gamma\in \Gamma$ and every $v\in V(G)$, we define $\gamma v:=\sigma(\gamma)(v)$.
The graph $G$ together with this action is called a \textbf{$\mathemph{\Gamma}$-graph}.
For every subgroup $\Gamma\leq \Aut(G)$, $G$ is a $\Gamma$-graph in a natural way. For two $\Gamma$-graphs $G$ and $H$, we call a graph isomorphism $\phi\colon G\to H$ a \textbf{$\mathemph{\Gamma}$-isomorphism}, if $\phi(\gamma v)=\gamma\phi(v)$ for each $v\in V(G)$ and each $\gamma\in \Gamma$. \begin{rem}\label{lem_clique_operator_keeps_equivariance}
If $G$ is a $\Gamma$-graph, so is $kG$ with respect to the induced action $\gamma Q=\{\gamma v\mid v\in Q\}$. Note that in \cite{larrion2000locally} this action is
denoted as the natural action of the group $\Gamma_{k}\leq \Aut(kG)$, which is isomorphic to $\Gamma$.
For a second $\Gamma$-graph $H$ and a $\Gamma$-isomorphism $\phi\colon G\to H$, the map $\mathemph{\phi_k}\colon kG\to kH,Q\mapsto \{\phi(v)\mid v\in Q\}$ is a $\Gamma$-isomorphism.
\end{rem}
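To spell out the routine verification behind the last claim: for every clique $Q\in V(kG)$ and every $\gamma\in\Gamma$ we have
$$\phi_k(\gamma Q)=\{\phi(\gamma v)\mid v\in Q\}=\{\gamma\,\phi(v)\mid v\in Q\}=\gamma\,\phi_k(Q),$$
where the middle equality uses that $\phi$ is a $\Gamma$-isomorphism; and $\phi_k$ is a graph isomorphism because the graph isomorphism $\phi$ maps cliques onto cliques and preserves non-empty intersections.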
We next observe that the geometric clique graphs from our construction are $\Gamma$-graphs as well, and that the isomorphism from \cref{res:structure_theorem} respects this action.
\begin{rem}\label{lem_C_is_equivariant} If a $\Gamma$-graph $G$ is locally cyclic, triangularly simply connected and of minimum degree $\delta\ge 6$, the action of $\Gamma$ on
$G$ induces an action on the \triangularshaped\ subgraphs\ of $G$ which makes the geometric clique graph $G_n$ into a $\Gamma$-graph~as~well.
Furthermore, the isomorphism $\mathemph{\psi_n}\colon G_n\to k^n G$ provided by \cref{res:structure_theorem} is a $\Gamma$-isomorphism.
This follows from the fact that the isomorphisms $\boldsymbol{C_n} \colon G_n\to kG_{n-1}$, which are explicitly constructed in \cite[Corollary 6.9]{BAUMEISTER2022112873}, are $\Gamma$-isomorphisms, and that $\psi_n$ can be written as the following chain of $\Gamma$-isomorphisms:
\begin{align*} G_n\xrightarrow{C_n} k G_{n-1}
&\xrightarrow{(C_{n-1})_k} k(k G_{n-2})=k^2 G_{n-2}
\xrightarrow{(C_{n-2})_{k^2}} k^2(k G_{n-3})=k^3 G_{n-3}
\\&\longrightarrow\cdots\longrightarrow k^{n-2}(k G_1)=k^{n-1} G_1\xrightarrow{(C_1)_{k^{n-1}}} k^{n-1}(kG)=k^n G. \end{align*}
\end{rem}
For any vertex $v\in V(G)$ of a $\Gamma$-graph $G$, we denote the orbit of $v$ under the action of $\Gamma$ by $\mathemph{\Gamma v}$. These orbits form the vertex set of the \textbf{quotient graph} $\mathemph{G/\Gamma}$, two of which are adjacent if they contain adjacent representatives. Note that if two graphs $G$ and $H$ are $\Gamma$-isomorphic, the quotient graphs $G/\Gamma$ and $H/\Gamma$ are isomorphic.
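As a small concrete example (included only for illustration): let $\Gamma=\{\mathrm{id},\gamma\}$ act on the $6$-cycle with vertices $v_0,\ldots,v_5$ (indices modulo $6$) via the rotation $\gamma v_i=v_{i+3}$. The orbits are $\{v_0,v_3\}$, $\{v_1,v_4\}$ and $\{v_2,v_5\}$, and any two of them contain adjacent representatives, so the quotient graph is a triangle.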
\subsection{Triangular Covers}\label{sec:universal_covers} \label{sec:proof_of_B_covers}
In the following, we transfer the convergence criterion of \thmA\ from the triangu\-lar\-ly simply connected case to the general case using the triangular covering maps from \cite{larrion2000locally}.
We define the topologically inspired term of ``triangular simple connectivity''
via the concept of walk homotopy.
As usual, a \textbf{walk of length $\mathemph{\ell}$} in a graph $G$ is a finite~sequence of vertices $\alpha=v_0\ldots v_\ell$ such that each pair $v_{i-1}v_i$ of consecutive vertices is adjacent. The vertex $v_0$ is called the \textbf{start vertex}, the vertex $v_\ell$ is called the \textbf{end vertex},
a walk is called \textbf{closed} if start and end vertex coincide,
and it is called \textbf{trivial} if it has length zero.
In order to define the homotopy relation on walks, we define four types of \textbf{elementary moves} (see also \cref{fig_elem_moves}). Given a walk that contains three consecutive vertices that form a triangle in $G$, the \textbf{triangle removal} shortens the walk by removing the middle one of them. Conversely, if a walk contains two consecutive vertices that lie in a triangle of $G$, the \textbf{triangle insertion} lengthens the walk by inserting the third vertex of the triangle between the other two. The \textbf{dead end removal} shortens a walk that contains a vertex twice with distance two in the walk by removing one of the two occurrences as well as the vertex between them. Conversely, the \textbf{dead end insertion} lengthens a walk by inserting, directly after one of its vertices, an adjacent vertex followed by that vertex itself again.
Note that elementary moves do not change the start and end vertices of walks, not even of closed ones.
\begin{figure}
\caption{Visualisations of the elementary moves.}
\label{fig_elem_moves}
\end{figure}
Two walks are called \textbf{homotopic} if it is possible to transform one into the other by performing a finite number of elementary moves. The graph $G$ is called \textbf{triangularly simply connected} if it is connected and if every closed walk is homotopic to a trivial one.
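As a small illustrative example (not needed in the sequel): if $\{u,v,w\}$ is a triangle of $G$, then the closed walk $u\,v\,w\,u$ is homotopic to the trivial walk $u$. Indeed, a triangle removal applied to the consecutive vertices $u,v,w$ yields the walk $u\,w\,u$, and a dead end removal applied to the two occurrences of $u$ then yields the trivial walk $u$. In particular, the boundary walk of any triangle is homotopic to a trivial walk.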
A \textbf{triangular covering map} is a homomorphism $p\colon \tilde{G} \to G$ between two connected graphs which is a local isomorphism, i.\,e.,\ the restriction $p\vert_{N[\tilde{v}]}\colon N[\tilde{v}]\to N[p(\tilde{v})]$ to the closed neighbourhood of any vertex $\tilde{v}$ of $\tilde{G}$ is an isomorphism and in this case, $\tilde{G}$ is called a \textbf{triangular cover} of $G$. The term ``triangular'' refers to the \textbf{unique triangle lifting property} which can be used as an alternative definition and is defined in \cref{appendix_b}. For a triangular covering map $p\colon \tilde{G} \to G$, we define the map $\mathemph{p_{k^n}}\colon k^n\tilde{G}\to k^n G$ which is constructed from $p$ recursively by $p_{k^0}=p$ and $p_{k^n}(\tilde{Q})=\{p_{k^{n-1}}(\tilde{v})\mid \tilde{v}\in \tilde{Q}\}$ for $n\geq 1$. By \cite[Proposition 2.2]{larrion2000locally}, $p_{k^n}$ is a triangular covering map, as well.
A triangular covering map $p\colon \tilde{G} \to G$ is called \textbf{universal} if $\tilde{G}$ is triangularly simply connected, and in this case $\tilde{G}$ is called the \textbf{universal (triangular) cover} of $G$. Note that every connected graph has a universal cover that is unique up to isomorphism. A proof can be found in \cite[Theorem 3.6]{rotman1973covering} or in the appendix in \cref{universal_corver_exandunique}.
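For intuition, here is an example that is not needed for the proofs: let $\tilde G$ be the hexagonal lattice, i.\,e.,\ the $6$-regular triangulation of the Euclidean plane, and let $G$ be a $6$-regular triangulation of the torus obtained as the quotient of $\tilde G$ by a rank-two lattice of translations. Provided the translation vectors are long enough that the quotient map $p\colon \tilde G\to G$ restricts to an isomorphism on every closed neighbourhood, $p$ is a triangular covering map; and since $\tilde G$ is triangularly simply connected, it is then even a universal one.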
For the following lemma, we need to use that triangular simple connectivity is preserved under the clique operator. This is proven in \cite{larrion2009fundamental},
but we also provide an elemen\-tary proof in the appendix in \cref{lem_cliqueoperator_preserves_simple_connectivty}.
\begin{lem}\label{conv_univ_cover}
If a connected graph $G$ is clique convergent, so is its universal triangular cover
$\tilde{G}$. \end{lem}
\begin{proof}
Let the clique operator be convergent on $G$, i.\,e.,\ there are $n,r\in \menge{N}$ such that $k^n G\cong k^{n+r}G$, and let $p\colon \tilde{G}\to G$ be a universal triangular covering map.
As $p_{k^n}$ and $p_{k^{n+r}}$ are triangular covering maps and $k^n\tilde{G}$ and $k^{n+r}\tilde{G}$ are triangularly simply connected by \cref{lem_cliqueoperator_preserves_simple_connectivty}, they are universal triangular covering maps. As the universal cover is unique up to isomorphism (\cref{universal_corver_exandunique}),
$k^n\tilde{G}\cong k^{n+r}\tilde{G}$ and $\tilde{G}$ is clique convergent. \end{proof}
In the following, we show that for locally cyclic graphs with minimum degree $\delta\geq 6$ the converse implication is true as well. This has been stated in \cite{BAUMEISTER2022112873} as Lemma 8.8, but the proof contains a gap, as it does not show that $k^n\tilde{G}$ and $k^{n+r}\tilde{G}$ are $\Gamma$-isomorphic (in fact, this is still unknown if $\tilde G$ is a cover of a general graph $G$; see also \cref{q:covers}).
We will close this gap in the remainder of this section.
In order to do this, we need the definition of Galois maps. For a group $\Gamma$, we call a triangular covering map $p\colon \tilde{G}\to G$ \textbf{Galois with $\mathemph{\Gamma}$} if $\tilde{G}$ is a $\Gamma$-graph such that the vertex preimages of $p$ are exactly the orbits of the action, which implies $\tilde{G}/\Gamma\cong G$. By \cite[Proposition 3.2]{larrion2000locally}, if $p$ is Galois with $\Gamma$, so is $p_{k^n}$.
The following lemma is proven in \cite[Lemma 8.7]{BAUMEISTER2022112873}, but again, an elementary proof is provided in \cref{deck_trafo_group_galois}.
\begin{lem}[from {\cite[Lemma 8.7]{BAUMEISTER2022112873}}]\label{galois_1}
A universal triangular covering map $p\colon \tilde{G}\to G$ is Galois with $\Gamma\coloneqq \{\gamma\in \Aut(\tilde{G})\mid p\circ \gamma=p\}$, which is called the \textbf{deck transformation group} of $p$. Consequently, $(k^n\tilde{G})/\Gamma
\cong k^n G$. \end{lem}
We are now able to deduce the clique convergence of a graph from the clique convergence of its universal cover.
\begin{lem}\label{univ_cover_conv}
Let $G$ be a locally cyclic graph with minimum degree $\delta\geq 6$ and $\tilde{G}$ its~universal triangular cover.
If $\tilde{G}$ is clique convergent, then so is $G$.
\end{lem}
\begin{proof}
We start with the universal triangular cover $\tilde{G}$ being clique convergent.
By~\thmA, there is an $m\in \menge{N}$ such that $\tilde{G}$ does not contain $\Delta_m$ as a subgraph. Consequently, $\tilde{G}_{m-2}$ and $\tilde{G}_m$ are identical and thus $\Gamma$-isomorphic (for every $\Gamma$).
Let $\Gamma$ be the deck transformation group of the universal covering map $p\colon \tilde{G}\to G$. By \cref{galois_1}, this implies $k^n G\cong (k^n\tilde{G})/\Gamma$ for each $n\in \menge{N}_0$. Using the $\Gamma$-isomorphism $\psi_n\colon k^n\tilde{G}\to \tilde{G}_n$ from \cref{lem_C_is_equivariant},
we conclude that $G$ is clique convergent\ via
$$k^{m-2}G\cong (k^{m-2}\tilde{G})/\Gamma \cong \tilde{G}_{m-2}/\Gamma=\tilde{G}_{m}/\Gamma \cong (k^{m}\tilde{G})/\Gamma \cong k^{m}G.$$
\end{proof}
By joining \cref{conv_univ_cover}, \cref{univ_cover_conv}, and \thmA, we conclude the characterisation of clique convergent locally cyclic graphs with minimum degree $\delta\geq 6$.
\begin{theoremX}{B}[General characterisation theorem]
A (not necessarily finite) connected locally cyclic graph of minimum degree $\delta\geq 6$ is clique divergent\ if and only if its universal triangular cover contains arbitrarily large \triangularshaped\ subgraphs.
\end{theoremX}
\section{Conclusion and Further Questions} \label{sec:conclusions}
In this article, we completed
the characterisation of locally cyclic graphs of minimum degree $\delta\ge 6$ with a convergent clique dynamics, first in the triangularly simply connected case (\thmA) and then in the general case (\thmB).
Our findings turned out to be consistent with the geometric intuition from the finite case: the hexagonal lattice is clique divergent, as is any of its quotients. The finite analogues are the 6-regular triangulations of surfaces with Euler characteristic zero, which were known to be clique divergent\ by \cite{larrion1999clique,larrion2000locally}.
We are tempted to say that the hexagonal lattice is clique divergent\ because it has a ``flat geometry''.
\thmA\ may allow for a similar interpretation: if a triangularly simply connected locally cyclic graph $G$ of minimum degree $\delta\ge 6$ is clique divergent, then it contains arbitrarily large \triangularshaped\ subgraphs. As a consequence, vertices of degree $\ge 7$ cannot be distributed densely everywhere in $G$. Since degrees $\ge 7$ can be interpreted as a discrete analogue of negative curvature (we think of the 7-regular triangulation of the hyperbolic plane), a potential geometric interpretation of \thmA\ is that $G$ is clique divergent\ because it is ``close to being flat'' on large parts, which then dominate the clique dynamics.
To consolidate this interpretation, it would be helpful to shed more light on the lower degree analogues: locally cyclic graphs of minimum degree $\delta=5$ or even $\delta=4$. There however, the clique dynamics might be governed by different effects.
In a sense, it was surprising to find that for minimum degree $\delta\ge 6$, the asymptotic behaviour of the clique dynamics is determined only on the global scale, that is, by the presence or absence of subgraphs in $G$ from a relatively simple infinite family (the \triangularshaped\ graphs). Such a description should not be expected for smaller minimum degree: for $\delta \le 5$ there exist finite graphs that are clique divergent\ -- even simply connected ones -- and such clearly cannot contain ``arbitrarily large'' forbidden structures in any sense.
It might be worthwhile to first study triangulations of the plane of minimum degree $\delta= 5$ or $\delta= 4$, since those are not subject to the same argument of ``finite size''.\nolinebreak\space Yet,\nolinebreak\space as far as we are aware, it is already unknown which of the following graphs are clique divergent: consider a triangulated sphere of~minimum degree $\delta \in\{4,5\}$ (e.g.\ the octahedron or ico\-sahedron). Remove a vertex or edge together with all incident triangles -- which~leaves us with a triangulated disc -- and extend this to a triangulation of the Euclidean plane that is 7-regular outside the interior of the disc (see \cref{fig:disc_example}).
For all we know, it is at least conceivable that below minimum degree $\delta=6$ divergence can appear as a local phenomenon that does not require arbitrarily large ``bad regions''.
\begin{figure}
\caption{An ``almost 7-regular'' triangulation of the Euclidean plane, that is, it is 7-regular outside a small region.}
\label{fig:disc_example}
\end{figure}
For triangulations of closed surfaces (and further mild assumptions, see below), the most elementary open question is whether non-negative Euler characteristic already~implies clique-divergence. This has previously been conjectured by Larrión, Neumann-Lara and Pizaña \cite{larrion2002whitney}, and we shall repeat it here.
\begin{conj}
\label{conj:non_neg_Euler_diverges}
If a locally cyclic graph $G$ of minimum degree $\delta\ge 4$ triangulates a closed surface of Euler characteristic $\chi\ge 0$ (i.\,e.,\ a sphere, projective plane, torus or Klein bottle), then
$G$ is clique divergent. \end{conj}
To shed further light on the perceived connection between topology and clique dynam\-ics, the study of further topologically motivated generalisations appears worthwhile. We briefly mention two of them.
First, one could turn to higher-dimensional analogues, that is, triangulations of higher-dimensional manifolds and their 1-skeletons.
\begin{quest} Can something be said about when the clique dynamics of the triangulation of a manifold converges depending on the topology of the manifold? \end{quest}
The second generalisation is to allow for triangulations of surfaces \textit{with boundary}.\nolinebreak\space Such triangulations can be formalised as graphs for which each open neighbourhood is either a cycle (of length at least four) or a path graph -- we shall call them \textbf{locally cyclic with boundary}. Triangulations of bordered surfaces~have already received some attention: in \cite[Theorem 1.4]{larrion2013iterated} the authors show that, except~for~the disc, each compact surface (potentially with boundary) admits a clique divergent\ triangu\-lation. In contrast, they conjecture that discs do not have divergent triangulations:
\begin{conj} If a locally cyclic graph $G$ with boundary and of minimum degree $\delta\ge 4$ triangulates a disc, then it is clique convergent\ (actually, \textbf{clique null}, that is, it converges to the one-vertex graph). \end{conj}
This is known to be true if all interior vertices of the triangulation have degree $\ge 6$ \cite[Theorem 4.5]{larrion2003clique}.
Moving on from the topologically motivated investigations, yet another route is to~generalise from locally cyclic graphs of a particular minimum degree to graphs of a lower-bounded \textbf{local girth} (that is, the girth of each open neighbourhood is bounded from below). In fact, it has already been noted by the authors of \cite{larrion2002whitney} that their results apply not only to locally cyclic graphs of minimum degree $\ge 7$, but equally to general graphs of local girth $\ge 7$.
\begin{quest} Can the results for locally cyclic graphs of minimum degree $\delta\ge 6$ be generalised to graphs of local girth $\ge 6$? \end{quest}
Various other open questions emerge from the context of graph coverings. As we~have seen in \cref{conv_univ_cover}, if a graph $G$ is clique convergent, so is its universal triangular cover $\tilde G$. Even stronger: if $k^n G\cong G$, then $k^n\tilde G\cong \tilde G$. If $G$ is locally cyclic of~minimum degree $\delta\ge 6$, then conversely, by \cref{univ_cover_conv} convergence of $\tilde G$ implies convergence of $G$.
For general triangular covers $p\colon\tilde G\to G$ (between connected locally finite graphs)~how\-ever, such connections are not known.
If both $\smash{\tilde G}$ and $G$ are finite, then a straightforward pigeonhole argument shows that clique convergence of $G$ and of $\tilde G$ are equivalent. Yet, whether finite or infinite, it is generally unknown whether the statements $k^n G\cong G$ and $k^n\tilde G\cong \tilde G$ are always equivalent. We summarise all of this in the following question:
\begin{quest}\label{q:covers}
Let $p\colon\tilde{G}\to G$ be a triangular covering map between two connected~locally finite graphs. Is $\smash{\tilde G}$ clique convergent\ if and only if $G$ is clique convergent?
To~consider the directions separately, we ask:
\begin{myenumerate}
\item
Is there an analogue of \cref{conv_univ_cover} for non-universal covering maps: if $G$ is clique convergent\ but $p$ is not universal, is $\tilde G$ clique convergent\ as well?
\item
If $\tilde G$ is clique convergent, is $G$ clique convergent\ as well? \end{myenumerate}
An even stronger version of the question is: is $k^n\tilde G\cong \tilde G$ equivalent to $k^n G\cong G$ for every $n\in \menge{N}$? Is this at least true for finite graphs?
\end{quest}
\par
\parindent 0pt \textbf{Funding.} The second author was supported by the British Engineering and Physical Sciences Research Council [EP/V009044/1].
\par
\parindent 0pt \textbf{Acknowledgement.} We thank Markus Baumeister and Marvin Krings for their careful reading of the article and their many valuable comments.
\appendix
\section{The Clique Graph Operator and Simple Connectivity}\label{appendix_a}
In this section, we show that triangular simple connectivity is preserved under the clique graph operator.
A weaker version was obtained by Prisner \cite{PRISNER1992199} in 1992, who proved that the clique graph operator preserves the first $\menge{Z}_2$ Betti number. Larrión and Neumann-Lara \cite{larrion2000locally} then extended this in 2000 to the isomorphism type of the triangular fundamental group. An extension to more general graph operators (including the clique graph~operator and the line graph operator) was proven by Larri{\'o}n, Piza{\~n}a, and Villarroel-Flores \cite{larrion2009fundamental} in 2009.
The proof presented here is completely elementary, as it explicitly constructs a sequence of elementary moves that transforms a given closed walk to the trivial one.
In order to be triangularly simply connected, the clique graph first needs to be connected.
\begin{lem}\label{lem_connectivity_is_preserved}
For a connected graph $G$, the clique graph $kG$ is also connected. \end{lem}
\begin{proof}
Let $Q,Q'\in V(kG)$ be two cliques of $G$. We choose two vertices $v\in Q$ and $v'\in Q'$. As $G$ is connected, there is a shortest walk $v_0\ldots v_\ell$ in $G$ connecting $v_0=v$ to $v_\ell=v'$. For each $i\in \{1,\ldots,\ell\}$ we choose a clique $Q_i$ that contains the pair of consecutive vertices $v_{i-1}$ and $v_i$ of this walk. Thus, for each $i\in \{1,\ldots,\ell-1\}$, the cliques $Q_{i}$ and $Q_{i+1}$ intersect in $v_{i}$ and they are distinct, as otherwise the vertices $v_{i-1}$ and $v_{i+1}$ would be adjacent, in contradiction to the minimality of the walk $v_0\ldots v_\ell$. Thus, $Q_1\ldots Q_\ell$ is a walk in $kG$. If $Q\neq Q_1$ we add $Q$ to the start of the walk and if $Q_{\ell}\neq Q'$ we append $Q'$. The resulting walk connects $Q$ and $Q'$ in $kG$ and, thus, $kG$ is connected. \end{proof}
We establish a concept of correspondence between a walk in $G$ and a walk in $kG$ in order to use the elementary moves that morph the former one into a trivial one as a guideline for doing the same with the latter one.
We say that a closed walk $\alpha$ in $G$ and a closed walk $\alpha'=Q_0\ldots Q_\ell$ in $kG$ with $Q_0=Q_{\ell}$ \textbf{correspond} if for each $i\in \{0,\ldots,\ell-1\} $ there is a walk $v_{i,0}\ldots v_{i,t_i}$ of length $t_i\in\menge{N}_0$ that lies completely in $Q_i$ and $\alpha$ is the concatenation of those walks, i.\,e.,\ $v_{i,t_i}=v_{i+1,0}$ for each $i\in \{0,\ldots,\ell-1\} $.
As $\alpha$ is closed, we then have $v_{0,0}=v_{\ell-1,t_{\ell-1}}=:v_{\ell,0}$.
\begin{figure}
\caption{The correspondence relation between a walk in $G$ and one in $kG$.}
\label{fig:correspond}
\end{figure}
Note that for every closed walk in $kG$ there is a corresponding one in $G$,
which is obtained as follows.
Let $\alpha'=Q_0\ldots Q_{\ell}$ with $Q_0=Q_\ell$ be a closed walk in $kG$.
For every $i\in \{1,\ldots,\ell\}$, we choose $w_i\in Q_{i-1}\cap Q_{i}$, we define $w_0:=w_\ell$, and we drop repeated consecutive vertices. This way, we obtain a walk $\alpha$ which clearly corresponds to $\alpha'$.
\begin{lem}\label{lem_cliqueoperator_preserves_simple_connectivty}
If $G$ is a triangularly simply connected graph, so is $kG$. \end{lem} \begin{proof}
Let $G$ be a triangularly simply connected graph. Thus, $G$ is connected and, by \cref{lem_connectivity_is_preserved}, so is $kG$.
Next, we show that every closed walk in $kG$ can be morphed to a single vertex by a sequence of elementary moves.
Let $\alpha'=Q_0\ldots Q_{\ell}$ with $Q_0=Q_\ell$ be a closed walk in $kG$. Let $\alpha$ be any corresponding walk in $G$, thus $\alpha$ consists of subwalks $v_{i,0}\ldots v_{i,t_i}$ as described above.
Since $G$ is triangularly simply connected,
there is a sequence of elementary moves from $\alpha$ to a trivial walk.
We now describe how we use the first of these moves as a guideline for elementary moves on $\alpha'$; for the remaining moves in the sequence, the same procedure is applied inductively.
Let $\beta$ be the walk in $G$ that is reached from $\alpha$ by the first move. We now perform two steps in order to construct a walk $\beta'$ in $kG$, which is
homotopic to $\alpha'$ and which corresponds to $\beta$.
The first step consists of repeated triangle removals and dead end removals on $\alpha'$ that preserve the correspondence to $\alpha$ until $\alpha'$ cannot be shortened any further in that way.
As no elementary move can change the start and end vertex of a walk, we do not remove $Q_0=Q_\ell$ this way. Moreover, since for every $i\in \{1,\ldots,\ell-1\}$ with $t_i=0$ the clique $Q_i$ could still be removed by a triangle or dead end removal, after this step the only $t_i$ that can be zero is $t_0$.
For the second step, we distinguish two cases.
\ul{Case 1:} insertion moves. If the elementary move from $\alpha$ to $\beta$ is a triangle insertion or dead end insertion, let the indices $i\in \{0,\ldots,\ell-1\}$ and $j\in \{0,\ldots,t_i-1\}$ be chosen such that the additional one or two vertices are inserted between $v_{i,j}$ and $v_{i,j+1}$. For the triangle insertion, the subwalk $v_{i,0}\ldots v_{i,t_i}$ becomes $v_{i,0}\ldots v_{i,j}v^*v_{i,j+1} \ldots v_{i,t_i}$ and for the dead end insertion, it becomes $v_{i,0}\ldots v_{i,j}v^*v_{i,j}v_{i,j+1} \ldots v_{i,t_i}$.
If $v^*\in Q_i$, $\beta'\coloneqq \alpha'$ corresponds to $\beta$ and we are finished.
If $v^*\notin Q_i$, let $Q^*$ be a clique that contains $v^*,v_{i,j}$ and, in the case of a triangle insertion, also $v_{i,j+1}$. Then, the dead end insertion of $Q^*$ and $Q_i$ behind $Q_i$ yields a walk $\beta'$. In the case of a dead end insertion, it corresponds to $\beta$ because $v_{i,0}\ldots v_{i,j}$ and $v_{i,j}\ldots v_{i,t_i}$ lie in $Q_i$ and $v_{i,j}v^*v_{i,j}$ lies in $Q^*$.
In the case of a triangle insertion, it corresponds to $\beta$ because
$v_{i,0}\ldots v_{i,j}$ and $v_{i,j+1}\ldots v_{i,t_i}$ lie in $Q_i$ and $v_{i,j}v^*v_{i,j+1}$ lies in $Q^*$.
\begin{figure}
\caption{The elementary move in $kG$ that corresponds to a dead end insertion (left) or triangle insertion (right) of a vertex which is not in $Q_i$.}
\label{fig:insertion}
\end{figure}
\ul{Case 2:} removal moves. If the elementary move from $\alpha$ to $\beta$ is a triangle removal or dead end removal, let the indices $i\in \{0,\ldots,\ell-1\}$ and $j\in \{0,\ldots,t_i-1\}$ be chosen such that $v_{i,j}$ (triangle removal) or $v_{i,j}$ and $v_{i,j+1}$ (dead end removal) are removed from $Q_i$.
This choice is possible, as the (first) removed vertex and its successor lie in a common $Q_i$.
If $j\geq 1$, the walk $\beta'=\alpha'$ corresponds to $\beta$ as $v_{i,0}\ldots v_{i,j-1}v_{i,j+1}\ldots v_{i,t_i}$ or $v_{i,0}\ldots v_{i,j-1}v_{i,j+2}\ldots v_{i,t_i}$, respectively, still lies in $Q_i$. In case of a dead end removal, this works even if $t_i=j+1$, as then $v_{i,j-1}=v_{i,j+1}=v_{i+1,0}$.
If $j=0$, we know that $i\neq 0$, as otherwise $v_{i,j}=v_{0,0}$ would be removed. Furthermore, we know that if $i=1$, $t_{0}\neq 0$ as this also would imply that $v_{0,0}=v_{1,0}$ is removed. In any case, $v_{i,j }$ lies between $v_{i-1,t_{i-1}-1}$ and $v_{i,1}$.
We now distinguish between two cases.
\ul{Case 2.1:} $v_{i-1,t_{i-1}-1}\notin Q_i$ and $v_{i,1}\notin Q_{i-1}$. As $v_{i,1}\in Q_i$, it is immediately clear that $v_{i-1,t_{i-1}-1}\neq v_{i,1}$, thus it is a triangle removal step and $v_{i-1,t_{i-1}-1}v_{i,0} v_{i,1}$ is a triangle.
Let $Q^*$ be a clique that contains $v_{i-1,t_{i-1}-1}$ and $v_{i,1}$. As $Q^*$ is neither $Q_{i-1}$ nor $Q_i$, the insertion of $Q^*$ between $Q_{i-1}$ and $Q_{i}$ is a triangle insertion and thus the resulting walk $\beta'$ is homotopic to $\alpha'$. Furthermore, $\beta$ and $\beta'$ correspond, because $v_{i-1,0}\ldots v_{i-1,t_{i-1}-1}$ lies in $Q_{i-1}$, $v_{i-1,t_{i-1}-1}v_{i,1}$ lies in $Q^*$ and $v_{i,1}\ldots v_{i,t_i}$ lies in $Q_i$.
\begin{figure}
\caption{The elementary move in $kG$ that corresponds to triangle removal in $G$.}
\label{fig:removal}
\end{figure}
\ul{Case 2.2:} $v_{i-1,t_{i-1}-1}\in Q_i$ or $v_{i,1}\in Q_{i-1}$. We start by assuming that $v_{i,1}\in Q_{i-1}$.
We subdivide $\alpha$ differently in pieces that each lie in one clique $Q_i$. Let $t_{i-1}':=t_{i-1}+1$, $t_i':=t_i-1$ and $t_{s}':=t_s$ for every $s\in \{0,\ldots,\ell-1\}\setminus \{i-1,i\}$.
Furthermore, let $v_{i-1,t_{i-1}'}':=v_{i,1}$, let
$v_{i,u}':=v_{i,u+1}$ for every $u\in\{0,\ldots,t_i'\}$,
and let $v_{s,u}':=v_{s,u}$
for every $s\in \{0,\ldots,\ell-1\}\setminus \{i-1,i\}$ and every $u\in \{0,\ldots,t_s'\}$. Now, the removed vertex is $v_{i-1,t_{i-1}'}'$ and as $t_{i-1}'\geq 1$ we are in a case we have already treated.
The step for $v_{i-1,t_{i-1}-1}\in Q_i$ is analogous.
After proceeding inductively for the other moves of the sequence, we reach a closed walk in $kG$ which corresponds to a trivial walk in $G$. Thus, all vertices of that walk in $kG$ are pairwise adjacent or equal, as they all contain the single vertex of that trivial walk, and the walk can therefore easily be morphed into a trivial one by triangle and dead end removals.
\end{proof}
\section{Some Background on (Universal) Triangular Covers}\label{appendix_b}
In this section, we provide some background on triangular covering maps. We start with some preliminaries on walk homotopy in the preimage and image of a triangular covering map. After that, we spend the main part of this section
showing that the universal cover of a connected graph is unique up to isomorphism and covers every other triangular cover of a connected graph. Afterwards, we show that the universal covering map is Galois, i.\,e.,\ that it can be interpreted as factoring out a group of symmetries from a graph.
Most of the proofs are based on ideas from \cite{rotman1973covering}, but they use only basic concepts and are considerably more concise, since they rely on stronger hypotheses than the respective theorems in \cite{rotman1973covering}.
We remark that every triangular covering map $p\colon \tilde{G}\to G$ fulfils the \textbf{unique edge lifting property}, i.\,e.,\ for each pair of adjacent vertices $v,w\in V(G)$ and each $\tilde{v}\in V(\tilde{G})$ such that $p(\tilde{v})=v$, there is a unique $\tilde{w}\in V(\tilde{G})$ such that $\tilde{v}$ and $\tilde{w}$ are adjacent and $p(\tilde{w})=w$. This property is equivalent to the \textbf{unique walk lifting property}, which says that for each walk $\alpha$ in $G$ and each preimage of its start vertex there is a unique walk $\tilde{\alpha}$ in $\tilde{G}$ which is mapped to $\alpha$. Furthermore, triangular covering maps fulfil the \textbf{triangle lifting property}, i.\,e.,\ for each triangle (i.\,e. $3$-cycle) $\{u,v,w\}$ in $G$ and each preimage $\tilde{u}$ of $u$, there exists a unique triangle $\{\tilde{u}, \tilde{v}, \tilde{w}\}$ in $\tilde{G}$ that is bijectively mapped to $\{u,v,w\}$. Lastly, it follows from the unique walk lifting property that every triangular covering map between two connected graphs is surjective.
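To indicate why the unique edge lifting property holds (a one-line argument using only the definition): since $p\vert_{N[\tilde{v}]}\colon N[\tilde{v}]\to N[v]$ is an isomorphism and $w\in N[v]$, the vertex $\tilde{w}\coloneqq \big(p\vert_{N[\tilde{v}]}\big)^{-1}(w)$ is a neighbour of $\tilde{v}$ with $p(\tilde{w})=w$, and it is the only such neighbour because $p\vert_{N[\tilde{v}]}$ is injective. The unique walk lifting property then follows by applying this observation successively along the walk, and the triangle lifting property follows similarly, since $p\vert_{N[\tilde{u}]}$ maps the unique preimages of $v$ and $w$ inside $N[\tilde{u}]$ onto adjacent vertices.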
Throughout this section, we repeatedly make use of the following lemma connecting triangular covering maps and homotopy of walks.
\begin{lem}[{\cite[Lemma 2.2]{rotman1973covering}}]\label{lem:homotopiclifts}
Given a triangular covering map $p\colon \tilde{G} \to G$ and two homotopic walks $\alpha=v_0\ldots v_\ell$ and $\beta=v'_0\ldots v_{\ell'}'$ in $G$, for a fixed vertex $\tilde{v}$ from the preimage of their common start vertex $v_0=v_0'$ the unique walks $\tilde{\alpha}=\tilde{v}_0\ldots\tilde{v}_\ell$ with $\tilde{v}_0=\tilde{v}$ and $p(\tilde{v}_i)=v_i$ and $\tilde{\beta}=\tilde{v}_0'\ldots\tilde{v}_{\ell'}'$ with $\tilde{v}_0'=\tilde{v}$ and $p(\tilde{v}_i')=v_i'$ are homotopic as well. In particular, they have the same end vertex $\tilde{v}_\ell=\tilde{v}_{\ell'}'$. \end{lem}
\begin{proof}
As homotopy is defined by a finite sequence of elementary moves, it suffices to show that an elementary move in the image implies an elementary move in the preimage.
Thus, let $\alpha=v_0\ldots v_\ell$ be a walk in $G$ and let $\tilde{\alpha}=\tilde{v}_0\ldots\tilde{v}_\ell$ be from its preimage with $p(\tilde{v}_i)=v_i$.
Let $\beta$ be reached from $\alpha$ by inserting a vertex $v^*$ and possibly $v_i$ again between $v_{i}$ and $v_{i+1}$ for some $i\in \{0,\ldots,\ell-1\}.$
As lifting a walk is done vertex by vertex from start to end, the lift of $\beta$ begins with the vertices $\tilde{v}_0$ to $\tilde{v}_{i}$.
As the restriction of $p$ to the closed neighbourhood of each vertex is an isomorphism, the lift of $\beta$ starting in $\tilde{v}_0$ still has $\tilde{v}_{i+1}$ as the preimage of $v_{i+1}$, and consequently the lift of $\beta$ agrees with that of $\alpha$ in all following vertices.
Thus, the lift of $\beta$ arises from the lift of $\alpha$ by inserting a vertex $\tilde{v}^*$ and possibly $\tilde{v}_i$ between $\tilde{v}_{i}$ and $\tilde{v}_{i+1}$, which is an elementary move. For the elementary moves that remove vertices, exchange $\alpha$ and $\beta$. \end{proof}
Next, we show that every connected graph has a universal triangular cover. The proof of the following lemma is influenced by a combination of \cite[Theorem 2.5, 2.8, and 3.6]{rotman1973covering}.
\begin{lem}\label{lem:existence_simply_connected_cover}
For every connected graph $G$, there is a universal triangular covering map $p\colon \tilde G \to G$, i.\,e.,\ a triangular covering map with a triangularly simply connected graph~$\tilde{G}$. \end{lem}
\begin{proof}
We give a construction for a graph $\tilde{G}$ and a map $p$ and we show that $p$ is in fact a triangular covering map, that $\tilde{G}$ is connected and that $\tilde{G}$ is triangularly simply connected.
\ul{Construction of $\tilde G$ and $p$:} We fix a vertex $v$ of $G$. For each walk $\alpha$, we denote by $[\alpha]$ its homotopy class, i.\,e.,\ the set of walks that can be reached from $\alpha$ by a finite sequence of elementary moves. A walk $\beta$ is called a continuation of a walk $\alpha$ if $\beta$ arises from $\alpha$ by appending exactly one vertex to its end. Now we can define the graph $\tilde{G}$ by
\begin{align*}
V(\tilde{G})&=\{[\alpha]\mid \alpha \text{ is a walk in $G$ starting at vertex } v\}\\
E(\tilde{G})&=\{[\alpha][\beta]\mid \beta \text{ is a continuation of }\alpha \}
\end{align*}
Note that $[\alpha][\beta]\in E(\tilde{G})$ does not imply that $\beta$ is a continuation of $\alpha$, but
there is a $\beta'\in [\beta]$ such that $\beta'$ is a continuation of $\alpha$.
We define $$p\colon \tilde{G}\to G, [\alpha]\mapsto \fin(\alpha),$$ in which $\fin(\alpha)$ is the end vertex of $\alpha$. The map $p$ is well defined as homotopic walks have the same start and end vertex.
\ul{Triangular covering map:} For an edge $[\alpha][\beta]\in E(\tilde{G})$, let without loss of generality $\beta$ be a continuation of $\alpha$. Thus, the end vertices of the two walks are adjacent and $p$ is a graph homomorphism. Next we show that the restriction of $p$ to neighbourhoods is bijective. Thus, let $[\alpha_w]$ be a class of walks from $v$ to some vertex $w$. As noted above, the neighbourhood of $[\alpha_w]$ consists of the classes of continuations of $\alpha_w$ to the neighbours of $w$. In particular, the restriction of $p$ to the neighbourhoods of $[\alpha_w]$ and $w$, respectively, is bijective. Now let $\alpha_x$ and $\alpha_y$ be the continuations of $\alpha_w$ by two distinct neighbours $x$ and $y$ of $w$. As we have already shown that the adjacency of $[\alpha_x]$ and $[\alpha_y]$ implies the adjacency of $x$ and $y$, it remains to show the reverse. Thus, let $x$ and $y$ be adjacent. Hence, we can construct the walk $\alpha_y'$ as the continuation of $\alpha_x$ by the vertex $y$; thus, $[\alpha_x]$ and $[\alpha_y']$ are adjacent. Since
$\alpha_y'$ is reached from $\alpha_y$ by the elementary move of inserting $x$ between $w$ and $y$, they are homotopic and thus also $[\alpha_y]$ is adjacent to $[\alpha_x]$.
\ul{Connectivity:} We show that every vertex $[\alpha]$ is connected to the trivial walk $\alpha_v$ that consists only of the vertex $v$. Thus, let $\alpha$ be any walk in $G$. The vertices $[\alpha_v]$ and $[\alpha]$ are connected by the walk $[\beta_0]\ldots[\beta_\ell]$ in $\tilde{G}$, where $\ell$ is the length of $\alpha$
and $\beta_i$ is the initial subwalk of length $i$ of $\alpha$.
\ul{Triangular simple connectivity:} For a closed walk $[\alpha_0]\ldots[\alpha_{\ell}]$ with $[\alpha_0]=[\alpha_\ell]$ in $\tilde{G}$, we can assume without loss of generality that $\alpha_{i}$ is a continuation of $\alpha_{i-1}$ for each $i\in \{1,\ldots,\ell\}.$ Furthermore, we can assume that $\alpha_0$ is the trivial walk, as all the walks $\alpha_0,\ldots,\alpha_{\ell}$ coincide with $\alpha_0$ on their initial subwalks anyway. We prove that the closed walk $[\alpha_0]\ldots[\alpha_{\ell}]$ and the trivial walk $[\alpha_0]$ are homotopic.
As $\alpha_0$ and $\alpha_{\ell}$ are homotopic, there is a finite sequence of elementary moves that morphs $\alpha_{\ell}$ into $\alpha_0$.
To each walk $\alpha'$ in $G$ that occurs in that homotopy between $\alpha_0$ and $\alpha_\ell$, we associate the walk $[\alpha'_0]\ldots[\alpha'_{\ell'}]$ where $\ell'$ is the length of $\alpha'$ and $\alpha'_i$ is the initial subwalk of length $i$ of $\alpha'$. This is a walk by construction and it fulfils $\alpha'_0=\alpha_0$ and $\alpha'_{\ell'}=\alpha'$. This way, we associate the final (trivial) walk $\alpha_0$ to the trivial walk $[\alpha_0]$.
If the walks $\alpha'$ and $\alpha''$ are connected by an elementary move in $G$, their associated walks in $\tilde{G}$ are connected by the corresponding elementary move in the following way. A triangle insertion that inserts $v^*$ after the $i$-th vertex of $\alpha'$ corresponds to inserting the class of the continuation of $\alpha'_i$ by $v^*$ and changing the representatives of the subsequent classes to those in which $v^*$ is inserted at the same position. The other elementary moves work analogously.
\end{proof}
In the next lemma, we show that universal triangular covers are in fact universal objects. The proof is a combination of special cases from the proofs of \cite[Theorem 3.2 and Theorem 3.3]{rotman1973covering}.
\begin{lem}\label{lem:simple_connected_is_universal}
The universal triangular covering map $p\colon \tilde{G}\to G$ fulfils the following universal property:
for each triangular covering map $q\colon \bar{G}\to G$ there exists a triangular covering map $\tilde{q}\colon \tilde{G}\to \bar{G}$ such that $p=q\circ \tilde{q}$ (see the commuting diagram in \cref{fig:commdig}).
Furthermore,
for any pair of fixed vertices $\tilde{v}\in\tilde{G}$ and $\bar{v}\in \bar{G}$ such that $p(\tilde{v})=q(\bar{v})$ we get a unique triangular covering map $\tilde{q}_{\tilde{v},\bar{v}}\colon \tilde{G}\to \bar{G}$ with $p=q\circ \tilde{q}_{\tilde{v},\bar{v}}$ and $\tilde{q}_{\tilde{v},\bar{v}}(\tilde{v})=\bar{v}$. \end{lem}
\begin{figure}
\caption{The commuting diagram depicting the
property from \cref{lem:simple_connected_is_universal}.}
\label{fig:commdig}
\end{figure}
\begin{proof}
Let $p\colon \tilde{G}\to G$ be a triangular covering map such that $\tilde{G}$ is triangularly simply connected and let $q\colon \bar{G}\to G$ be any triangular covering map.
We fix a vertex $v\in V(G)$ as well as vertices $\tilde{v}\in V(\tilde{G})$ and $\bar{v}\in V(\bar{G})$ that are in the preimage of $v$ under $p$ and $q$, respectively.
We construct $\tilde{q}_{\tilde{v},\bar{v}}$ from $p$ and $q$ and show that it is in fact a well-defined triangular covering map.
\ul{Construction of $\tilde{q}_{\tilde{v},\bar{v}}$:}
For each $\tilde{u}\in V(\tilde{G})$, we choose a walk $\alpha_{\tilde{v},\tilde{u}}$ from $\tilde{v}$ to $\tilde{u}$. The image of $\alpha_{\tilde v,\tilde{u}}$ under $p$ is a walk, which we call $\beta_{\tilde{u}}$, from $p(\tilde{v})$ to $p(\tilde{u})$. As $p(\tilde{v})=v=q(\bar{v})$, by the unique walk lifting property, there is exactly one walk $\alpha_{\bar{v},\bar{u}}$ starting at $\bar{v}$ that is mapped to $\beta_{\tilde{u}}$ by $q$. We define $\tilde{q}_{\tilde{v},\bar{v}}(\tilde{u})$ to be the end vertex $\bar{u}$ of $\alpha_{\bar{v},\bar{u}}$.
\ul{Well-Definedness:} We need to show that $\tilde{q}_{\tilde{v},\bar{v}}(\tilde{u})$ is independent of the choice of the walk $\alpha_{\tilde{v},\tilde{u}}$. Thus, let $\alpha'_{\tilde{v},\tilde{u}}$ be a different walk from $\tilde{v}$ to $\tilde{u}$. Its image under $p$ is called $\beta'_{\tilde{u}}$ which has the same start and end vertices as $\beta_{\tilde{u}}$. As $\tilde{G}$ is triangularly simply connected, the walks $\alpha_{\tilde{v},\tilde{u}}$ and $\alpha'_{\tilde{v},\tilde{u}}$ are homotopic and, consequently, so are $\beta_{\tilde{u}}$ and $\beta'_{\tilde{u}}$. By
\cref{lem:homotopiclifts}, also the preimages under $q$, which are called $\alpha_{\bar{v},\bar{u}}$ and $\alpha'_{\bar{v},\bar{u}}$, are homotopic and, thus, have the same end vertex, implying that $\tilde{q}_{\tilde{v},\bar{v}}$ is well defined. Additionally, $p=q\circ\tilde{q}_{\tilde{v},\bar{v}}$ holds by construction.
\ul{Homomorphy:} Let $\tilde{x},\tilde{y}$ be adjacent vertices in $\tilde{G}$.
Let $\alpha_{\tilde{v},\tilde{y}}$ be a walk from $\tilde{v}$ to $\tilde{y}$ such that $\tilde{x}$ is its penultimate vertex.
Via the same construction as above, we obtain a walk $\alpha_{\bar{v},\bar{y}}$ such that its penultimate vertex $\bar{x}$ fulfils $q(\bar{x})=p(\tilde{x})$.
Consequently, $\tilde{q}_{\tilde{v},\bar{v}}(\tilde{x})=\bar{x}$ and $\tilde{q}_{\tilde{v},\bar{v}}(\tilde{y})=\bar{y}$ are adjacent and thus $\tilde{q}_{\tilde{v},\bar{v}}$ is a graph homomorphism.
\ul{Triangular covering map:} Let $\tilde{u}$ be a vertex of $\tilde{G}$ and let $u=p(\tilde{u})$ and $\bar{u}=\tilde{q}_{\tilde{v},\bar{v}}(\tilde{u})$ be its images. As $p\vert_{N[\tilde{u}]}\colon N[\tilde{u}]\to N[u]$ and $q\vert_{N[\bar{u}]}\colon N[\bar{u}]\to N[u]$ are isomorphisms, so is $\tilde{q}_{\tilde{v},\bar{v}}\vert_{N[\tilde{u}]}=q\vert_{N[\bar{u}]}^{-1}\circ p\vert_{N[\tilde{u}]}$.
\ul{Uniqueness of $\tilde{q}_{\tilde{v},\bar{v}}$:} Let $\tilde{q}\colon \tilde{G}\to \bar{G}$ be any triangular covering map such that $p=q\circ\tilde{q}$ and $\tilde{q}(\tilde{v})=\bar{v}$. With the definitions from above, both the image of $\alpha_{\tilde{v},\tilde{u}}$ under $\tilde{q}$ and $\alpha_{\bar{v},\bar{u}}$ are lifts of the walk $\beta_{\tilde{u}}$ and they share the start vertex $\bar{v}$. By the unique walk lifting property, they are equal and so is their end vertex, implying
$\tilde{q}(\tilde{u})=\bar{u}=\tilde{q}_{\tilde{v},\bar{v}}(\tilde{u})$. \end{proof}
\begin{lem}\label{lem:universal_property_unique}
If for a graph $G$ there are two graphs $\tilde{G}$ and $\bar{G}$ and two triangular covering maps $p\colon \tilde{G}\to G$ and $q:\bar{G}\to G$ such that $p$ and $q$ both fulfil the universal property from \cref{lem:simple_connected_is_universal}, $\tilde{G}$ and $\bar{G}$ are isomorphic. \end{lem}
\begin{proof}
Let $p\colon \tilde{G}\to G$ and $q\colon \bar{G}\to G$ be two triangular covering maps which both fulfil the universal property. Furthermore, let $\tilde{v}\in V(\tilde{G})$ and $\bar{v}\in V(\bar{G})$ be chosen such that $p(\tilde{v})=q(\bar{v})$. By the universal properties, there are (unique) triangular covering maps $\tilde{p}\colon\bar{G}\to \tilde{G}$ and $\tilde{q}\colon \tilde{G}\to \bar{G}$ such that $p=q\circ\tilde{q}$, $\tilde{q}(\tilde{v})=\bar{v}$, $q=p\circ\tilde{p}$, and $\tilde{p}(\bar{v})=\tilde{v}$. Consequently, $p=p\circ \tilde{p}\circ\tilde{q}$ and $(\tilde{p}\circ\tilde{q})(\tilde{v})=\tilde{v}$. As the identity map $id\colon \tilde{G}\to\tilde{G}$ is a triangular covering map that fulfils $p=p\circ id$ and $id(\tilde{v})=\tilde{v}$, we know by the uniqueness of the universal property of $p$ that $\tilde{p}\circ\tilde{q}=id$, which implies that $\tilde{q}\colon \tilde{G}\to \bar{G}$ is an isomorphism. \end{proof}
\begin{theo}\label{universal_corver_exandunique}
Every connected graph has a universal triangular cover, which is unique up to isomorphism. \end{theo}
\begin{proof}
By \cref{lem:existence_simply_connected_cover}, the graph $G$ has a universal triangular cover.
Let $p\colon \tilde{G}\to G$ and $q\colon \bar{G}\to G$ be two universal triangular covering maps.
By applying \cref{lem:simple_connected_is_universal}, they both have the universal property. By \cref{lem:universal_property_unique}, the universal triangular covers are isomorphic. \end{proof}
Now we can look at the universal triangular cover through the lens of quotient graphs by using Galois covering maps. We reprove the following lemma from \cite{BAUMEISTER2022112873} using only basic notions.
\begin{lem}\label{deck_trafo_group_galois}
A universal triangular covering map $p\colon \tilde{G}\to G$ is Galois with $\Gamma\coloneqq \{\gamma\in \Aut(\tilde{G})\mid p\circ \gamma=p\}$, which is called the \textbf{deck transformation group} of $p$. Moreover, it holds that $(k^n\tilde{G})/\Gamma
\cong k^n G$.
\end{lem}
\begin{proof}
As each $\gamma\in \Gamma$ fulfils $p\circ \gamma=p$, the group $\Gamma$ acts on every vertex preimage of $p$ individually. Thus, it suffices to show that
for each pair of vertices $\tilde{v},\tilde{w}$ with $p(\tilde{v})=p(\tilde{w})$ there is a $\gamma\in \Gamma$ such that $\gamma(\tilde{v})=\tilde{w}$.
If we apply \cref{lem:simple_connected_is_universal} with $q=p$, we get a triangular covering map $\tilde{q}_{\tilde{v},\tilde{w}}$ which maps $\tilde{v}$ to $\tilde{w}$ and which is an isomorphism by
\cref{universal_corver_exandunique}, thus $\gamma=\tilde{q}_{\tilde{v},\tilde{w}}$ fulfils the condition.
As $p$ is a Galois covering map, by \cite[Proposition 3.2]{larrion2000locally} so is $p_{k^n}$. Consequently, it holds that $(k^n\tilde{G})/\Gamma
\cong k^n G$.
\end{proof}
\end{document} | arXiv |
\begin{document}
\title{Convergence of Probability Measures and Markov Decision Models with Incomplete Information}
\begin{center} Eugene~A.~Feinberg \footnote{Department of Applied Mathematics and Statistics,
Stony Brook University, Stony Brook, NY 11794-3600, USA, [email protected]},\ Pavlo~O.~Kasyanov\footnote{Institute for Applied System Analysis, National Technical University of Ukraine ``Kyiv Polytechnic Institute'', Peremogy ave., 37, build, 35, 03056, Kyiv, Ukraine,\ [email protected].},\ and Michael~Z.~Zgurovsky\footnote{National Technical University of Ukraine ``Kyiv Polytechnic Institute'', Peremogy ave., 37, build, 1, 03056, Kyiv, Ukraine,\
[email protected] }\\
\end{center}
\centerline{\emph{This article is dedicated to the 80th birthday of Academician Albert Nikolaevich Shiryaev}}
\begin{abstract} This paper deals with three major types of convergence of probability measures on metric spaces: weak convergence, setwise convergence, and convergence in the total variation. First, it describes and compares necessary and sufficient conditions for these types of convergence, some of which are well-known, in terms of convergence of probabilities of open and closed sets and, for the probabilities on the real line, in terms of convergence of distribution functions. Second, it provides criteria for weak and setwise convergence of probability measures and continuity of stochastic kernels in terms of convergence of probabilities of sets from a base of the topology generated by the metric. Third, it provides applications to control of Partially Observable Markov Decision Processes and, in particular, to Markov Decision Models with incomplete information. \end{abstract}
\section{Introduction} \label{S1} This paper deals with convergence of probability measures and relevant applications to control of stochastic systems with incomplete state observations. Convergence of probability measures and control of stochastic systems under incomplete information are among the areas to which Albert Nikolayevich Shiryaev has made fundamental contributions. In particular, convergence of probability measures and limit theorems for stochastic processes were studied in his joint papers with his distinguished students Yuri Mikhailovich
Kabanov and Robert Shevilevich
Liptser (e.g., \cite{KLSh}) and in his monograph with Jean Jacod~\cite{JSh}. Control of stochastic processes with incomplete information was the major topic of his two influential papers \cite{Sh1, Sh2}, and this topic is related to his monograph with Liptser \cite{LSh} on statistics of stochastic processes.
In Section~\ref{S2} of this paper we describe three major types of convergence of probability measures defined on metric spaces: weak convergence, setwise convergence, and convergence in the total variation. In addition to the definitions, we provide two groups of mostly known results: characterizations of these types of convergence via convergence of probability measures of open and closed sets, and, for probabilities on a real line,
via convergence of distribution functions. In Section~\ref{S3} we describe criteria for weak and setwise convergences in terms of convergence of probabilities of the elements of a countable base of the topology. Section~\ref{3A} deals with continuity of transition probabilities. In particular, Theorem~\ref{mainthkern} describes sufficient conditions for a probability measure, defined on a product of two spaces and depending on a parameter, to have a transition probability satisfying certain continuity properties. This result can be interpreted as a sufficient condition for continuity in Bayes's formula.
Section~\ref{S4} describes recent results on optimization of Partially Observable Markov Decision Processes (POMDPs) from Feinberg et al.~\cite{FKZ} as well as new results. Section~\ref{S5} describes an application of the results from Sections~\ref{3A} and \ref{S4} to a particular class of POMDPs that we call Markov Decision Models with Incomplete Information ({MDMIIs}). The difference between a POMDP and an MDMII is that for a POMDP the states of the system and observations are related via a stochastic kernel, called an observation stochastic kernel, while for an MDMII the state of the system is a vector, consisting of $(m+n)$ coordinates, of which $m$ coordinates are observable and $n$ coordinates are not observable. MDMIIs were studied mainly in early publications including Aoki~\cite{Ao}, Dynkin~\cite{Dy}, Shiryaev~\cite{Sh2}, Hinderer~\cite{Hi}, Savarigi and Yoshikava~\cite{SY}, Rhenius~\cite{Rh}, Rieder~\cite{Ri}, Yushkevich~\cite{Yu}, Dynkin and Yushkevich~\cite{DY}, and B\"auerle and Rieder~\cite{BR}, while POMDPs were studied by Bertsekas and Shreve~\cite{BS}, Hern\'andez-Lerma~\cite{HL}, and in many later publications.
Feinberg et al.~\cite{FKZ} described sufficient conditions for the existence of optimal policies, validity of optimality equations, and convergence of value iterations to optimal values for POMDPs
with standard Borel state, action, and observation spaces and for MDMIIs
with standard Borel state and action spaces; see also conference and seminar proceedings \cite{FKZ1, FKZ2}. In both cases, the goal is
either to minimize the expected total costs, with the one-step cost
function being nonnegative, or to minimize the expected total discounted cost, with the one-step cost function being bounded below.
For POMDPs these sufficient conditions
are: $K$-inf-compactness of the cost function, weak continuity of
the transition stochastic kernel, and continuity in the total variation of
the observation stochastic kernel. These results are described in Section~\ref{S4}, as well as sufficient conditions from Feinberg et al.~\cite{FKZ} for weak continuity of transition probabilities for a COMDP in terms of the transition function $H$ in the filtering equation~(\ref{3.1}). In this paper we introduce sufficient conditions in terms of joint distributions of posterior distributions and observations; see Theorem~\ref{teor:Rtotvar}. The notion of $K$-inf-compactness of a function defined on a graph of
a set-valued map was introduced in Feinberg et al.~\cite{FKN}.
Though an MDMII is a particular case of
a
POMDP, there is no observation stochastic kernel in the definition of an
MDMII. However, the observation stochastic kernel can be defined for an MDMII in a natural way, and this definition
transforms an MDMII into a POMDP, but in this
POMDP the defined observation stochastic kernel is not continuous in the
total variation. Feinberg et al.~\cite{FKZ} described additional equicontinuity
conditions on the stochastic kernels of MDMIIs, under which optimal
policies exist, optimality equations hold, and value iterations converge to optimal values. By using results from Sections~\ref{3A} and \ref{S4}, in Section~\ref{S5}
we strengthen the results from Feinberg et al.~\cite{FKZ} on MDMIIs by providing weaker assumptions
on transition probabilities than the assumptions introduced in Feinberg et al.~\cite{FKZ}.
\section{Three types of convergence of probability measures}\label{S2} Let $\mathbb{S}$ be a metric space and ${\mathcal B}(\mathbb{S})$ be its Borel $\sigma$-field, that is, the $\sigma$-field generated by all open subsets of the metric space $\mathbb{S}$. For $S\in\mathcal{B}( \mathbb{S})$ denote by ${\mathcal B}(S)$ the $\sigma$-field whose elements are intersections of $S$ with elements of ${\mathcal B}(\mathbb{S})$. Observe that $S$ is a metric space with the same metric as on $\mathbb{S}$, and ${\mathcal B}(S)$ is its Borel $\sigma$-field. For a metric space $\mathbb{S}$, denote by $\mathbb{P}(\mathbb{S})$ the \textit{set of probability measures} on $(\mathbb{S},{\mathcal B}(\mathbb{S})).$ A sequence of probability measures $\{P_n\}_{n=1,2,\ldots}$ from $\mathbb{P}(\mathbb{S})$ \textit{converges weakly (setwise)} to $P\in\mathbb{P}(\mathbb{S})$ if for any bounded continuous (bounded Borel-measurable) function $f$ on $\mathbb{S}$ \[\int_\mathbb{S} f(s)P_n(ds)\to \int_\mathbb{S} f(s)P(ds) \qquad {\rm as \quad }n\to\infty. \] We write $P_n\Wc P$ ($P_n\Sc P$) if the sequence $\{P_n\}_{n=1,2,\ldots}$ from $\mathbb{P}(\mathbb{S})$ converges weakly (setwise) to $P\in\mathbb{P}(\mathbb{S}).$ The definition of Lebesgue-Stieltjes integrals implies that $P_n\Sc P$ if and only if $P_n(E)\to P(E)$ for each $E\in{\cal B}(\mathbb{S})$ as $n\to\infty.$
The following two theorems are well-known.
\begin{theorem}\label{t1} {\rm (Shiryaev~\cite[Theorem 1, p. 311]{Sh}).} The following statements are equivalent:
(i) $P_n\Wc P;$
(ii) $\liminf_{n\to\infty} P_n(\mathcal{O})\ge P(\mathcal{O})$ for each open subset $\mathcal{O}\subseteq\mathbb{S};$
(iii) $\limsup_{n\to\infty} P_n(C)\le P(C)$ for each closed subset $C\subseteq \mathbb{S}.$ \end{theorem}
Let $\mathbb{R}^1$ be the real line with the Euclidean metric. For $P,P_n\in \mathbb{P}(\mathbb{R}^1)$ define the distribution functions $F(x)=P\{(-\infty,x] \}$ and $F_n(x)=P_n\{(-\infty,x] \},$ $x\in\mathbb{R}^1.$ \begin{theorem}\label{t2} {\rm (Shiryaev~\cite[Theorem 2, p. 314]{Sh}).} For $\mathbb{S}=\mathbb{R}^1$ the following statements are equivalent:
(i) $P_n\Wc P;$
(ii) $F_n(x)\to F(x)$ for all points $x\in\mathbb{R}^1$ of continuity of the distribution function $F$. \end{theorem}
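The role of the continuity points of $F$ in statement (ii) can be illustrated by a simple example. Let $P_n$ be the probability measure concentrated at the point $1/n$ and $P$ be the probability measure concentrated at the point $0$. Then $P_n\Wc P$ and $F_n(x)\to F(x)$ for each $x\ne 0$, while $F_n(0)=0$ for all $n=1,2,\ldots$ and $F(0)=1$; thus the convergence of the distribution functions fails only at the point $x=0$, where $F$ is discontinuous.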
The following theorem provides results for setwise convergence in the same spirit as those that Theorem~\ref{t1} states for weak convergence.
\begin{theorem}\label{t3} The following statements are equivalent:
(i) $P_n\Sc P;$
(ii) $\lim_{n\to\infty} P_n(\mathcal{O})= P(\mathcal{O})$ for each open subset $\mathcal{O}\subseteq\mathbb{S};$
(iii) $\lim_{n\to\infty} P_n(C)= P(C)$ for each closed subset $C\subseteq \mathbb{S}.$ \end{theorem} \begin{proof} If $A$ is open (closed) then its complement $A^c$ is closed (open), and $Q(A^c)=1-Q(A)$ for each $Q\in\mathbb{P}(\mathbb{S}).$ Thus statements (ii) and (iii) are equivalent. We prove the equivalence of (i) and (iii). Obviously, (i) implies (iii). According to Billingsley \cite[Theorem~1.1]{Bil} or Bogachev \cite[Theorem 7.1.7]{bogachev}, any probability measure $P$ on a metric space $\mathbb{S}$ is regular, that is, for each $B\in \mathcal{B}(\mathbb{S})$ and for each $\varepsilon>0$ there exist a closed subset $C\subseteq\mathbb{S}$ and an open subset $\mathcal{O}\subseteq \mathbb{S}$ such that $C\subseteq B\subseteq \mathcal{O}$ and $P(\mathcal{O}\setminus C)<\varepsilon$.
Fix arbitrary $B\in \mathcal{B}(\mathbb{S})$
and $\varepsilon>0$. Since $P_n(\mathcal{O})\to P(\mathcal{O})$ and $P_{n}(C)\to P (C)$, there exists $N=1,2,\ldots,$ such that $|P_n(\mathcal{O})-
P(\mathcal{O})|<\varepsilon$ and $|P_n(C)-P (C)|<\varepsilon$ for any $n= N,N+1,\ldots$. Therefore, $P_n(B)-P(B)\le P_n(\mathcal{O})- P(B)< \varepsilon + P(\mathcal{O}\setminus C)<2\varepsilon$, and $P(B)-P_n(B)\le P(B)-P_n(C)< \varepsilon+P(\mathcal{O}\setminus C)<2\varepsilon$, for each $n=N,N+1,\ldots$. Since $\varepsilon>0$ is arbitrary, the sequence $\{P_n(B)\}_{n=1,2,\ldots} \subset [0,1]$ converges to $P(B)$ for any $B\in \mathcal{B}(\mathbb{S})$, that is, the sequence of probability measures $\{P_n\}_{n=1,2,\ldots}$ converges setwise to $P\in\mathbb{P}(\mathbb{S})$.
\end{proof}
According to Bogachev \cite[Theorem 8.10.56]{bogachev}, which is Pflanzagl's generalization of the Fichtenholz-Dieudonn\'e-Grothendieck theorem, the statement of Theorem~\ref{t3} holds for Radon measures. In view of Bogachev \cite[Theorem 7.1.7]{bogachev}, if $\mathbb{S}$ is complete and separable, then any probability measure on $(\mathbb{S},{\mathcal B}(\mathbb{S}))$ is Radon.
However, Theorem~\ref{t3} does not assume that $\mathbb{S}$ is either separable or complete.
If $P_n\Sc P$, where $P, P_n\in\mathbb{P}(\mathbb{R}^1)$ for all $n=1,2,\ldots,$ then $F_n(x) \to
F(x)$ and $F_n(x-)\to F(x-)$ for all $x\in\mathbb{R}^1.$ This is true because $F_n(x) = P_n((-\infty,x])\to P((-\infty,x])= F(x)$ and $F_n(x-) = P_n((-\infty,x))\to P((-\infty,x))= F(x-)$ as $n\to\infty.$ However, as the following example shows, the convergences $F_n(x) \to
F(x)$ and $F_n(x-)\to F(x-)$ for all $x\in\mathbb{R}^1$ do not imply $P_n\Sc P.$
\begin{example}\label{exa:DFS}(Convergences $F_n(x) \to
F(x)$ and $F_n(x-)\to F(x-)$ $\forall x\in\mathbb{R}^1$ do not imply $P_n\Sc P$). {\rm Let \[ F_0(x):=\left\{ \begin{array}{ll} 0,& x<0;\\ x, & 0\le x \le 1;\\ 1, & x>1; \end{array} \right. F_{n+1}(x):=\left\{ \begin{array}{ll} \frac12F_n(3x),& x<\frac13;\\ \frac12, & \frac13 \le x\le \frac23;\\ \frac12F_n(3x-2), & x>\frac23; \end{array} \right. F(x):=\left\{ \begin{array}{ll} 0,& x<0;\\ C(x), & 0 \le x\le 1;\\ 1, & x>1; \end{array} \right. \] where $C(x)$ is the Cantor function and $n=0,1,\ldots\ .$ Note that $F(x)$ and $F_n(x)$, $n=0,1,\ldots,$ are continuous functions and \[
\max_{x\in\mathbb{R}^1}\left|F(x)-F_n(x) \right|\le 2^{1-n}\max_{x\in\mathbb{R}^1}\left|F_1(x)-F_0(x) \right|,\quad n=1,2,\ldots\ . \] Therefore, $F_n(x-)=F_n(x)\to
F(x)= F(x-)$ for each $ x\in\mathbb{R}^1$.
Denote by $C\subset [0,1]$ the Cantor set. Since the Lebesgue measure of the Cantor set $C$ equals zero and each distribution function $F_n$ has a bounded density, $P_n(C)=0$ for each $n=1,2,\ldots.$ Note that $P(C)=1$ because $P([0,1])=F(1)-F(0)=1$ and $P([0,1]\setminus C)=0$ since $[0,1]\setminus C$ is a union of disjoint open intervals, each of zero $P$-measure. Thus, the sequence of probability measures $\{P_n\}_{n=1,2,\ldots}$ does not converge setwise to the probability measure $P$.
$\Box$ } \end{example}
The third major type of convergence of probability measures, convergence in the total variation, can be defined via a metric $\rho_{tv}$ on $\mathbb{P}(\mathbb{S})$ called the distance in the total variation. For $P,Q\in \mathbb{P}(\mathbb{S}),$ define \begin{equation}\label{eqdefdist}
{\rho_{tv}}(P,Q):=\sup\left\{|\int_\mathbb{S} f(s)P(ds)-\int_\mathbb{S} f(s)Q(ds)| : \ f:\mathbb{S}\to [-1,1]\mbox{ is Borel-measurable} \right\}. \end{equation} A sequence of probability measures $\{P_n\}_{n=1,2,\ldots}$ from $\mathbb{P}(\mathbb{S})$ converges in the total variation to $P\in\mathbb{P}(\mathbb{S})$ if $\lim_{n\to\infty}{\rho_{tv}}(P_n,P)=0.$
In view of the Hahn decomposition, there exists $E\in{\cal B}(\mathbb{S})$ such that $(P-Q)(B)\ge 0$ for each $B\in {\cal B}(E)$ and $(P-Q)(B)\le 0$ for each $B\in {\cal B}(E^c).$ According to Shiryaev~\cite[p. 360]{Sh},
\begin{equation}\label{eq2shir}{\rho_{tv}}(P,Q)=P(E)-Q(E)+Q(E^c)-P(E^c)=2\sup \{|P(B)-Q(B)|:B\in{\cal B}(\mathbb{S})\}.\end{equation}
This implies that the supremum in (\ref{eqdefdist}) is achieved at the function $f(s)={\bf I}\{s\in E\}-{\bf I}\{s\in E^c\},$ and \begin{equation}\label{eqdefdist1}{\rho_{tv}}(P,Q)=\sup\left\{\int_\mathbb{S} f(s)P(ds)-\int_\mathbb{S} f(s)Q(ds) : \ f:\mathbb{S}\to \{-1,1\}\mbox{ is Borel-measurable} \right\}.\end{equation} Since $(P-Q)(\mathbb{S})=0$, (\ref{eq2shir}) also implies \begin{equation}\label{eq2shir1} {\rho_{tv}}(P,Q)=2P(E)-2Q(E)=2Q(E^c)-2P(E^c)= 2\max \{P(B)-Q(B):B\in{\cal B}(\mathbb{S})\}. \end{equation}
Consider the positive part $(P-Q)^+$ and negative part $(P-Q)^-$ of $(P-Q)$, that is, $(P-Q)^+(B) = (P-Q)(E\cap B)$ and $(P-Q)^-(B)=-(P-Q)(E^c\cap B)$ for all $B\in\mathcal{B}(\mathbb{S})$. Both $(P-Q)^+$ and $(P-Q)^-$ are nonnegative finite measures. As follows from (\ref{eq2shir1}), \begin{equation}\label{eq:shir3} {\rho_{tv}}(P,Q)=2(P-Q)^+(E)=2(P-Q)^-(E^c). \end{equation}
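For instance, if $P$ and $Q$ are the probability measures concentrated at points $a$ and $b$ of $\mathbb{S}$ with $a\ne b$, then taking $B=\{a\}$ in (\ref{eq2shir1}) yields ${\rho_{tv}}(P,Q)=2.$ In particular, if $a_n\to a$ and $a_n\ne a$ for all $n=1,2,\ldots,$ then the measures $P_n$ concentrated at the points $a_n$ converge weakly to the measure $P$ concentrated at the point $a$, while ${\rho_{tv}}(P_n,P)=2$ for all $n=1,2,\ldots;$ thus weak convergence does not imply convergence in the total variation.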
The statements of Theorem~\ref{t5}(i,ii) characterize convergence in the total variation via convergence of the values of the measures on open and closed subsets in $\mathbb{S}.$ In this respect, these statements are similar to Theorems~\ref{t1} and \ref{t3}, which provide characterizations for weak and setwise convergences. Formula~(\ref{eq2shir}) indicates that convergence in the total variation can be interpreted as uniform setwise convergence. The same interpretation follows from Theorems~\ref{t3} and \ref{t5}(i, ii). Theorem~\ref{t5}(iii, iv) indicates that convergence in the total variation can be also interpreted as uniform weak convergence.
\begin{theorem}\label{t5} The following equalities hold for $P,Q\in \mathbb{P}(\mathbb{S})$:
(i) ${\rho_{tv}}(P,Q)=2\sup\{|P(C)-Q(C)|:C\ {\rm is\ closed\ in}\ \mathbb{S}\}=2\sup\{P(C)-Q(C):C\ {\rm is\ closed\ in}\ \mathbb{S}\};$
(ii) ${\rho_{tv}}(P,Q)=2\sup\{|P(\mathcal{O})-Q(\mathcal{O})|:\mathcal{O}\ {\rm is\ open\ in}\ \mathbb{S}\}=2\sup\{P(\mathcal{O})-Q(\mathcal{O}):\mathcal{O}\ {\rm is\ open\ in}\ \mathbb{S}\};$
(iii) ${\rho_{tv}}(P,Q)=\sup\left\{\int_\mathbb{S} f(s)P(ds)-\int_\mathbb{S} f(s)Q(ds) : \ f:\mathbb{S}\to [-1,1]{\rm\ is\ continuous} \right\};$
(iv) ${\rho_{tv}}(P,Q)=\sup\left\{|\int_\mathbb{S} f(s)P(ds)-\int_\mathbb{S} f(s)Q(ds)| : \ f:\mathbb{S}\to [-1,1]{\rm\ is\ continuous} \right\}.$ \end{theorem} \begin{proof} (i) It is sufficient to show that \begin{equation}\label{eq:tv(i,ii)} {\rho_{tv}}(P,Q)\le 2\sup\{P(C)-Q(C):C\ {\rm is\ closed\ in}\ \mathbb{S}\}. \end{equation} Since $(P-Q)^+$ is a measure on a metric space, it is regular; Billingsley \cite[Theorem~1.1]{Bil} or Bogachev \cite[Theorem 7.1.7]{bogachev}. Thus, for $E\in \mathcal{B}(\mathbb{S})$ satisfying (\ref{eq:shir3}) and for each $\varepsilon>0$ there exists a closed subset $C\subseteq \mathbb{S}$ such that $C\subseteq E$ and $2(P-Q)^+(E\setminus C)<\varepsilon$.
Due to $C\subseteq E,$ the equality $(P-Q)(C)=(P-Q)^+(C)$ holds.
Therefore, in view of
(\ref{eq:shir3}), \[ {\rho_{tv}}(P,Q)< 2(P-Q)^+(C)+\varepsilon\le 2\sup\{P(C)-Q(C):C\ {\rm is\ closed\ in}\ \mathbb{S}\}+\varepsilon. \] Since $\varepsilon>0$ is arbitrary, inequality (\ref{eq:tv(i,ii)}) holds.
(ii) Since ${\rho_{tv}}(P,Q)={\rho_{tv}}(Q,P)$ and \[ \sup\{P(C)-Q(C):C\ {\rm is\ closed\ in}\ \mathbb{S}\}=\sup\{Q(\mathcal{O})-P(\mathcal{O}):\mathcal{O}\ {\rm is\ open\ in}\ \mathbb{S}\}, \] (i) implies (ii).
(iii) In view of (\ref{eqdefdist1}), it is sufficient to show that \begin{equation}\label{eq:tv(iii)} {\rho_{tv}}(P,Q)\le \sup\left\{\int_\mathbb{S} f(s)P(ds)-\int_\mathbb{S} f(s)Q(ds) : \ f:\mathbb{S}\to [-1,1]{\rm\ is\ continuous} \right\}. \end{equation}
Since the supremum in (\ref{eqdefdist}) is achieved at the function $f_{E,E^c}(s)={\bf I}\{s\in E\}-{\bf I}\{s\in E^c\},$ \begin{equation}\label{eq:tv(iii)(1)} {\rho_{tv}}(P,Q)=\int_\mathbb{S} f_{E,E^c}(s)(P-Q)(ds). \end{equation}
Since $(P-Q)^+$ and $(P-Q)^-$ are measures on a metric space, they are regular; Billingsley \cite[Theorem~1.1]{Bil} or Bogachev \cite[Theorem 7.1.7]{bogachev}. Thus, for $E, E^c\in \mathcal{B}(\mathbb{S})$ and for each $\varepsilon>0,$ there exist closed subsets $C_1,C_2\subseteq \mathbb{S}$ such that $C_1\subseteq E$, $C_2\subseteq E^c$, and $(P-Q)^+(E\setminus C_1)+(P-Q)^-(E^c\setminus C_2)<\varepsilon$. Therefore, \begin{equation}\label{eq:tv(iii)(2)} \int_\mathbb{S} f_{E,E^c}(s)(P-Q)(ds)\le \int_\mathbb{S} f_{C_1,C_2}(s)(P-Q)(ds)+\varepsilon, \end{equation} where $f_{C_1,C_2}(s)={\bf I}\{s\in C_1\}-{\bf I}\{s\in C_2\}$, $s\in \mathbb{S}$. Note that the restriction of $f_{C_1,C_2}$ on a closed subset $C_1\cup C_2$ in $\mathbb{S}$ is continuous. Since a metric space is a normal topological space, the Tietze-Urysohn-Brouwer extension theorem implies the existence of a continuous extension of $f_{C_1,C_2}$ on $\mathbb{S}$, that is, there is a continuous function $\tilde{f}_{C_1,C_2}:\mathbb{S}\to [-1,1]$ such that $\tilde{f}_{C_1,C_2}(s)=f_{C_1,C_2}(s)$ for any $s\in C_1\cup C_2$. Thus, \begin{equation}\label{eq:tv(iii)(3)} \int_\mathbb{S} f_{C_1,C_2}(s)(P-Q)(ds)\le \int_\mathbb{S} \tilde{f}_{C_1,C_2}(s)(P-Q)(ds)+\varepsilon. \end{equation}
According to (\ref{eq:tv(iii)(1)})--(\ref{eq:tv(iii)(3)}), for any $\varepsilon>0$ \[ {\rho_{tv}}(P,Q)\le \sup\left\{\int_\mathbb{S} f(s)P(ds)-\int_\mathbb{S} f(s)Q(ds) : \ f:\mathbb{S}\to [-1,1]{\rm\ is\ continuous} \right\}+2\varepsilon, \] which yields inequality (\ref{eq:tv(iii)}).
(iv) According to (iii) and the definition of ${\rho_{tv}}(P,Q)$, \[ \begin{aligned} &{\rho_{tv}}(P,Q)=\sup\left\{\int_\mathbb{S} f(s)P(ds)-\int_\mathbb{S} f(s)Q(ds) : \ f:\mathbb{S}\to [-1,1]{\rm\ is\ continuous} \right\}\le \\
&\sup\left\{|\int_\mathbb{S} f(s)P(ds)-\int_\mathbb{S} f(s)Q(ds)| : \ f:\mathbb{S}\to [-1,1]{\rm\ is\ continuous} \right\}\le {\rho_{tv}}(P,Q), \end{aligned} \] which implies (iv). \end{proof}
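Setwise convergence, in its turn, does not imply convergence in the total variation. For example, let $\mathbb{S}=[0,1]$ and let $P_n$ and $P$ be the probability measures with the densities $f_n(x)=1+\sin(2\pi n x)$ and $f(x)=1$, $x\in[0,1],$ with respect to the Lebesgue measure. By the Riemann--Lebesgue lemma, $P_n(B)=\int_B f_n(x)dx\to \int_B f(x)dx=P(B)$ for each $B\in\mathcal{B}([0,1])$, that is, $P_n\Sc P$. However, in view of (\ref{eqdefdist1}), ${\rho_{tv}}(P_n,P)=\int_0^1|\sin(2\pi n x)|\,dx=2/\pi$ for all $n=1,2,\ldots,$ and therefore the sequence $\{P_n\}_{n=1,2,\ldots}$ does not converge to $P$ in the total variation.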
For a function $f$ on $\mathbb{R}$, let $V(f)$ denote its total variation. Let $P_i,$ $i=1,2,$ be probability measures on $(\mathbb{R}^1,\mathcal{B}(\mathbb{R}^1)),$ and $F_i(x)=P_i\{(-\infty,x]\},$ $x\in\mathbb{R}^1,$ be the corresponding distribution functions. The following well-known statement characterizes convergence in the total variation in terms of convergence of distribution functions.
\begin{theorem}\label{t:totvar2} {\rm (Cohn \cite[Exercise~6, p.~137]{Cohn}).} ${\rho_{tv}}(P_1,P_2)=V(F_1-F_2)$ for all $P_1,P_2\in\mathbb{P}(\mathbb{R}^1).$ \end{theorem}
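As an illustration, let $P_1$ and $P_2$ be the uniform distributions on $[0,1]$ and $[0,\frac12]$ respectively. Then $F_1(x)-F_2(x)=-x$ for $x\in[0,\frac12]$, $F_1(x)-F_2(x)=x-1$ for $x\in[\frac12,1]$, and $F_1(x)-F_2(x)=0$ otherwise, so $V(F_1-F_2)=1$. On the other hand, the densities of $P_1$ and $P_2$ are ${\bf I}\{x\in[0,1]\}$ and $2\,{\bf I}\{x\in[0,\frac12]\}$, and ${\rho_{tv}}(P_1,P_2)=\int_0^1\left|1-2\,{\bf I}\{x\in[0,\frac12]\}\right|dx=1$, in agreement with Theorem~\ref{t:totvar2}.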
\section{Sufficient Conditions for Weak and Setwise Convergence}\label{S3}
\begin{lemma}\label{l:1} Let $\{P_n\}_{n=1,2,\ldots}$ be a sequence of probability measures from $\mathbb{P}(\mathbb{S})$ and $P\in\mathbb{P}(\mathbb{S})$. If for a measurable subset $B$ of $\mathbb{S}$ there is a countable sequence of measurable subsets $B_1,B_2,\ldots$ of $B$ such that:
(i) $B=\cup_{j=1}^\infty B_j,$
(ii) $\liminf_{n\to\infty}P_n(\cup_{j=1}^k B_j)\ge P(\cup_{j=1}^k B_j)$ for all $k=1,2,\ldots,$
\noindent then \begin{equation}\label{eq3.1nn}\liminf_{n\to\infty} P_n(B)\ge P(B).\end{equation} \end{lemma} \begin{proof} For an arbitrary $\epsilon>0$ consider an integer $k(\epsilon)$ such that $P(\cup_{j=1}^{k(\epsilon)} B_j)\ge P(B)-\epsilon.$ Then \[ \liminf_{n\to\infty} P_n(B)\ge \liminf_{n\to\infty} P_n(\cup_{j=1}^{k(\epsilon)} B_j)\ge P(\cup_{j=1}^{k(\epsilon)} B_j) \ge P(B)-\epsilon. \] Since $\epsilon>0$ is arbitrary, inequality (\ref{eq3.1nn}) holds.\end{proof}
\begin{corollary}\label{c:1} Let $\{P_n\}_{n=1,2,\ldots}$ be a sequence of probability measures from $\mathbb{P}(\mathbb{S})$ and $P\in\mathbb{P}(\mathbb{S})$. If for each open subset $\mathcal{O}$ of $\mathbb{S}$ there is a countable sequence of measurable subsets $B_1,B_2,\ldots$ of $\mathcal{O}$ such that:
(i) $\mathcal{O}=\cup_{j=1}^\infty B_j,$
(ii) $\liminf_{n\to\infty}P_n(\cup_{j=1}^k B_j)\ge P(\cup_{j=1}^k B_j)$ for all $k=1,2,\ldots,$
\noindent then $P_n\Wc P$. \end{corollary} \begin{proof} In view of Lemma~\ref{l:1}, $\liminf_{n\to\infty} P_n(\mathcal{O})\ge P(\mathcal{O})$ for all open subsets $\mathcal{O}$ of $\mathbb{S}.$ In view of Theorem~\ref{t1}, this is equivalent to $P_n\Wc P$. \end{proof}
\begin{theorem}\label{t:1} Let $\{P_n\}_{n=1,2,\ldots}$ be a sequence of probability measures from $\mathbb{P}(\mathbb{S})$ and $P\in\mathbb{P}(\mathbb{S})$. If the topology on $\mathbb{S}$ has a countable base $\tau_b,$ then $P_n\Wc P$ if and only if $\liminf_{n\to\infty}P_n(\mathcal{O}^*)\ge P(\mathcal{O}^*)$ for each finite union $\mathcal{O}^*=\cup_{i=1}^k {\mathcal{O}}_{i}$ with $\mathcal{O}_{i}\in\tau_b,$ $k=1,2,\ldots\ .$
\end{theorem} \begin{proof} Since $P_n\Wc P$ if and only if $\liminf_{n\to\infty}P_n(\mathcal{O})\ge P(\mathcal{O})$ for each open $\mathcal{O}\subseteq\mathbb{S},$ the necessity is obvious. The sufficiency follows from Corollary~\ref{c:1}, because any open subset $\mathcal{O}$ of $\mathbb{S}$ can be represented as $\mathcal{O}=\cup_{i=1}^\infty {\mathcal{O}}_{i}$ with $\mathcal{O}_{i}\in\tau_b,$ $i=1,2,\ldots\ .$ \end{proof}
Lemma~\ref{l:1} can be used to formulate the following criterion for setwise convergence.
\begin{lemma}\label{l:2} Let $\{P_n\}_{n=1,2,\ldots}$ be a sequence of probability measures from $\mathbb{P}(\mathbb{S})$ and $P\in\mathbb{P}(\mathbb{S})$. Then the following statements hold:
(i) If for a measurable subset $C$ of $\mathbb{S},$ both sets $B=C$ and $B=C^c$, where $C^c=\mathbb{S}\setminus C$ is the complement of $C,$ satisfy the conditions of Lemma~\ref{l:1}, then $P_n(C)\to P(C).$
(ii) If for each open subset $\mathcal{O}\subseteq\mathbb{S},$ both sets $B=\mathcal{O}$ and its complement $B=\mathcal{O}^c$ satisfy conditions (i) and (ii) of Lemma~\ref{l:1}, then $P_n\Sc P.$
\end{lemma} \begin{proof} (i) Lemma~\ref{l:1} implies that $\liminf_{n\to\infty} P_n(C)\ge P(C)$ and $\liminf_{n\to\infty} P_n(C^c)\ge P(C^c).$ Since $P$ and $P_n,$ $n=1,2,\ldots$ are probability measures, $\lim_{n\to\infty} P_n(C)= P(C).$ (ii) In view of (i), $P_n(\mathcal{O})\to P(\mathcal{O})$ for each open subset $\mathcal{O}$ of $\mathbb{S}.$ In view of Theorem~\ref{t3}, $P_n\Sc P.$ \end{proof}
For setwise convergence the following theorem states the conditions similar to the conditions of Theorem~\ref{t:1} for weak convergence.
\begin{theorem}\label{t:2} Let $\{P_n\}_{n=1,2,\ldots}$ be a sequence of probability measures from $\mathbb{P}(\mathbb{S})$ and $P\in\mathbb{P}(\mathbb{S})$. If the topology on $\mathbb{S}$ has a countable base $\tau_b,$ then $P_n\Sc P$ if and only if the following two conditions hold:
(i) $\liminf_{n\to\infty}P_n(\mathcal{O}^*)\ge P(\mathcal{O}^*)$ for each finite union $\mathcal{O}^*=\cup_{i=1}^ k {\mathcal{O}}_{i}$, where $\mathcal{O}_{i}\in\tau_b,$ $k=1,2,\ldots;$
(ii) each closed subset $B\subseteq \mathbb{S}$ satisfies conditions (i) and (ii) of Lemma~\ref{l:1}.
\end{theorem} \begin{proof} Let $\mathcal{O}$ be an arbitrary open subset of $\mathbb{S}.$ In view of (i), Theorem~\ref{t1} implies that $\liminf_{n\to\infty} P_n(\mathcal{O})\ge P(\mathcal{O}).$ In view of (ii), Lemma~\ref{l:1} implies that $\liminf_{n\to\infty} P_n(\mathcal{O}^c)\ge P(\mathcal{O}^c).$ Thus $\lim_{n\to\infty} P_n(\mathcal{O})= P(\mathcal{O}).$ Since $\mathcal{O}$ is an arbitrary open subset of $\mathbb{S},$ Theorem~\ref{t3} implies that $P_n\Sc P.$ \end{proof}
In some applications, it is more convenient to verify convergence of probabilities for intersections of events than for unions of events. The following lemma links the convergence of probabilities for intersections and unions of events.
\begin{lemma}\label{l:3} Let ${\cal L}=\{B_1,\ldots,B_N\}$ be a finite collection of measurable subsets of $\mathbb{S}.$ Then \[\lim_{n\to\infty} P_n(\cap_{B_i\in {\cal L}'}B_i)= P(\cap_{B_i\in {\cal L}'}B_i)\] for all the subsets ${\cal L}'\subseteq {\cal L}$ if and only if
\[\lim_{n\to\infty} P_n(\cup_{B_i\in {\cal L}'}B_i)= P(\cup_{B_i\in {\cal L}'}B_i)\] for all the subsets ${\cal L}'\subseteq {\cal L}.$ \end{lemma} \begin{proof} If the convergence holds for intersections, it holds for unions because of the inclusion-exclusion principle. If the convergence holds for unions, it holds for intersections because of the inclusion-exclusion principle and induction in the number of sets in $\cal L$.
\end{proof}
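To illustrate the argument, consider two sets $B_1,B_2\in{\cal L}$. The identity
\[
P_n(B_1\cup B_2)=P_n(B_1)+P_n(B_2)-P_n(B_1\cap B_2)
\]
shows that the convergence of $P_n(B_1)$, $P_n(B_2)$, and $P_n(B_1\cap B_2)$ to the corresponding values for $P$ implies $P_n(B_1\cup B_2)\to P(B_1\cup B_2)$, and the same identity yields the converse implication; the general case is obtained by induction on the number of sets.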
The following two statements follow from Corollary~\ref{c:1} and Theorem~\ref{t:1} respectively.
\begin{corollary}\label{c:2} Let $\{P_n\}_{n=1,2,\ldots}$ be a sequence of probability measures from $\mathbb{P}(\mathbb{S})$ and $P\in\mathbb{P}(\mathbb{S})$. If for each open subset $\mathcal{O}$ of $\mathbb{S}$ there is a sequence of measurable subsets $B_1,B_2,\ldots$ of $\mathcal{O}$ such that:
(i) $\mathcal{O}=\cup_{j=1}^\infty B_j,$
(ii) $\lim_{n\to\infty}P_n(\cap_{j=1}^k B_{i_j})= P(\cap_{j=1}^k B_{i_j})$ for all $\{B_{i_1},B_{i_2},\ldots,B_{i_k}\}\subseteq\{B_1,B_2,\ldots\},$ $k=1,2,\ldots,$
then $P_n\Wc P$.
\end{corollary} \begin{proof} In view of Lemma~\ref{l:3}, for each open subset $\mathcal{O}$ of $\mathbb{S}$ condition (ii) implies
that $\lim_{n\to\infty}P_n(\cup_{j=1}^k B_j)= P(\cup_{j=1}^k B_j)$ for all $k=1,2,\ldots,$ and according to Corollary~\ref{c:1} these equalities imply that $P_n\Wc P$. \end{proof} \begin{corollary}\label{cor:1(1space)} Let $\{P_{n}\}_{n=1,2,\ldots}$ be a sequence of probability measures from $\mathbb{P}(\mathbb{S})$ and $P\in\mathbb{P}(\mathbb{S})$. If the topology on $\mathbb{S}$ has a countable base $\tau_b$ such that $P_n(\mathcal{O})\to P(\mathcal{O})$ for each finite intersection $\mathcal{O}=\cap_{i=1}^ k {\mathcal{O}}_{i}$ with $\mathcal{O}_{i}\in\tau_b,$ $i=1,2,\ldots,k,$
then $P_n\Wc P$. \end{corollary} \begin{proof}
In view of Lemma~\ref{l:3}, $\lim_{n\to\infty}P_n(\mathcal{O}^*)= P(\mathcal{O}^*)$ for each finite union $\mathcal{O}^*=\cup_{i=1}^k {\mathcal{O}}_{i}$ with $\mathcal{O}_{i}\in\tau_b,$ $k=1,2,\ldots\ .$ Theorem~\ref{t:1} implies that $P_n\Wc P$. \end{proof}
The following example demonstrates that the assumptions of Corollary~\ref{cor:1(1space)} do not imply that $P_n\Sc P$. \begin{example}\label{ex2}{\rm Let $\mathbb{S}=\mathbb{R}^1$, $P$ be a deterministic measure concentrated at the point $a=\sqrt 2,$ and $P_n$ be deterministic measures concentrated at the points $a_n={\sqrt 2}+n^{-1},$ $n=1,2,\ldots \ .$ Since $a_n\to a,$ we have $P_n\Wc P$ as $n\to \infty.$
Let $\tau_b$ be the family consisting of the empty set, $\mathbb{R}^1,$ and all the open intervals on $\mathbb{R}^1$ with rational ends. Then $\tau_b$ is a countable base of the topology on $\mathbb{R}^1$ generated by the Euclidean metric. Observe that $\mathcal{O}_1\cap\mathcal{O}_2\in\tau_b$ for all $\mathcal{O}_1,\mathcal{O}_2\in\tau_b$, and $\lim_{n\to\infty} P_n((b_1,b_2))={\bf I}\{a\in (b_1,b_2)\}=P((b_1,b_2))$, for any rational $b_1<b_2$. Thus the assumptions of Corollary~\ref{cor:1(1space)} hold. However, of course, it is not true that $P_n\Sc P,$ because $P_n(\{a\} )=0$ for all $n=1,2,\ldots,$ but $P(\{a\})=1.$}
$\Box$ \end{example}
\begin{corollary}\label{c:3} Let $\{P_{n}\}_{n=1,2,\ldots}$ be a sequence of probability measures from $\mathbb{P}(\mathbb{S})$ and $P\in\mathbb{P}(\mathbb{S})$. If the topology on $\mathbb{S}$ has a countable base $\tau_b$ such that $P_n(\mathcal{O})\to P(\mathcal{O})$ for each finite intersection $\mathcal{O}=\cap_{i=1}^ k {\mathcal{O}}_{i}$ with $\mathcal{O}_{i}\in\tau_b,$ $i=1,2,\ldots,k,$ and, in addition, for any closed set $C\subseteq\mathbb{S}$ there is a sequence of measurable subsets $B_1,B_2,\ldots$ of $C$ such that $C=\cup_{j=1}^\infty B_j$ and condition (ii) of Corollary~\ref{c:2} holds,
then $P_n\Sc P$. \end{corollary} \begin{proof} Let $\mathcal{O}$ be an arbitrary open subset. In view of Corollary~\ref{cor:1(1space)}, the properties of the base $\tau_b$ imply that $P_n\Wc P$. Therefore \begin{equation}\label{eq3.2nn} \ilim_{n\to\infty} P_n(\mathcal{O})\ge P(\mathcal{O}).\end{equation} Let $C=\mathcal{O}^c.$ Condition (ii) of Corollary~\ref{c:2} and Lemma~\ref{l:3} imply that $\lim_{n\to\infty}P_n(\cup_{j=1}^k B_j)=P(\cup_{j=1}^k B_j)$ for all $k=1,2,\ldots.$ In view of Lemma~\ref{l:1}, \begin{equation}\label{eq3.3nn} \ilim_{n\to\infty} P_n(\mathcal{O}^c)\ge P(\mathcal{O}^c).\end{equation} Inequalities (\ref{eq3.2nn}) and (\ref{eq3.3nn}) imply that $\lim_{n\to\infty} P_n(\mathcal{O})= P(\mathcal{O}).$ Since $\mathcal{O}$ is an arbitrary open subset of $\mathbb{S}$, Theorem~\ref{t3} implies that $P_n\Sc P$. \end{proof}
\section{Continuity of Transition Probabilities}\label{3A}
For a Borel subset $S$ of a metric space $(\mathbb{S},\rho)$, where $\rho$ is a metric, consider the metric space $(S,\rho)$. A set $B$ is called open (closed, compact) in $S$ if $B\subseteq S$ and $B$ is open (closed, compact) in $(S,\rho)$. Of course, if $S=\mathbb{S}$, we omit ``in $\mathbb{S}$''. Observe that, in general, an open (closed, compact) set in $S$ may not be open (closed, compact). Open sets in $S$ form the topology on $S$ defined by the restriction of metric $\rho$ on $S$.
For metric spaces $\mathbb{S}_1$ and $\mathbb{S}_2$, a (Borel-measurable) \textit{stochastic kernel} (sometimes called transition probability) $R(ds_1|s_2)$ on $\mathbb{S}_1$ given $\mathbb{S}_2$ is a mapping $R(\,\cdot\,|\,\cdot\,):\mathcal{B}(\mathbb{S}_1)\times \mathbb{S}_2\to [0,1]$, such that $R(\,\cdot\,|s_2)$ is a probability measure on $\mathbb{S}_1$ for any $s_2\in \mathbb{S}_2$, and $R(B|\,\cdot\,)$ is a Borel-measurable function on $\mathbb{S}_2$ for any Borel set $B\in\mathcal{B}(\mathbb{S}_1)$. A stochastic kernel $R(ds_1|s_2)$ on $\mathbb{S}_1$ given $\mathbb{S}_2$ defines a Borel measurable mapping
$s_2\to R(\,\cdot\,|s_2)$ of $\mathbb{S}_2$ to the metric space $\mathbb{P}(\mathbb{S}_1)$ endowed with the topology of weak convergence. A stochastic kernel
$R(ds_1|s_2)$ on $\mathbb{S}_1$ given $\mathbb{S}_2$ is called
\textit{weakly continuous (setwise continuous, continuous in the total variation)}, if $R(\,\cdot\,|s^{(n)})$ converges weakly (setwise, in
the total variation) to $R(\,\cdot\,|s)$ whenever $s^{(n)}$ converges to $s$ in $\mathbb{S}_2$.
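For instance, let $\mathbb{S}_1=\mathbb{S}_2=\mathbb{R}^1$ and let $R(\,\cdot\,|s_2)$ be the probability measure concentrated at the point $s_2$. This stochastic kernel is weakly continuous, but it is neither setwise continuous nor continuous in the total variation: for $s^{(n)}=1/n\to 0$ we have $R(\{0\}|s^{(n)})=0$ for all $n=1,2,\ldots,$ while $R(\{0\}|0)=1$, and ${\rho_{tv}}(R(\,\cdot\,|s^{(n)}),R(\,\cdot\,|0))=2$.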
In the rest of this section, $\mathbb{S}_1$, $\mathbb{S}_2$ and $\mathbb{S}_3$ are Borel subsets of Polish (complete separable metric) spaces, and $P$ is a stochastic kernel on $\mathbb{S}_1\times\mathbb{S}_2$ given $\mathbb{S}_3$. The following statement follows from Corollary~\ref{cor:1(1space)}. As follows from Lemma~\ref{l:3}, the continuity of the probabilities of finite intersections in the condition of Corollary~\ref{teor:2} can be replaced with the assumption that the probabilities of finite unions are continuous.
\begin{corollary}\label{teor:2} If the topology on $\mathbb{S}_i$, $i=1,2$, has a countable base $\tau_b^i$
such that $P(\mathcal{O}_1\times\mathcal{O}_2|\,\cdot\,)$ is continuous on $\mathbb{S}_3$ for all finite intersections $\mathcal{O}_i=\cap_{j=1}^ N {\mathcal{O}}^{j}_i$ with $\mathcal{O}^{j}_i\in\tau_b^i,$ $j=1,2,\ldots,N,$ $i=1,2$, then the stochastic kernel $P$ on $\mathbb{S}_1\times\mathbb{S}_2$ given $\mathbb{S}_3$ is weakly continuous. \end{corollary}
\begin{proof}
Let $\tau_b^{1,2}:=\{\mathcal{O}'_1\times\mathcal{O}'_2:\, \mathcal{O}'_i\in\tau_b^i,\ i=1,2\}$. Note that $\tau_b^{1,2}$ is a countable base of the topology on $\mathbb{S}_1\times \mathbb{S}_2$ defined as the product of the topologies on $\mathbb{S}_1$ and $\mathbb{S}_2.$ Observe that $\cap_{j=1}^N \left(\mathcal{O}_1^j\times \mathcal{O}_2^j\right)= \left(\cap_{j=1}^N\mathcal{O}_1^j\right)\times \left(\cap_{j=1}^N\mathcal{O}_2^j\right)$ for any finite tuples of open sets $\{\mathcal{O}_i^j\}_{j=1}^{N}$ from $\tau_b^i,$ $i=1,2.$
Denote $\mathcal{O}_i=\cap_{j=1}^N\mathcal{O}_i^j$ for $i=1,2.$
By the assumption of Corollary~\ref{teor:2}, $P(\mathcal{O}_1\times\mathcal{O}_2|\cdot)$ is continuous on $\mathbb{S}_3.$ This means that the assumption of Corollary~\ref{cor:1(1space)} holds for the base $\tau_b^{1,2}.$ Corollary~\ref{cor:1(1space)} implies that the stochastic kernel $P$ on $\mathbb{S}_1\times\mathbb{S}_2$ given $\mathbb{S}_3$ is weakly continuous.
\end{proof}
Let $\mathbb{F}(\mathbb{S})$ and $\mathbb{C}(\mathbb{S})$ be respectively the spaces of all real-valued functions and all bounded continuous functions defined on the metric space $\mathbb{S}$. A subset $\mathcal{A}_0\subseteq \mathbb{F}(\mathbb{S})$ is said to be \textit{equicontinuous at a point $s\in\mathbb{S}$}, if $
\sup\limits_{f\in\mathcal{A}_0}|f(s')-f(s)|\to 0$ as $s'\to s. $ If a family $\mathcal{A}_0\subseteq \mathbb{F}(\mathbb{S})$ is equicontinuous at each point $s\in\mathbb{S},$
it is called equicontinuous on $\mathbb{S}.$ A subset $\mathcal{A}_0\subseteq \mathbb{F}(\mathbb{S})$ is said to be
\textit{uniformly bounded}, if there exists a constant $M<+\infty $ such that $ |f(s)|\le M$ for all $s\in\mathbb{S}$ and for all $f\in\mathcal{A}_0. $ Obviously, if a subset $\mathcal{A}_0\subseteq \mathbb{F}(\mathbb{S})$ is equicontinuous at all the points $s\in\mathbb{S}$ and uniformly bounded, then $\mathcal{A}_0\subseteq \mathbb{C}(\mathbb{S}).$
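For instance, if there exist constants $L<+\infty$ and $M<+\infty$ such that $|f(s)|\le M$ and $|f(s')-f(s)|\le L\,\rho(s',s)$ for all $s,s'\in\mathbb{S}$ and all $f\in\mathcal{A}_0$, where $\rho$ is the metric on $\mathbb{S}$, then the family $\mathcal{A}_0$ is uniformly bounded and equicontinuous on $\mathbb{S}$, and therefore $\mathcal{A}_0\subseteq\mathbb{C}(\mathbb{S})$.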
\begin{theorem}\label{kern}{\rm(Feinberg et al.
\cite[Theorem~5.2]{FKZ}).} Let $\mathbb{S}_1$, $\mathbb{S}_2$, and $\mathbb{S}_3$ be arbitrary metric spaces, $P(ds_2|s_1)$ be a weakly continuous stochastic kernel on $\mathbb{S}_2$ given $\mathbb{S}_1$, and a subset $\mathcal{A}_0\subseteq \mathbb{C}(\mathbb{S}_2\times\mathbb{S}_3)$ be equicontinuous at all the points $(s_2,s_3)\in\mathbb{S}_2\times\mathbb{S}_3$ and uniformly bounded. If $\mathbb{S}_2$ is separable, then for every open set $\mathcal{O}$ in $\mathbb{S}_2$ the family of functions defined on $\mathbb{S}_1\times\mathbb{S}_3$, \[
\mathcal{A}_\mathcal{O}=\left\{(s_1,s_3)\to\int_{\mathcal{O}}f(s_2,s_3)P(ds_2|s_1)\,:\, f\in\mathcal{A}_0\right\}, \] is equicontinuous at all the points $(s_1,s_3)\in\mathbb{S}_1\times\mathbb{S}_3$ and uniformly bounded. \end{theorem}
In what follows, we denote by $\tau(\mathbb{S})$ the family of all open subsets of a metric space $\mathbb{S}$. For each $B\in{\cal B}(\mathbb{S}_1)$ consider the family of functions
\[\mathcal{P}_B=\{ s_3\to P(B\times C|s_3):\, C\in \tau(\mathbb{S}_2)\}\] mapping $\mathbb{S}_3$ into $[0,1]$.
\begin{lemma}\label{lem:PB} Let $B\in{\cal B}(\mathbb{S}_1)$. The family of functions $\mathcal{P}_B$ is equicontinuous at a point $s_3 \in \mathbb{S}_3$ if and only if \begin{equation} \label{eq:EC}
\sup_{C \in \mathcal{B}(\mathbb{S}_2)} | P(B \times C | s_3^{(n)}) - P(B \times C | s_3)| \to 0 \qquad \mbox{ as } \qquad s_3^{(n)} \to s_3. \end{equation} \end{lemma} \begin{proof} According to the definition of the equicontinuity of the family of functions $\mathcal{P}_B$ at a point, it is sufficient to prove that (\ref{eq:EC}) follows from \[
\sup_{C \in \tau(\mathbb{S}_2)} | P(B \times C | s_3^{(n)}) - P(B \times C | s_3)|\to 0 \qquad \mbox{ as } \qquad s_3^{(n)} \to s_3. \]
Indeed, if $P(B \times \mathbb{S}_2 | s_3)=0$, then $\sup_{C \in \mathcal{B}(\mathbb{S}_2)} | P(B \times C | s_3^{(n)}) - P(B \times C | s_3)|= P(B \times \mathbb{S}_2 | s_3^{(n)})\to P(B \times \mathbb{S}_2 | s_3)=0$ as $s_3^{(n)} \to s_3$, because $\mathbb{S}_2\in\tau(\mathbb{S}_2)$. Otherwise, when $P(B \times \mathbb{S}_2 | s_3)>0$, according to the convergence $P(B \times \mathbb{S}_2 | s_3^{(n)})\to P(B \times \mathbb{S}_2 | s_3)>0$ as $s_3^{(n)} \to s_3$, Theorem~\ref{t5}(ii) applied to the probability measures $C\to P(B \times C | s_3^{(n)})/ P(B \times \mathbb{S}_2 | s_3^{(n)})$ and $C\to P(B \times C | s_3)/ P(B \times \mathbb{S}_2 | s_3)$ from $\mathbb{P}(\mathbb{S}_2)$, where $n$ is rather large, yields that (\ref{eq:EC}) holds, that is, the family of functions $\mathcal{P}_B$ is equicontinuous at a point $s_3 \in \mathbb{S}_3$. \end{proof}
Let $P'$ be the marginal of $P$ on $\mathbb{S}_2$, that is,
$P'(C|s_3):=P(\mathbb{S}_1\times C|s_3)$, $C\in \mathcal{B}(\mathbb{S}_2)$, $s_3\in \mathbb{S}_3$. There exists a stochastic kernel $H$ on $\mathbb{S}_1$ given $\mathbb{S}_2\times\mathbb{S}_3$ such that, for all $B\in \mathcal{B}(\mathbb{S}_1), C\in \mathcal{B}(\mathbb{S}_2),s_3\in \mathbb{S}_3$ \begin{equation} \label{eq:H}
P(B\times C|s_3)=\int_{C}H(B|s_2,s_3)P'(ds_2|s_3); \end{equation}
Bertsekas and Shreve~\cite[Proposition~7.27]{BS}. Moreover, for each $s_3 \in \mathbb{S}_3$, the distribution $H(\,\cdot\, | s_2, s_3)$ is $P'(\,\cdot\,|s_3)$-a.s.\, unique in $s_2$, that is, if $H_1$ and $H_2$ satisfy \eqref{eq:H} then $P'(C^*| s_3) = 0$, where $C^* := \{ s_2 \in \mathbb{S}_2 : H_1(B|s_2,s_3) \not = H_2(B|s_2,s_3) \mbox{ for some } B \in \mathcal{B}(\mathbb{S}_1)\}$; Bertsekas and Shreve~\cite[Corollary~7.27.1]{BS}.
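In concrete models, the stochastic kernel $H$ can often be written explicitly. For instance, suppose that each measure $P(\,\cdot\,|s_3)$, $s_3\in \mathbb{S}_3$, has a density $p(s_1,s_2|s_3)$ with respect to a product $\mu\times\nu$ of $\sigma$-finite measures on $\mathbb{S}_1$ and $\mathbb{S}_2$. Then one may take
\[
H(B|s_2,s_3)=\frac{\int_{B} p(s_1,s_2|s_3)\mu(ds_1)}{\int_{\mathbb{S}_1} p(s_1,s_2|s_3)\mu(ds_1)},\qquad B\in\mathcal{B}(\mathbb{S}_1),
\]
whenever the denominator is positive and finite, which is the case for $P'(\,\cdot\,|s_3)$-almost all $s_2\in\mathbb{S}_2$, and let $H(\,\cdot\,|s_2,s_3)$ be an arbitrary fixed probability measure on $\mathbb{S}_1$ otherwise; this is a version of the classical Bayes formula.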
\begin{theorem} \label{mainthkern} Let the topology on $\mathbb{S}_1$ have a countable base $\tau_b $ satisfying the following two conditions: \begin{itemize} \item[(i)] $\mathbb{S}_1\in \tau_b,$ \item[(ii)] for each finite intersection $\mathcal{O} = \bigcap_{i = 1}^N \mathcal{O}^{i}$ of sets $\mathcal{O}^{i} \in \tau_b$, $i = 1,2,\ldots, N$, the family of functions $\mathcal{P}_{\mathcal{O}}$ is equicontinuous at a point $s_3\in \mathbb{S}_3$. \end{itemize} Then, for any sequence $\{s_3^{(n)}\}_{n = 1,2,\ldots}$ from $\mathbb{S}_3$ converging to $s_3$, there exists a subsequence $\{n_k\}_{k=1,2,\ldots}$ and a set $C^* \in \mathcal{B}(\mathbb{S}_2)$ such that \begin{equation} \label{result}
P'(C^* | s_3) = 1 \mbox{ and }\, H(\,\cdot\, | s_2, s_3^{(n_k)} ) \mbox{ converges weakly to } H(\,\cdot\, | s_2, s_3) \mbox{ for all } s_2 \in C^* \mbox{ as } k\to\infty. \end{equation} \end{theorem}
\begin{remark}\label{rem:1} According to Lemma~\ref{l:3}, a countable base $\tau_b$ in Theorem~\ref{mainthkern} can be assumed to be closed with respect to the finite unions instead of finite intersections. \end{remark}
Theorem~\ref{mainthkern} implies the following two corollaries. The proof of Theorem~\ref{mainthkern} is provided after the proof of Lemma~\ref{b1b2}.
\begin{corollary} \label{Cormainkern} If for each open subset $\mathcal{O}$ of $\mathbb{S}_1$ the family of functions $\mathcal{P}_{\mathcal{O}}$ is equicontinuous at a point $s_3\in \mathbb{S}_3$, then for any sequence $\{s_3^{(n)}\}_{n = 1,2,\ldots}$ from $\mathbb{S}_3,$ that converges to $s_3\in \mathbb{S}_3$, there exists a subsequence $\{n_k\}_{k=1,2,\ldots}$ and a set $C^* \in \mathcal{B}(\mathbb{S}_2)$ such that \eqref{result} holds. \end{corollary} \begin{proof} The statement of the corollary follows immediately from Theorem~\ref{mainthkern}. Indeed, the family of functions $\mathcal{P}_{\mathcal{O}}$ is equicontinuous on $ \mathbb{S}_3$ for each open set $\mathcal{O}$ of $\mathbb{S}_1.$ Since $\mathbb{S}_1$ is a separable metric space, each countable base of the topology on $\mathbb{S}_1$ satisfies assumptions of Theorem~\ref{mainthkern}. \end{proof}
Observe that for a stochastic kernel $P$ on $\mathbb{S}_1\times\mathbb{S}_2$ given $\mathbb{S}_3,$ equicontinuity at a point $s_3\in \mathbb{S}_3$ of the family of functions $\mathcal{P}_\mathcal{O}$ for all open subsets $\mathcal{O}$ in $\mathbb{S}_1$ is a weaker assumption than continuity in the total variation of $P$ on $\mathbb{S}_1\times\mathbb{S}_2$ given $\mathbb{S}_3$ at the point $s_3.$ Equicontinuity of the family of functions $\mathcal{P}_{\mathbb{S}_1}$ at a point $s_3\in\mathbb{S}_3$ is equivalent to the continuity in the total variation of the stochastic kernel $P^\prime$ on $\mathbb{S}_2$ given $\mathbb{S}_3$ at the point $s_3.$
\begin{corollary}\label{cor:1} Let assumptions of Theorem~\ref{mainthkern} hold. If the setwise convergence takes place in \eqref{result} instead of the weak convergence, then the stochastic kernel $P$ on $\mathbb{S}_1\times\mathbb{S}_2$ given $\mathbb{S}_3$ is setwise continuous. \end{corollary} \begin{proof} According to Theorem~\ref{t3}, if the stochastic kernel $P$ on $\mathbb{S}_1\times\mathbb{S}_2$ given $\mathbb{S}_3$ is not setwise continuous, then there exist $\varepsilon>0$, a nonempty open subset $\mathcal{O}$ of $\mathbb{S}_1\times\mathbb{S}_2$, and a sequence $\{s_3^{(n)}\}_{n=1,2,\ldots}$ that converges to some $s_3\in\mathbb{S}_3$ such that \begin{equation}\label{AB}
|P(\mathcal{O}|s_3^{(n)})-P(\mathcal{O}|s_3)|\ge \varepsilon\mbox{ for each } n=1,2,\ldots\ . \end{equation}
Let $\mathcal{O}_2$ be the projection of $\mathcal{O}$ on $\mathbb{S}_2$ and $\mathcal{O}_{(s_2)}:=\{s_1\in \mathbb{S}_1\,:\, (s_1,s_2)\in \mathcal{O}\}$ be the cut of $\mathcal{O}$ at $s_2\in \mathcal{O}_2$. Since $\mathcal{O}$ is an open set, the sets $\mathcal{O}_2$ and $\mathcal{O}_{(s_2)}$ are open. Since $P'(ds_2|s_3^{(n)})$ converges in the total variation to $P'(ds_2|s_3),$ for any $s_3\in \mathbb{S}_3$ \begin{equation}\label{eq:A}
\left|\int_{\mathcal{O}_2}H(\mathcal{O}_{(s_2)}|s_2,s_3^{(n)})P'(ds_2|s_3^{(n)})-\int_{\mathcal{O}_2}H(\mathcal{O}_{(s_2)}|s_2,s_3^{(n)})P'(ds_2|s_3)\right|\to 0 \mbox{ as }n\to \infty. \end{equation}
According to the assumptions of Corollary~\ref{cor:1}, there exists a set $C^* \in \mathcal{B}(\mathbb{S}_2)$ and a subsequence $\{s_3^{(n_k)}\}_{k=1,2,\ldots}$ of $\{s_3^{(n)}\}_{n=1,2,\ldots}$ such that $P'(C^* | s_3)=1$ and $H(\,\cdot\, | s_2, s_3^{(n_k)})$ converges setwise to $H(\,\cdot\, | s_2, s_3)$ for any $s_2 \in C^*$. In particular, $H(\mathcal{O}_{(s_2)}|s_2,s_3^{(n_k)})\to H(\mathcal{O}_{(s_2)}|s_2,s_3)$ for any $s_2\in C^*$. Therefore, the dominated convergence theorem yields \begin{equation}\label{eq:B}
\int_{\mathcal{O}_2}\left|H(\mathcal{O}_{(s_2)}|s_2,s_3^{(n_k)})-H(\mathcal{O}_{(s_2)}|s_2,s_3)\right|P'(ds_2|s_3)\to 0 \mbox{ as }k\to \infty. \end{equation} Formulae \eqref{eq:A} and \eqref{eq:B} imply that as $k\to\infty$ \[
P(\mathcal{O}|s_3^{(n_k)})=\int_{\mathcal{O}_2}H(\mathcal{O}_{(s_2)}|s_2,s_3^{(n_k)})P'(ds_2|s_3^{(n_k)})\to \int_{\mathcal{O}_2}H(\mathcal{O}_{(s_2)}|s_2,s_3)P'(ds_2|s_3)=P(\mathcal{O}|s_3). \] This contradicts (\ref{AB}). Thus the stochastic kernel $P$ on $\mathbb{S}_1\times\mathbb{S}_2$ given $\mathbb{S}_3$ is setwise continuous. \end{proof}
The proof of Theorem~\ref{mainthkern} uses several auxiliary results.
\begin{lemma}{\rm(Feinberg et al.~\cite[Theorem~5.5]{FKZ}).} \label{setwise} Let $h$ and $\{h^{(n)}\}_{n = 1,2,\ldots}$ be Borel-measurable uniformly bounded real-valued functions defined on a metric space $\mathbb{S}$ and let $\{\mu^{(n)}\}_{n = 1,2,\ldots}$ be a sequence of probability measures from $\mathbb{P}(\mathbb{S})$ that converge in the total variation to the measure $\mu\in\mathbb{P}(\mathbb{S})$. If \begin{equation}\label{sw1}
\sup_{C\in\mathcal{B}(\mathbb{S})}\left|\int_{C}h^{(n)}(s)\mu^{(n)}(ds)-
\int_{C}h(s)\mu(ds)\right|\to 0 \quad {\rm as}\quad n\to\infty, \end{equation} then $\{h^{(n)}\}_{n = 1,2,\ldots}$ converges in probability $\mu $ to $h$ as $n\to\infty$, and therefore there is a subsequence $\{n_k\}_{k = 1,2,\ldots}$ such that $\{h^{(n_k)}\}_{k = 1,2,\ldots}$ converges $\mu$-almost surely to $h$. \end{lemma}
Let $\mathbb{A}_1$ be the family of all subsets of $\mathbb{S}_1$ that are finite unions of sets from the countable base $\tau_b$ of the topology on $\mathbb{S}_1$ satisfying the conditions of
Theorem~\ref{mainthkern}, and $\mathbb{A}_2$ be the family of all subsets $B$ of $\mathbb{S}_1$ such that $B = \tilde{\mathcal{O}} \setminus \mathcal{O}'$ with $\tilde{\mathcal{O}} \in \tau_b$ and $\mathcal{O}' \in \mathbb{A}_1$.
\begin{lemma} \label{b1b2} Let the assumptions of Theorem~\ref{mainthkern} hold for some $s_3 \in \mathbb{S}_3$. Then, for any subset $B \in \mathbb{A}_2$, the family of functions $\mathcal{P}_{B}$ is equicontinuous at the point $s_3 \in \mathbb{S}_3$. \end{lemma} \begin{proof} Fix an arbitrary $s_3\in\mathbb{S}_3.$ Observe that, if for all $\mathcal{O} \in \mathbb{A}_1$ the family of functions $\mathcal{P}_{\mathcal{O}}$ is equicontinuous at the point $s_3 \in \mathbb{S}_3$, then for any subset $B = \tilde{\mathcal{O}} \setminus \mathcal{O}'$ of $\mathbb{S}_1$ with $\tilde{\mathcal{O}} \in \tau_b$ and $\mathcal{O}' \in \mathbb{A}_1$, the family of functions $\mathcal{P}_{B}$ is equicontinuous at the point $s_3 \in \mathbb{S}_3$. Indeed, according to Lemma~\ref{lem:PB}, for all $s_3, s_3^{(n)} \in \mathbb{S}_3$, $n = 1,2,\ldots,$ such that $s_3^{(n)} \to s_3$ as $n \to \infty$, \begin{gather*}
\sup_{C \in \mathcal{B}(\mathbb{S}_2)}|P(B \times C | s_3^{(n)}) - P(B \times C | s_3)| = \sup_{C \in \mathcal{B}(\mathbb{S}_2)}|P((\tilde{\mathcal{O}}\setminus \mathcal{O}')\times C | s_3^{(n)}) - P( (\tilde{\mathcal{O}}\setminus \mathcal{O}')\times C | s_3)|\\
\le \sup_{C \in \mathcal{B}(\mathbb{S}_2)}|P(\mathcal{O}' \times C | s_3^{(n)}) - P( \mathcal{O}' \times C | s_3)| + \sup_{C \in \mathcal{B}(\mathbb{S}_2)}|P(( \tilde{\mathcal{O}} \cup \mathcal{O}') \times C | s_3^{(n)}) - P((\tilde{\mathcal{O}} \cup \mathcal{O}') \times C | s_3)|. \end{gather*} The above inequality, the assumption that \eqref{eq:EC} holds for all $\mathcal{O} \in \mathbb{A}_1$ and for all $s_3, s_3^{(n)} \in \mathbb{S}_3$, $n = 1,2,\ldots$, such that $s_3^{(n)} \to s_3$ as $n \to \infty$, and the property that if $\mathcal{O}' \in \mathbb{A}_1$ then $\tilde{\mathcal{O}} \cup \mathcal{O}' \in \mathbb{A}_1$ for all $\tilde{\mathcal{O}} \in \tau_b$ imply that \eqref{eq:EC} holds for any subset $B \in \mathbb{A}_2$,
that is, the family of functions $\mathcal{P}_{B}$ is equicontinuous at the point $s_3 \in \mathbb{S}_3$. The rest of the proof establishes that, for each $\mathcal{O} \in \mathbb{A}_1$, the family of functions $\mathcal{P}_{\mathcal{O}}$ is equicontinuous at the point $s_3 \in \mathbb{S}_3$.
Let $\tau_b = \{\mathcal{O}^{(j)}\}_{j = 1,2,\ldots}$. Consider an arbitrary $\mathcal{O} \in \mathbb{A}_1$. Then $\mathcal{O} = \cup_{i = 1}^{N} \mathcal{O}^{(j_i)}$ for some $N = 1,2,\ldots$, where $\mathcal{O}^{(j_i)} \in \tau_b$, $i = 1,2,\ldots, N$. Let $\mathbb{A}^N = \{\cap_{m = 1}^k \mathcal{O}^{(i_m)}: \{i_1, i_2, \ldots, i_k\} \subseteq \{j_1, j_2, \ldots, j_N\}\}$ be the finite set of possible intersections of $\mathcal{O}^{(j_1)}, \ldots, \mathcal{O}^{(j_N)}$. The principle of inclusion-exclusion implies that for $\mathcal{O} = \cup_{i = 1}^{N} \mathcal{O}^{(j_i)}$, $C\in \mathcal{B}(\mathbb{S}_2)$, and $s_3, s_3^{(n)} \in \mathbb{S}_3$,
\[|P(\mathcal{O}\times C | s_3) - P(\mathcal{O}\times C | s_3^{(n)})| \le \sum_{D \in \mathbb{A}^N} |P( D\times C | s_3) - P( D \times C| s_3^{(n)})|.\] The above inequality and the assumption of Theorem~\ref{mainthkern} regarding finite intersections of the elements of the base $\tau_b$ imply that, for each $\mathcal{O} \in \mathbb{A}_1$, the family of functions $\mathcal{P}_{\mathcal{O}}$ is equicontinuous at the point $s_3 \in \mathbb{S}_3$. \end{proof}
\begin{proof}[Proof of Theorem~\ref{mainthkern}] Let $\{s_3^{(n)}\}_{n = 1,2,\ldots}$ be a sequence from $\mathbb{S}_3$ that converges to $s_3\in \mathbb{S}_3$. According to Theorem~\ref{t1}, \eqref{result} holds if there exists a subsequence $\{n_m\}_{m= 1,2,\ldots}$ and a set $C^* \in \mathcal{B}(\mathbb{S}_2)$ such that for all open subsets $\mathcal{O}$ in $\mathbb{S}_1$ \begin{equation} \label{special}
P'(C^* | s_3) = 1 \quad \mbox{ and } \quad \ilim\limits_{m\to\infty} H( \mathcal{O} \,|\, s_2, s_3^{(n_m)} ) \ge H(\mathcal{O} \,|\, s_2, s_3) \quad \mbox{ for all } \quad s_2 \in C^*. \end{equation}
The rest of the proof establishes the existence of a subsequence $\{s_3^{(n_m)}\}_{m= 1,2,\ldots}$ of the sequence $\{s_3^{(n)}\}_{n = 1,2,\ldots}$ and a set $C^* \in \mathcal{B}(\mathbb{S}_2)$ such that \eqref{special} holds for each open subset $\mathcal{O}$ of $\mathbb{S}_1$.
Let $\mathbb{A}_1$ and $\mathbb{A}_2$ be the families of subsets of $\mathbb{S}_1$ as defined before Lemma~\ref{b1b2}. Observe that: (i) both $\mathbb{A}_1$ and $\mathbb{A}_2$ are countable, (ii) every open subset $\mathcal{O}$ of $\mathbb{S}_1$ can be represented as \begin{equation} \label{partition} \mathcal{O} = \bigcup_{j = 1,2,\ldots} \mathcal{O}^{(j,1)} = \bigcup_{j = 1,2,\ldots} B^{(j,1)}, \quad \mbox{ for some } \quad \mathcal{O}^{(j,1)} \in \tau_b, j = 1,2,\ldots, \end{equation} where $B^{(j,1)} = \mathcal{O}^{(j,1)} \setminus (\cup_{i = 1}^{j-1} \mathcal{O}^{(i,1)})$ are disjoint elements of $\mathbb{A}_2$ (it is allowed that $\mathcal{O}^{(j,1)} = \emptyset$ or $B^{(j,1)} = \emptyset$ for some $j = 1,2,\ldots$).
To prove \eqref{special} for all open subsets $\mathcal{O}$ of $\mathbb{S}_1$, we first show that \eqref{special} holds for all $\mathcal{O} \in \mathbb{A}_2$.
From Lemmas~\ref{lem:PB}, \ref{b1b2} and \eqref{eq:H}, \begin{equation} \label{ECO}
\lim_{n \to \infty}\sup_{C\in\mathcal{B}(\mathbb{S}_2)}\left|\int_{C} H(B|s_2, s_3^{(n)}) P'(ds_2|s_3^{(n)})- \int_{C}H(B|s_2, s_3) P'(ds_2|s_3)\right| = 0, \quad B \in \mathbb{A}_2. \end{equation}
Since the set $\mathbb{A}_2$ is countable, let $\mathbb{A}_2 := \{B^{(j)}: j = 1,2,\ldots\}$. Choose a subsequence $\{s_3^{(n_k)}\}_{k= 1,2,\ldots}$ of the sequence $\{s_3^{(n)}\}_{n = 1,2,\ldots}$. Denote $s^{(n,0)}=s_3^{(n)}$ for all $n=1,2,\ldots\ .$ For $j = 1,2,\ldots$, from \eqref{ECO}, Lemma~\ref{setwise}, applied with $s = s_2$, $h^{(n)}(s) = H(B^{(j)} |s_2, s^{(n, j-1)})$, $\mu^{(n)}(\cdot) = P'(\,\cdot\,| s^{(n, j-1)})$, $h(s) = H(B^{(j)} |s_2, s_3)$, and $\mu(\cdot) = P'(\,\cdot\,| s_3)$, there exists a subsequence $\{s^{(n, j)}\}_{n = 1,2,\ldots}$ of the sequence $\{s^{(n, j-1)}\}_{n = 1,2,\ldots}$ and a set $C^*_j\in \mathcal{B}(\mathbb{S}_2)$ such that \begin{equation} \label{B-i}
\lim_{n \to \infty}H( B^{(j)} | s_2, s^{(n, j)}) = H(B^{(j)}|s_2, s_3) \quad \mbox{ for all } \quad s_2 \in C_j^*. \end{equation}
Let $C^*=\cap_{j=1,2,\ldots} C_j^*$. Observe that $P'(C^*|s_3)=1$. Let $s_3^{(n_m)}=s^{(m, m)},$ $m=1,2,\ldots\ .$ As follows from Cantor's diagonal argument, \eqref{special} holds with $\mathcal{O}=B^{(j)}$ for all $j = 1, 2, \ldots\ .$ In other words, \eqref{special} is proved for all $\mathcal{O} \in \mathbb{A}_2$.
Let $\mathcal{O}$ be an arbitrary open set in $\mathbb{S}_1$ and $B^{(1,1)}, B^{(2,1)}, \ldots$ be disjoint elements of $\mathbb{A}_2$ satisfying \eqref{partition}. Then the countable additivity of probability measures implies that, for all $s_2 \in C^*$, \begin{multline*} \begin{aligned}
\ilim\limits_{m\to\infty}H(\mathcal{O}|s_2, s_3^{(n_m)}) &=\ilim\limits_{m\to\infty}\sum_{j=1,2,\ldots} H(B^{(j,1)}|s_2, s_3^{(n_m)}) \ge \sum_{j=1,2,\ldots}\ilim\limits_{m\to\infty}H(B^{(j,1)}|s_2, s_3^{(n_m)}) \\
&=\sum_{j=1,2,\ldots} H(B^{(j,1)}|s_2, s_3)=H(\mathcal{O}|s_2, s_3). \end{aligned} \end{multline*} Therefore, \eqref{special} holds for all open subsets $\mathcal{O}$ in $\mathbb{S}_1$. \end{proof}
\begin{example}\label{exa:MDM} (Stochastic kernel $P$ on $\mathbb{S}_1\times\mathbb{S}_2$ given $\mathbb{S}_3$ satisfies assumptions of Theorem~\ref{mainthkern}, but it is not setwise continuous and it does not satisfy the assumption of Corollary~\ref{Cormainkern}.) {\rm Let $\mathbb{S}_1=\mathbb{R}^1$, $\mathbb{S}_2=\{1\}$,
$\mathbb{S}_3=\{1^{-1},2^{-1},\ldots,0\}$, and $P(B\times C|s_3)=\mathbf{I}\{\sqrt{2}+s_3\in B\}\mathbf{I}\{1\in C\}$, $B\in\mathcal{B}(\mathbb{S}_1)$, $C\in\mathcal{B}(\mathbb{S}_2)$, $s_3\in\mathbb{S}_3$. Then $P'(C|s_3)=\mathbf{I}\{1\in C\}$ and
$H(B|s_2,s_3)=\mathbf{I}\{\sqrt{2}+s_3\in B\}$ for all $B\in\mathcal{B}(\mathbb{S}_1)$, $C\in\mathcal{B}(\mathbb{S}_2)$, $s_2\in\mathbb{S}_2$, and $s_3\in\mathbb{S}_3$. Let $\tau_b$ be the countable base of the topology on $\mathbb{R}^1$ generated by the Euclidean metric described in Example~\ref{ex2}, that is, the family consisting of the empty set, $\mathbb{R}^1,$ and all the open intervals on $\mathbb{R}^1$ with rational ends. The family $\tau_b$ is closed under finite intersections, and for any $\mathcal{O}\in \tau_b$ the family of functions $\mathcal{P}_\mathcal{O}$ is equicontinuous at all the points $s_3 \in \mathbb{S}_3$. Therefore, the assumptions of Theorem~\ref{mainthkern} hold.
Note that the function $P(B\times C|s_3)$ is not continuous at the point $s_3=0,$ when $B=\mathbb{R}^1\setminus\{\sqrt{2}\}$ and $C=\mathbb{S}_2$. Therefore, the family $\mathcal{P}_B$ is not equicontinuous at the point $s_3 =0,$ and the assumption of Corollary~\ref{Cormainkern} does not hold. Moreover, the sequence
$\{H(B|1,\frac1n)\}_{n=1,2,\ldots}$ (and any of its subsequences) does not converge to $H(B|1,0)$ and, therefore, the setwise convergence assumption from Corollary~\ref{cor:1} does not hold. }
$\Box$ \end{example}
\section{Partially Observable Markov Decision Processes}\label{S4} Convergence properties of probability measures and relevant continuity properties of transition probabilities are broadly used in mathematical methods of stochastic control. In this section, we describe the results for a Bayesian sequential decision model, a POMDP. For POMDPs, posterior probabilities of states of the process form sufficient statistics; see e.g., Hern\'{a}ndez-Lerma~\cite[p. 89]{HL}. In terms of Markov Decision Processes, this well-known fact means that it is possible to construct an MDP, called a Completely Observable Markov Decision Process (COMDP), whose state space is the space of probability measures on the original state space. If an optimal policy is found for a COMDP, it is easy to compute an optimal policy for the original POMDP. However, except for the cases of finite state spaces (Smallwood and Sondik~\cite{SS}, Sondik~\cite{So}), MDMIIs with transition probabilities having densities (Rieder~\cite{Ri}, B\"auerle and Rieder~\cite[Chapter 5]{BR}), models explicitly defined by equations for continuous random variables (Striebel~\cite{St}, Bensoussan~\cite{Be}), and numerous particular problems studied in the literature, until recently very little had been known about the existence and characterizations of optimal policies for POMDPs and their COMDPs. The main difficulty is that the transition probability for a COMDP is defined via the Bayes formula presented in formula (\ref{3.1}) below, and the explicit forms of the Bayes formula are known either for discrete events or for continuous random variables; see Shiryaev~\cite[p. 231]{Sh}. Recently Feinberg et al.~\cite{FKZ} established sufficient conditions for the existence of optimal policies and their characterization for POMDPs with Borel state, action, and observation spaces.
In this section we define POMDPs, explain their reduction to COMDPs, survey some of the results from Feinberg et al.~\cite{FKZ}, and present the condition on joint distributions of posterior distributions and observations that implies weak continuity of transition probabilities for the COMDP. In the following section, we describe a more particular model, the MDMII, and apply Corollary~\ref{cor:1} and results of this section to it.
Let $\mathbb{X}$, $\mathbb{Y}$, and $\mathbb{A}$ be Borel subsets of Polish spaces,
$P(dx'|x,a)$ be a stochastic kernel on
$\mathbb{X}$ given $\mathbb{X}\times\mathbb{A}$, $Q(dy| a,x)$ be a stochastic kernel on
$\mathbb{Y}$ given $\mathbb{A}\times\mathbb{X}$, $Q_0(dy|x)$ be a stochastic kernel on $\mathbb{Y}$ given $\mathbb{X}$, $p$ be a probability distribution on $\mathbb{X}$, $c:\mathbb{X}\times\mathbb{A}\to {\bar\mathbb{R}}^1=\mathbb{R}^1\cup\{+\infty\}$ be a bounded below Borel function on $\mathbb{X}\times\mathbb{A}.$
A {\it POMDP} is specified by a tuple $(\mathbb{X},\mathbb{Y},\mathbb{A},P,Q,c)$, where $\mathbb{X}$ is the \textit{state space}, $\mathbb{Y}$ is the \textit{observation set},
$\mathbb{A}$ is the \textit{action} \textit{set}, $P(dx'|x,a)$ is the
\textit{state transition law}, $Q(dy| a,x)$ is the \textit{observation stochastic kernel}, $c:\mathbb{X}\times\mathbb{A}\to {\bar\mathbb{R}}^1$ is the \textit{one-step cost}.
The partially observable Markov decision process evolves as follows: (i) at time $t=0$, the initial unobservable state $x_0$ has a given prior distribution $p$; (ii) the initial observation $y_0$ is generated according to the initial observation stochastic kernel
$Q_0(\,\cdot\,|x_0)$; (iii) at each time epoch $t=0,1,\ldots,$ if the state of the system is $x_t\in\mathbb{X}$ and the decision-maker chooses an action $a_t\in \mathbb{A}$, then the cost $c(x_t,a_t)$ is incurred; (iv) the system moves to a state $x_{t+1}$ according to the transition law $P(\,\cdot\,|x_t,a_t)$, $t=0,1,\ldots$;
(v) an observation $y_{t+1}\in\mathbb{Y}$ is generated by the observation stochastic kernel $Q(\,\cdot\,|a_t,x_{t+1})$, $t=0,1,\ldots\ .$
Define the \textit{observable histories}: $h_0:=(p,y_0)\in \mathbb{H}_0$ and $h_t:=(p,y_0,a_0,\ldots,y_{t-1}, a_{t-1}, y_t)\in\mathbb{H}_t$ for all $t=1,2,\dots$,
where $\mathbb{H}_0:=\mathbb{P}(\mathbb{X})\times \mathbb{Y}$ and $\mathbb{H}_t:=\mathbb{H}_{t-1}\times \mathbb{A}\times \mathbb{Y}$ if $t=1,2,\dots$. A \textit{policy} $\pi$ for the POMDP is defined as a sequence $\pi=\{\pi_t\}_{t=0,1,\ldots}$
of stochastic kernels $\pi_t$ on $\mathbb{A}$ given $\mathbb{H}_t$. A policy $\pi$ is called \textit{nonrandomized}, if each probability measure
$\pi_t(\,\cdot\,|h_t)$ is concentrated at one point. The \textit{set of all policies} is denoted by $\Pi$. The Ionescu Tulcea theorem (Bertsekas and Shreve \cite[pp. 140-141]{BS} or Hern\'andez-Lerma and Lasserre \cite[p.178]{HLerma1}) implies that a policy $\pi\in \Pi$ and an initial distribution $p\in \mathbb{P}(\mathbb{X})$, together with the stochastic kernels $P$, $Q$ and $Q_0$, determine a unique probability measure $P_{p}^\pi$ on the set of all trajectories $ (\mathbb{X}\times\mathbb{Y}\times \mathbb{A})^{\infty}$ endowed with the $\sigma$-field defined by the products of Borel $\sigma$-fields $\mathcal{B}(\mathbb{X})$, $\mathcal{B}(\mathbb{Y})$, and $\mathcal{B}(\mathbb{A})$. The expectation with respect to this probability measure is denoted by $\mathbb{E}_{p}^\pi$.
For a finite horizon $T=0,1,...,$ the \textit{expected total discounted costs} are \begin{equation}\label{eq1} V_{T,\alpha}^{\pi}(p):=\mathbb{E}_p^{\pi}\sum\limits_{t=0}^{T-1}\alpha^tc(x_t,a_t),\qquad\qquad p\in \mathbb{P}(\mathbb{X}),\,\pi\in\Pi, \end{equation} where $\alpha\ge 0$ is the discount factor, $V_{0,\alpha}^{\pi}(p)=0.$ Consider the following assumptions. \vskip 0.9 ex
\noindent\textbf{Assumption (D)}. $c$ is bounded below on $\mathbb{X}\times\mathbb{A}$ and
$\alpha\in (0,1)$.
\noindent\textbf{Assumption (P)}. $c$ is nonnegative on $\mathbb{X}\times\mathbb{A}$ and $\alpha=1$.\vskip 0.9 ex
When $T=\infty,$ formula (\ref{eq1}) defines the \textit{infinite horizon expected total discounted cost}, and we denote it by $V_\alpha^\pi(p).$ For any function $g^{\pi}(p)$, including $g^{\pi}(p)=V_{T,\alpha}^{\pi}(p)$ and $g^{\pi}(p)=V_{\alpha}^{\pi}(p)$, define the \textit{optimal values} \begin{equation*} g(p):=\inf\limits_{\pi\in \Pi}g^{\pi}(p), \qquad \ p\in\mathbb{P}(\mathbb{X}). \end{equation*} A policy $\pi$ is called \textit{optimal} for the respective criterion, if $g^{\pi}(p)=g(p)$ for all $p\in \mathbb{P}(\mathbb{X}).$ For $g^\pi=V_{T,\alpha}^\pi$, the optimal policy is called \emph{$T$-horizon discount-optimal}; for $g^\pi=V_{\alpha}^\pi$, it is called \emph{discount-optimal}.
We recall that a function $c$ defined on $\mathbb{X}\times\mathbb{A}$ with values in ${\bar \mathbb{R}}^1$ is inf-compact if the set $\{(x,a)\in \mathbb{X}\times\mathbb{A}:\, c(x,a)\le \lambda\}$ is compact for any finite number $\lambda.$ A function $c$ defined on $\mathbb{X}\times \mathbb{A}$ with values in ${\bar \mathbb{R}}^1$ is called $\mathbb{K}$-inf-compact on $\mathbb{X}\times\mathbb{A}$, if for any compact set $K\subseteq\mathbb{X}$, the function $c:K\times\mathbb{A}\to {\bar \mathbb{R}}^1$ defined on $K\times\mathbb{A}$ is inf-compact; Feinberg et al.~\cite[Definition 1.1]{FKV, FKN}. According to Feinberg et al.~\cite[Lemma 2.5]{FKN}, a bounded below function $c$ is $\mathbb{K}$-inf-compact on the product of metric spaces $\mathbb{X}$ and $\mathbb{A}$ if and only if it satisfies the following two conditions:
(a) $c$ is lower semi-continuous;
(b) if a sequence $\{x^{(n)} \}_{n=1,2,\ldots}$ with values in $\mathbb{X}$ converges and its limit $x$ belongs to $\mathbb{X}$ then any sequence $\{a^{(n)} \}_{n=1,2,\ldots}$ with $a^{(n)}\in \mathbb{A}$, $n=1,2,\ldots,$ satisfying the condition that the sequence $\{c(x^{(n)},a^{(n)}) \}_{n=1,2,\ldots}$ is bounded above, has a limit point $a\in\mathbb{A}.$
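For instance, let $\mathbb{X}=\mathbb{A}=\mathbb{R}^1$ and $c(x,a)=(x-a)^2.$ For each compact set $K\subset\mathbb{R}^1$ and each finite $\lambda$ the set $\{(x,a)\in K\times\mathbb{R}^1:\,(x-a)^2\le\lambda\}$ is closed and bounded and therefore compact, so $c$ is $\mathbb{K}$-inf-compact on $\mathbb{X}\times\mathbb{A};$ however, $c$ is not inf-compact on $\mathbb{X}\times\mathbb{A}$ because the level sets $\{(x,a)\in\mathbb{R}^1\times\mathbb{R}^1:\,(x-a)^2\le\lambda\}$ are unbounded. This simple example is provided here only for illustration and is not taken from the cited works.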
For a POMDP $(\mathbb{X},\mathbb{Y},\mathbb{A},P,Q,c)$, consider the MDP $(\mathbb{X},\mathbb{A},P,c)$, in which all the states are observable. An MDP can be viewed as a particular POMDP with $\mathbb{Y}=\mathbb{X}$ and $Q(B|a,x)=Q(B|x)={\bf I}\{x\in B\}$ for all $x\in\mathbb{X},$ $a\in \mathbb{A}$, and $B\in{\mathcal B}(\mathbb{X})$. In addition, for an MDP an initial state is observable. Thus for an MDP an initial state $x$ is considered instead of the initial distribution $p.$ In fact, this MDP possesses a special property that action sets at all the states are equal.
It is well known that the analysis and optimization of a POMDP can be reduced to the analysis and optimization of a specially constructed MDP called a COMDP. The states of the COMDP are posterior state distributions of the original POMDP. In order to find an optimal policy for a POMDP, it is sufficient to find such a policy for the COMDP, and then it is easy to construct an optimal policy for the POMDP (see Bertsekas and Shreve \cite[Section 10.3]{BS}, Dynkin and Yushkevich \cite[Chapter 8]{DY}, Hern\'{a}ndez-Lerma \cite[p. 87]{HL}, Yushkevich \cite{Yu} or Rhenius \cite{Rh} for details). However, little is known about the existence of optimal policies for COMDPs and how to find them when the state, observation, and action sets are Borel spaces. The rest of this section presents recent results from Feinberg et al.~\cite{FKZ} on the existence of optimal policies and their computation for COMDPs and therefore for POMDPs.
Our next goal is to define the transition probability $q$ for the COMDP presented in (\ref{3.7}). Given a posterior distribution
$z$ of the state $x$ at time epoch $t=0,1,\ldots$ and given an action $a$ selected at epoch $t$, denote by $R(B\times C|z,a) $ the joint probability that the state at time $(t+1)$ belongs to the set $B\in {\mathcal B}(\mathbb{X})$ and the observation at time $t+1$ belongs to the set $C\in {\mathcal B}(\mathbb{Y})$, \begin{equation}\label{3.3}
R(B\times C|z,a):=\int_{\mathbb{X}}\int_{B}Q(C|a,x')P(dx'|x,a)z(dx),\ B\in \mathcal{B}(\mathbb{X}),\ C\in \mathcal{B}(\mathbb{Y}),\ z\in\mathbb{P}(\mathbb{X}),\ a\in \mathbb{A}. \end{equation}
Observe that $R$ is a stochastic kernel on $\mathbb{X}\times\mathbb{Y}$ given ${\mathbb{P}}(\mathbb{X})\times \mathbb{A}$; see Bertsekas and Shreve \cite[Section 10.3]{BS}, Dynkin and Yushkevich \cite[Chapter 8]{DY}, Hern\'{a}ndez-Lerma \cite[p. 87]{HL}, Yushkevich \cite{Yu}, or Rhenius \cite{Rh} for details.
The probability that the observation $y$ at time $t+1$ belongs to the set $C\in\mathcal{B}(\mathbb{Y})$, given that at time $t$ the posterior state probability is $z$ and selected action is $a,$ is
$R'(C|z,a):=R(\mathbb{X}\times C|z,a)$, $C\in \mathcal{B}(\mathbb{Y})$, $z\in\mathbb{P}(\mathbb{X})$, $a\in\mathbb{A}$. Observe that $R'$ is a stochastic kernel on $\mathbb{Y}$ given ${\mathbb{P}}(\mathbb{X})\times \mathbb{A}.$ By Bertsekas and Shreve~\cite[Proposition 7.27]{BS}, there exists a stochastic kernel $H$ on $\mathbb{X}$ given ${\mathbb{P}}(\mathbb{X})\times \mathbb{A}\times\mathbb{Y}$ such that \begin{equation}\label{3.4}
R(B\times C|z,a)=\int_{C}H(B|z,a,y)R'(dy|z,a),\quad B\in \mathcal{B}(\mathbb{X}),\ C\in \mathcal{B}(\mathbb{Y}),\ z\in\mathbb{P}(\mathbb{X}),\ a\in \mathbb{A}. \end{equation}
The stochastic kernel $H(\,\cdot\,|z,a,y)$ defines a measurable mapping $H:\,\mathbb{P}(\mathbb{X})\times \mathbb{A}\times \mathbb{Y} \to\mathbb{P}(\mathbb{X})$, where
$H(z,a,y)(\,\cdot\,)=H(\,\cdot\,|z,a,y).$ For each pair $(z,a)\in \mathbb{P}(\mathbb{X})\times\mathbb{A}$, the mapping $H(z,a,\cdot):\mathbb{Y}\to\mathbb{P}(\mathbb{X})$ is defined
$R'(\,\cdot\,|z,a)$-almost surely uniquely in $y\in\mathbb{Y}$; Bertsekas and Shreve \cite[Corollary~7.27.1]{BS} or Dynkin and Yushkevich \cite[Appendix 4.4]{DY}. For a posterior distribution $z_t\in \mathbb{P}(\mathbb{X})$, action $a_t\in \mathbb{A}$, and an observation $y_{t+1}\in\mathbb{Y},$ the posterior distribution $z_{t+1}\in\mathbb{P}(\mathbb{X})$ is \begin{equation}\label{3.1} z_{t+1}=H(z_t,a_t,y_{t+1}). \end{equation}
However, the observation $y_{t+1}$ is not available in the COMDP model, and therefore $y_{t+1}$ is a random variable with the distribution $R'(\,\cdot\,|z_t,a_t)$, and the right-hand side of (\ref{3.1}) maps $(z_t,a_t)\in \mathbb{P}(\mathbb{X})\times\mathbb{A}$ to $\mathbb{P}(\mathbb{P}(\mathbb{X})).$ Thus, $z_{t+1}$ is a random variable with values in $\mathbb{P}(\mathbb{X})$ whose distribution is defined uniquely by the stochastic kernel
\begin{equation}\label{3.7}
q(D|z,a):=\int_{\mathbb{Y}}\mathbf{I}\{H(z,a,y)\in D\}R'(dy|z,a),\quad D\in \mathcal{B}(\mathbb{P}(\mathbb{X})),\ z\in \mathbb{P}(\mathbb{X}),\ a\in\mathbb{A}; \end{equation} Hern\'andez-Lerma~\cite[p. 87]{HL}. The particular choice of a stochastic kernel $H$ satisfying (\ref{3.4}) does not affect the definition of $q$ from (\ref{3.7}), since for each pair $(z,a)\in \mathbb{P}(\mathbb{X})\times\mathbb{A}$, the mapping $H(z,a,\cdot):\mathbb{Y}\to\mathbb{P}(\mathbb{X})$ is defined
$R'(\,\cdot\,|z,a)$-almost surely uniquely in $y\in\mathbb{Y}$.
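For orientation, we recall the explicit form of $H$ in the special case of countable $\mathbb{X}$ and $\mathbb{Y}$, which is the elementary version of the Bayes formula mentioned above (this special case is stated here only for illustration): for all $x'\in\mathbb{X}$ and all $y\in\mathbb{Y}$ with $R'(\{y\}|z,a)>0,$
\[
H(\{x'\}|z,a,y)=\frac{Q(\{y\}|a,x')\sum\limits_{x\in\mathbb{X}}P(\{x'\}|x,a)z(\{x\})}{\sum\limits_{x''\in\mathbb{X}}Q(\{y\}|a,x'')\sum\limits_{x\in\mathbb{X}}P(\{x''\}|x,a)z(\{x\})},
\]
and this kernel satisfies (\ref{3.4}).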
The COMDP is defined as an MDP with the parameters ($\mathbb{P}(\mathbb{X})$,$\mathbb{A}$,$q$,$\bar{c}$), where
(i) $\mathbb{P}(\mathbb{X})$ is the state space; (ii) $\mathbb{A}$ is the action set available at all states $z\in\mathbb{P}(\mathbb{X})$; (iii) the
one-step cost function $\bar{c}:\mathbb{P}(\mathbb{X})\times\mathbb{A}\to{\bar \mathbb{R}}^1$, defined by \begin{equation}\label{eq:c} \bar{c}(z,a):=\int_{\mathbb{X}}c(x,a)z(dx), \quad z\in\mathbb{P}(\mathbb{X}),\, a\in\mathbb{A}; \end{equation}
(iv) transition probabilities $q$ on $\mathbb{P}(\mathbb{X})$ given $\mathbb{P}(\mathbb{X})\times \mathbb{A}$ defined in (\ref{3.7}).
For an MDP, a nonrandomized policy is called \textit{Markov}, if all decisions depend only on the current state and time. A Markov policy is called \textit{stationary}, if all decisions depend only on current states.
For MDPs, Feinberg et al.~\cite[Theorem 2]{FKN} provides general conditions for the existence of optimal policies, validity of optimality equations, and convergence of value iterations. Here we formulate these conditions for an MDP whose action sets in all states are equal, and then Theorem~\ref{teor4.3} adapts Feinberg et al.~\cite[Theorem 2]{FKN} to POMDPs.
\noindent\textbf{Assumption (${\rm \bf W^*}$)} (cf. Feinberg et al.~\cite{FKZ} and Lemma 2.5 in \cite{FKN}). (i) the function $c$ is $\mathbb{K}$-inf-compact on $\mathbb{X}\times\mathbb{A}$;
(ii) the transition probability $P(\,\cdot\,|x,a)$ is weakly continuous in $(x,a)\in \mathbb{X}\times\mathbb{A}$.
For the COMDP, Assumption \textbf{(${\rm \bf W^*}$)}
has the following form: (i) $\bar{c}$ is $\mathbb{K}$-inf-compact on $\mathbb{P}(\mathbb{X})\times\mathbb{A}$;
(ii) the transition probability $q(\,\cdot\,|z,a)$ is weakly continuous in $(z,a)\in \mathbb{P}(\mathbb{X})\times\mathbb{A}$.
In the following theorem, the notation $\bar v$ is used for the expected total costs for COMDPs instead of the symbol $V$ used for POMDPs. The following theorem follows directly from Feinberg et al. \cite[Theorem~2]{FKNMOR} applied to the COMDP $(\mathbb{P}(\mathbb{X}),\mathbb{A},q,\bar{c})$.
\begin{theorem}{\rm (Feinberg et al. \cite[Theorem~3.1]{FKZ}).}
\label{teor4.3} Let either Assumption {\rm{\bf({\bf D})}} or Assumption {\rm\bf({\bf P})} hold. If the COMDP $(\mathbb{P}(\mathbb{X}),\mathbb{A},q,\bar{c})$ satisfies {\rm Assumption \textbf{(${\rm \bf W^*}$)}}, then:
{(i}) the functions ${\bar v}_{t,\alpha}$, $t=0,1,\ldots$, and ${\bar v}_\alpha$ are lower semi-continuous on $\mathbb{P}(\mathbb{X})$, and ${\bar v}_{t,\alpha}(z)\to
{\bar v}_\alpha (z)$ as $t \to \infty$ for all $z\in \mathbb{P}(\mathbb{X});$
{(ii)} for each $z\in \mathbb{P}(\mathbb{X})$ and $t=0,1,...,$ \begin{equation}\label{eq433} \begin{aligned} &\qquad\qquad{\bar v}_{t+1,\alpha}(z)=\min\limits_{a\in
\mathbb{A}}\left\{\bar{c}(z,a)+\alpha \int_{\mathbb{P}(\mathbb{X})} {\bar v}_{t,\alpha}(z')q(dz'|z,a)\right\}=
\\ &\min\limits_{a\in \mathbb{A}}\left\{\int_{\mathbb{X}}c(x,a)z(dx) +\alpha \int_{\mathbb{X}}\int_{\mathbb{X}}\int_{\mathbb{Y}} {\bar v}_{t,\alpha}(H(z,a,y)) Q(dy|a,x')P(dx'|x,a)z(dx) \right\}, \end{aligned} \end{equation} where ${\bar v}_{0,\alpha}(z)=0$ for all $z\in \mathbb{P}(\mathbb{X})$, and the nonempty sets \[
A_{t,\alpha}(z):=\left\{a\in \mathbb{A}:\,{\bar v}_{t+1,\alpha}(z)=\bar{c}(z,a)+\alpha \int_{\mathbb{P}(\mathbb{X})} {\bar v}_{t,\alpha}(z')q(dz'|z,a) \right\},\quad z\in \mathbb{P}(\mathbb{X}),\ t=0,1,\ldots, \] satisfy the following properties: (a) the graph ${\rm Gr}(A_{t,\alpha})=\{(z,a):\, z\in\mathbb{P}(\mathbb{X}), a\in A_{t,\alpha}(z)\}$, $t=0,1,\ldots,$ is a Borel subset of $\mathbb{P}(\mathbb{X})\times \mathbb{A}$, and (b) if ${\bar v}_{t+1,\alpha}(z)=+\infty$, then $A_{t,\alpha}(z)=\mathbb{A}$ and, if ${\bar v}_{t+1,\alpha}(z)<+\infty$, then $A_{t,\alpha}(z)$ is compact;
{(iii)} for each $T=1,2,\ldots$, for the COMDP there exists an optimal Markov $T$-horizon policy $(\phi_0,\ldots,\phi_{T-1})$, and if for a $T$-horizon Markov policy $(\phi_0,\ldots,\phi_{T-1})$ the inclusions $\phi_{T-1-t}(z)\in A_{t,\alpha}(z)$, $z\in\mathbb{P}(\mathbb{X}),$ $t=0,\ldots,T-1,$ hold, then this policy is $T$-horizon optimal;
{(iv)} for each $z\in \mathbb{P}(\mathbb{X})$ \begin{equation}\label{eq5a} \begin{aligned}
&\qquad\qquad{\bar v}_{\alpha}(z)=\min\limits_{a\in
\mathbb{A}}\left\{\bar{c}(z,a)+\alpha\int_{\mathbb{P}(\mathbb{X})} {\bar v}_{\alpha}(z')q(dz'|z,a)\right\}=\\ &\min\limits_{a\in \mathbb{A}}\left\{\int_{\mathbb{X}}c(x,a)z(dx) +\alpha \int_{\mathbb{X}}\int_{\mathbb{X}}\int_{\mathbb{Y}} {\bar v}_{\alpha}(H(z,a,y))
Q(dy|a,x')P(dx'|x,a)z(dx) \right\},\ \end{aligned} \end{equation} and the nonempty sets \[
A_{\alpha}(z):=\left\{a\in \mathbb{A}:\,{\bar v}_{\alpha}(z)=\bar{c}(z,a)+\alpha\int_{\mathbb{P}(\mathbb{X})} {\bar v}_{\alpha}(z')q(dz'|z,a) \right\},\quad z\in \mathbb{P}(\mathbb{X}), \] satisfy the following properties: (a) the graph ${\rm Gr}(A_{\alpha})=\{(z,a):\, z\in\mathbb{P}(\mathbb{X}), a\in A_{\alpha}(z)\}$ is a Borel subset of $\mathbb{P}(\mathbb{X})\times \mathbb{A}$, and (b) if ${\bar v}_{\alpha}(z)=+\infty$, then $A_{\alpha}(z)=\mathbb{A}$ and, if ${\bar v}_{\alpha}(z)<+\infty$, then $A_{\alpha}(z)$ is compact.
{(v)} for an infinite horizon problem there exists a stationary discount-optimal policy $\phi_\alpha$ for the COMDP, and a stationary policy $\phi_\alpha^{*}$ for the COMDP is optimal if and only if $\phi_\alpha^{*}(z)\in A_\alpha(z)$ for all $z\in \mathbb{P}(\mathbb{X}).$
{(vi)} if $\bar{c}$ is inf-compact on $\mathbb{P}(\mathbb{X})\times\mathbb{A}$, then the functions ${\bar v}_{t,\alpha}$, $t=1,2,\ldots$, and ${\bar v}_\alpha$ are inf-compact on $\mathbb{P}(\mathbb{X})$. \end{theorem}
Theorem~\ref{teor4.3} establishes the existence of stationary optimal policies, validity of optimality equations, and convergence of value iterations to optimal values under the following natural conditions: (i) Assumption ({\bf D}) or ({\bf P}) holds and the function $\bar c$ is $\mathbb{K}$-inf-compact, and (ii) the stochastic kernel $q$ on $\mathbb{P}(\mathbb{X})$ given $\mathbb{P}(\mathbb{X})\times \mathbb{A}$ is weakly continuous. Theorems~\ref{th:wstar} and \ref{t:totalvar} provide sufficient conditions for (i) and (ii), respectively, in terms of the properties of the cost function $c$ and the stochastic kernels $P$ and $Q$.
\begin{theorem}\label{th:wstar} {\rm (Feinberg et al. \cite[Theorem~3.4]{FKZ}).}
If the stochastic kernel $P(dx'|x,a)$ on $\mathbb{X}$ given $\mathbb{X}\times\mathbb{A}$ is weakly continuous and the cost function $c:\mathbb{X}\times\mathbb{A}\to {\bar \mathbb{R}}^1$ is bounded below and $\mathbb{K}$-inf-compact on $\mathbb{X}\times\mathbb{A}$, then the cost function $\bar{c}:\mathbb{P}(\mathbb{X})\times\mathbb{A}\to{\bar \mathbb{R}}^1$ defined for the COMDP in (\ref{eq:c}) is bounded from below by the same constant as $c$ and $\mathbb{K}$-inf-compact on $\mathbb{P}(\mathbb{X})\times\mathbb{A}$. \end{theorem}
\begin{theorem}\label{t:totalvar} {\rm (Feinberg et al. \cite[Theorem~3.7]{FKZ}).}
The weak continuity of the stochastic kernel $P(dx'|x,a)$ on $\mathbb{X}$
given $\mathbb{X}\times\mathbb{A}$ and continuity in the total variation of the stochastic kernel $Q(dy|a,x)$ on $\mathbb{Y}$ given $\mathbb{A}\times\mathbb{X}$ imply that the stochastic kernel $q(dz'|z,a)$ on $\mathbb{P}(\mathbb{X})$ given $\mathbb{P}(\mathbb{X})\times\mathbb{A}$ is weakly continuous. \end{theorem}
The following assumption, which has similarities with (\ref{result}), and the following theorem are used in Feinberg et al. \cite{FKZ} to prove Theorem~\ref{t:totalvar}.
\noindent\textbf{Assumption {\bf(H)}}. There exists a stochastic kernel $H$ on $\mathbb{X}$ given $\mathbb{P}(\mathbb{X})\times\mathbb{A}\times\mathbb{Y}$ satisfying (\ref{3.4}) such that: if a sequence $\{z^{(n)}\}_{n=1,2,\ldots}\subseteq\mathbb{P}(\mathbb{X})$ converges weakly to $z\in\mathbb{P}(\mathbb{X})$, and a sequence $\{a^{(n)}\}_{n=1,2,\ldots}\subseteq\mathbb{A}$ converges to $a\in\mathbb{A}$ as $n\to\infty$, then there exists a subsequence $\{(z^{(n_k)},a^{(n_k)})\}_{k=1,2,\ldots}\subseteq \{(z^{(n)},a^{(n)})\}_{n=1,2,\ldots}$ and a measurable subset $C$ of
$\mathbb{Y}$ such that $R'(C|z,a)=1$ and for all $y\in C$ \begin{equation}\label{eq:ASSNH} H(z^{(n_k)},a^{(n_k)},y)\mbox{ converges weakly to }H(z,a,y).
\end{equation}
In other words, \eqref{eq:ASSNH} holds $R'(\,\cdot\,|z,a)$-almost surely.
According to the following theorem, if the stochastic kernel $R'$ is setwise continuous and Assumption~{\bf(H)} holds, then the stochastic kernel $q$ is weakly continuous. According to Feinberg et al.~\cite[Theorem 3.7]{FKZ}, weak continuity of the stochastic kernel $P$ and continuity of the observation stochastic kernel $Q$ in the total variation imply that the stochastic kernel $R'$ is setwise continuous and Assumption~{\bf(H)} holds. Another sufficient condition for weak continuity of $q$ is that there is a weakly continuous version of a stochastic kernel $H$ on $\mathbb{X}$ given $\mathbb{P}(\mathbb{X})\times\mathbb{A}\times\mathbb{Y}$; see Striebel~\cite{St} and Hern\'andez-Lerma~\cite{HL}. However, this condition may not hold for a POMDP with a weakly continuous stochastic kernel $P$ and an observation stochastic kernel $Q$ continuous in the total variation; see Feinberg et al.~\cite[Example 4.2]{FKZ}.
\begin{theorem}\label{th:contqqq2} {\rm (Feinberg et al. \cite[Theorem~3.5]{FKZ}).} If the stochastic kernel $R'(dy|z,a)$ on $\mathbb{Y}$ given $\mathbb{P}(\mathbb{X})\times\mathbb{A}$ is setwise continuous and Assumption~{\bf(H)}
holds, then the stochastic kernel $q(dz'|z,a)$ on $\mathbb{P}(\mathbb{X})$ given $\mathbb{P}(\mathbb{X})\times\mathbb{A}$ is weakly continuous. \end{theorem}
In addition to Theorem~\ref{t:totalvar}, that provides the sufficient condition of weak continuity of a stochastic kernel $q$ in terms of transition and observation probabilities $P$ and $Q,$ and to Theorem~\ref{th:contqqq2}, that provides the sufficient condition of weak continuity of a stochastic kernel $q$ in terms of stochastic kernels $R'$ and $H,$ a sufficient condition can be formulated in terms of the stochastic kernel $R$ on $\mathbb{X}\times\mathbb{Y}$ given ${\mathbb{P}}(\mathbb{X})\times \mathbb{A}$, defined in (\ref{3.3}). For each $B\in\tau(\mathbb{X})$ consider the family of functions
\[\mathcal{R}_B=\{(z,a)\to R(B\times C|z,a):\, C\in \tau(\mathbb{Y})\}\] mapping ${\mathbb{P}}(\mathbb{X})\times \mathbb{A}$ into $[0,1]$. \begin{theorem}\label{teor:Rtotvar} Let the topology on $\mathbb{X}$ have a countable base $\tau_b^\mathbb{X}$ with the following two properties: \begin{itemize} \item[(a)]$\mathbb{X}\in\tau_b^\mathbb{X}$, \item[(b)] for each finite intersection $\mathcal{O}=\cap_{i=1}^{k} {\mathcal{O}}_{i}$ of sets $\mathcal{O}_{i}\in\tau_b^\mathbb{X},$ $i=1,2,\ldots,k,$ the family of functions $\mathcal{R}_\mathcal{O}$ is equicontinuous at all the points $(z,a)\in \mathbb{P}(\mathbb{X})\times\mathbb{A}$. \end{itemize} Then the following two statements hold: \begin{itemize}
\item[(i)] the stochastic kernel $R'(dy|z,a)$ on $\mathbb{Y}$ given $\mathbb{P}(\mathbb{X})\times\mathbb{A}$ is continuous in the total variation, and Assumption~{\bf(H)} holds;
\item[(ii)] the stochastic kernel
$q(dz'|z,a)$ on $\mathbb{P}(\mathbb{X})$ given $\mathbb{P}(\mathbb{X})\times\mathbb{A}$ is weakly continuous.\end{itemize} \end{theorem} \begin{proof} (i) The equicontinuity at all the points $(z,a) \in \mathbb{P}(\mathbb{X}) \times \mathbb{A}$ of the family of functions $\mathcal{R}_{\mathcal{O}}$ defined on $\mathbb{P}(\mathbb{X}) \times \mathbb{A}$, being applied to $\mathcal{O} = \mathbb{X},$ implies that the stochastic kernel $R'$ on $\mathbb{Y}$ given $\mathbb{P}(\mathbb{X})\times\mathbb{A}$ is continuous in the total variation. Theorem~\ref{mainthkern}, being applied to the Borel subsets of Polish spaces $\mathbb{S}_1=\mathbb{X},$ $\mathbb{S}_2=\mathbb{Y},$ and $\mathbb{S}_3=\mathbb{P}(\mathbb{X})\times\mathbb{A},$ yields that Assumption~({\bf H}) holds. (ii) Since the continuity of $R'$ in the total variation implies its setwise continuity, the statement follows from statement (i) and Theorem~\ref{th:contqqq2}. \end{proof}
The following theorem completes the description of the relations between the assumptions of Theorems~\ref{t:totalvar}--\ref{teor:Rtotvar}. Among these three groups of assumptions, the assumptions of Theorem~\ref{th:contqqq2} are the most general, and they follow from the assumptions of Theorem~\ref{teor:Rtotvar}, which in their turn follow from the assumptions of Theorem~\ref{t:totalvar}.
\begin{theorem}\label{t:tvimplr} If
the stochastic kernel $P(dx'|x,a)$ on $\mathbb{X}$
given $\mathbb{X}\times\mathbb{A}$ is weakly continuous and the stochastic kernel $Q(dy|a,x)$ on $\mathbb{Y}$ given $\mathbb{A}\times\mathbb{X}$ is continuous in the total variation, then the assumptions of Theorem~\ref{teor:Rtotvar} hold. \end{theorem} \begin{proof} In view of Feinberg et al.~\cite[Lemma 5.3]{FKZ}, the family of functions $\mathcal{R}_{\mathcal{O}_1\setminus\mathcal{O}_2}$ is equicontinuous for any two open subsets $\mathcal{O}_1$ and $\mathcal{O}_2$ of $\mathbb{X}.$ By setting $\mathcal{O}_2=\emptyset,$ this result implies that the family of functions $\mathcal{R}_{\mathcal{O}}$ is equicontinuous for each open subset $\mathcal{O}$ of $\mathbb{X}.$ Since $\mathbb{X}$ is endowed with the topology induced from a separable metric space, this topology has a countable base that is closed under finite intersections; adding $\mathbb{X}$ to this base if necessary, we obtain a countable base that also contains $\mathbb{X}$.
Therefore, this countable base of the topology on $\mathbb{X}$ satisfies the assumptions of Theorem~\ref{teor:Rtotvar}. \end{proof}
Observe that Theorem~\ref{t:totalvar} follows from Theorems~\ref{teor:Rtotvar} and~\ref{t:tvimplr}. The following theorem provides sufficient conditions for the existence of optimal policies for the COMDP. Its first statement is Theorem~\ref{t:totalvar}, which is repeated for completeness of the statements.
\begin{theorem}\label{main} {\rm (Feinberg et al. \cite[Theorem~3.6]{FKZ}).} Let either Assumption {\rm\bf({\bf D})} or Assumption {\rm\bf({\bf P})} hold. If the function $c$ is $\mathbb{K}$-inf-compact on $\mathbb{X}\times\mathbb{A}$ then each of the following conditions: \begin{itemize}
\item[(i)] the stochastic kernel $P(dx'|x,a)$ on $\mathbb{X}$ given $\mathbb{X}\times\mathbb{A}$ is weakly continuous, and the stochastic kernel
$Q(dy|a,x)$ on $\mathbb{Y}$ given $\mathbb{A}\times\mathbb{X}$ is continuous in the total variation; \item[(ii)] the assumptions of Theorem~\ref{teor:Rtotvar} hold;
\item[(iii)] the stochastic kernel $R'(dy|z,a)$ on $\mathbb{Y}$ given $\mathbb{P}(\mathbb{X})\times\mathbb{A}$ is setwise continuous and Assumption~{\bf(H)} holds, \end{itemize} implies that the COMDP $(\mathbb{P}(\mathbb{X}),\mathbb{A},q,\bar{c})$ satisfies {Assumption {\rm\bf(${\rm \bf W^*}$)}}, and therefore statements (i)--(vi) of Theorem~\ref{teor4.3} hold. \end{theorem} \begin{proof} Theorem~\ref{th:wstar} implies that the cost function $\bar{ c}$ for the COMDP is bounded below and $\mathbb{K}$-inf-compact on $\mathbb{P}(\mathbb{X})\times \mathbb{A}$. Weak continuity of the stochastic kernel $q$ on $\mathbb{P}(\mathbb{X})$ given $\mathbb{P}(\mathbb{X})\times \mathbb{A}$ follows from Theorems~\ref{t:totalvar}--\ref{teor:Rtotvar}. \end{proof}
Example~4.1 from Feinberg et al. \cite{FKZ} demonstrates that, if the stochastic kernel $Q(dy|a,x)$ on $\mathbb{Y}$ given $\mathbb{A}\times\mathbb{X}$ is setwise continuous, then the transition probability $q$ for the COMDP may not be weakly continuous in $(z,a)\in\mathbb{P}(\mathbb{X})\times\mathbb{A}$. In that example the state set consists of two points. Therefore, if the stochastic kernel $P(dx'|x,a)$ on $\mathbb{X}$ given $\mathbb{X}\times\mathbb{A}$ is setwise continuous (even if it is continuous in the total variation) in $(x,a)\in\mathbb{X}\times\mathbb{A}$ then the setwise continuity of the stochastic kernel $Q(dy|a,x)$ on $\mathbb{Y}$ given $\mathbb{A}\times\mathbb{X}$ is not sufficient for the weak continuity of $q$.
\section{Markov Decision Models with Incomplete Information}\label{S5} Consider a Markov decision model with incomplete information (MDMII); Dynkin and Yushkevich~\cite[Chapter 8]{DY}, Rhenius~\cite{Rh}, Yushkevich~\cite{Yu} (see also Rieder \cite{Ri} and B\"auerle and Rieder~\cite{BR} for a version of this model with transition probabilities having densities). This model is defined by an \textit{observed state space} $\mathbb{Y}$, an \textit{unobserved state space} $\mathbb{W}$, an \textit{action space} $\mathbb{A}$, nonempty \textit{sets of available actions} $A(y),$ where $y\in\mathbb{Y}$, a stochastic kernel $P$ on $\mathbb{Y}\times\mathbb{W}$ given $\mathbb{Y}\times\mathbb{W}\times\mathbb{A}$, and a one-step cost function $c:\, G\to {\bar \mathbb{R}}^1,$ where $G=\{(y,w,a)\in \mathbb{Y}\times\mathbb{W}\times\mathbb{A}:\, a\in A(y)\}$ is the graph of the mapping $A(y,w)=A(y),$ $(y,w)\in \mathbb{Y}\times\mathbb{W}.$ Assume that:
(i) $\mathbb{Y}$, $\mathbb{W}$ and $\mathbb{A}$ are Borel subsets of Polish spaces. For all $y\in \mathbb{Y}$ a nonempty Borel subset $A(y)$ of $\mathbb{A}$ represents the \textit{set of actions} available at $y;$
(ii) the graph of the mapping $A:\mathbb{Y}\to 2^\mathbb{A}$, defined as $ {\rm Gr} ({A})=\{(y,a) \, : \, y\in \mathbb{Y}, a\in A(y)\}$ is measurable, that is, ${\rm Gr}(A)\in {\mathcal B}(\mathbb{Y}\times\mathbb{A})$, and this graph allows a measurable selection, that is, there exists a measurable mapping $\phi:\mathbb{Y}\to \mathbb{A}$ such that $\phi(y)\in A(y)$ for all $y\in \mathbb{Y}$;
(iii) the stochastic kernel $P$ on $\mathbb{Y}\times\mathbb{W}$ given $\mathbb{Y}\times\mathbb{W}\times\mathbb{A}$ is weakly continuous in $(y,w,a)\in \mathbb{Y}\times\mathbb{W}\times\mathbb{A}$;
(iv) the one-step cost function $c$ is $\mathbb{K}$-inf-compact on $G$, that is, for each compact set $K\subseteq\mathbb{Y}\times\mathbb{W}$ and for each $\lambda\in \mathbb{R}^1$, the set ${\cal D}_{K,c}(\lambda)=\{(y,w,a)\in G:\, c(y,w,a)\le\lambda\}$ is compact.
Let us define $\mathbb{X}=\mathbb{Y}\times\mathbb{W},$ and for $x=(y,w)\in \mathbb{X}$ let us define $Q(C|x)= {\bf I}\{y\in C\}$ for all $C\in {\cal B}(\mathbb{Y}).$ Observe that this $Q$ corresponds to the continuous function $y=
F(x),$ where $F(y,w)=y$ for all $x=(y,w)\in\mathbb{X}$ (here $F$ is a projection of $\mathbb{X}=\mathbb{Y}\times\mathbb{W}$ on $\mathbb{Y}$). Thus, as explained in Example~4.1 from Feinberg et al. \cite{FKZ}, the stochastic kernel $Q(dy|x)$ is weakly continuous in $x\in\mathbb{X}.$ Then by definition, an MDMII is a POMDP with the state space $\mathbb{X}$, observation set $\mathbb{Y}$, action space $\mathbb{A}$, available action sets $A(y)$, stochastic kernel $P$, observation kernel $Q(dy|a,x):=Q(dy|x)$, and one-step cost function $c$. However, this model differs from our basic definition of a POMDP because action sets $A(y)$ depend on observations and one-step costs $c(x,a)=c(y,w,a)$ are not defined when $a\notin A(y).$ To avoid this difficulty, we set $c(y,w,a)=+\infty$ when $a\notin A(y)$. The extended function $c$ is $\mathbb{K}$-inf-compact on $\mathbb{X}\times\mathbb{A}$ because the set ${\cal D}_{K,c}(\lambda)$ remains unchanged for each $K\subseteq\mathbb{Y}\times\mathbb{W}$ and for each $\lambda\in\mathbb{R}^1.$
Thus, an MDMII is a special case of a POMDP $(\mathbb{X},\mathbb{Y}, \mathbb{A},P,Q,c)$, when $\mathbb{X}=\mathbb{Y}\times\mathbb{W}$ and the observation kernel $Q$ is defined by the projection of $\mathbb{X}$ on $\mathbb{Y}.$ The observation stochastic kernel
$Q(\,\cdot\,|x)$ is weakly continuous in $x\in \mathbb{X}$. This is weaker than the continuity of $Q$ in the total variation that, according to Theorem~\ref{main}, ensures weak continuity of the stochastic kernel for the COMDP and the existence of optimal policies. Indeed, Feinberg et al. \cite[Example 8.1]{FKZ} demonstrates that even under the stronger assumption that $P$ is setwise continuous, the corresponding stochastic kernel $q$ on $\mathbb{P}(\mathbb{X})$ given $\mathbb{P}(\mathbb{X})\times\mathbb{A}$ may not be weakly continuous.
The natural question is: which conditions are sufficient for the existence of optimal policies for the MDMII? Since an MDMII is a particular POMDP, the existence of optimal policies for an MDMII is equivalent to the existence of optimal policies for the COMDP corresponding to this MDMII. Theorem~\ref{teor4.3} gives an answer in a general form by stating that such conditions are the weak continuity of the transition probability $q$ of the corresponding COMDP and the $\mathbb{K}$-inf-compactness of the cost function $\bar c$ for the COMDP. The following theorem provides a sufficient condition for the weak continuity of $q$.
For each open set $\mathcal{O}$ in $\mathbb{W}$ consider the family of functions
$\mathcal{P}^*_\mathcal{O}=\{ (x,a)\to P(C\times\mathcal{O}|x,a):\, C\in \tau(\mathbb{Y})\}$ mapping $\mathbb{X}\times\mathbb{A}$ into $[0,1]$.
\begin{theorem}\label{t:totalvar1} Let the topology on $\mathbb{W}$ have a countable base $\tau_b^\mathbb{W}$ satisfying the following two conditions: \begin{itemize}
\item[(i)] $\mathbb{W}\in\tau_b^\mathbb{W},$
\item[(ii)] for each finite intersection $\mathcal{O}=\cap_{i=1}^ k {\mathcal{O}}_{i}$ of sets $\mathcal{O}_{i}\in\tau_b^\mathbb{W},$ $i=1,2,\ldots,k,$ the family of functions $\mathcal{P}^*_\mathcal{O}$ is equicontinuous at all the points $(x,a)\in \mathbb{X}\times\mathbb{A}$. \end{itemize} Then the stochastic kernel
$q(dz'|z,a)$ on $\mathbb{P}(\mathbb{X})$ given $\mathbb{P}(\mathbb{X})\times\mathbb{A}$ is weakly continuous. \end{theorem}
\begin{proof} Let $\tau_b^\mathbb{Y}$ be a countable base of the topology on $\mathbb{Y}$ that is closed under finite intersections. Such a base exists because $\mathbb{Y}$ is a separable metric space. Since finite intersections of elements of $\tau_b^\mathbb{W}$ are open sets and assumption (ii) covers all such intersections, we can and do assume that $\tau_b^\mathbb{W}$ is closed under finite intersections as well. Then $\tau_b^\mathbb{X}:=\{\mathcal{O}_\mathbb{Y}\times\mathcal{O}_\mathbb{W}:\, \mathcal{O}_\mathbb{Y}\in\tau_b^\mathbb{Y},\, \mathcal{O}_\mathbb{W}\in\tau_b^\mathbb{W} \}$ is a countable base of the product topology on $\mathbb{X} =\mathbb{Y}\times \mathbb{W}$. For any finite tuples of open sets $\{\mathcal{O}_\mathbb{Y}^{(j)}\}_{j=1}^{N}$ in $\mathbb{Y}$ and $\{\mathcal{O}_\mathbb{W}^{(j)}\}_{j=1}^{N}$ in $\mathbb{W}$, $N=1,2,\ldots,$ the finite intersections $\cap_{j=1}^N\mathcal{O}_\mathbb{Y}^{(j)}$ and $\cap_{j=1}^N\mathcal{O}_\mathbb{W}^{(j)}$ are open in $\mathbb{Y}$ and $\mathbb{W}$ respectively. Moreover, $\cap_{j=1}^N \left(\mathcal{O}_\mathbb{Y}^{(j)}\times \mathcal{O}_\mathbb{W}^{(j)}\right)= \left(\cap_{j=1}^N\mathcal{O}_\mathbb{Y}^{(j)}\right)\times \left(\cap_{j=1}^N\mathcal{O}_\mathbb{W}^{(j)}\right)\in\tau_b^\mathbb{X}$ for any finite tuples of open sets $\{\mathcal{O}_\mathbb{Y}^{(j)}\}_{j=1}^{N}$ from $\tau_b^\mathbb{Y}$ and $\{\mathcal{O}_\mathbb{W}^{(j)}\}_{j=1}^{N}$ from $\tau_b^\mathbb{W}$. From (\ref{3.3}) it follows that \[
R(C_1\times B\times C_2|z,a)=\int_{\mathbb{X}}P((C_1\cap C_2)\times B|x,a)z(dx),\quad\ B\in \mathcal{B}(\mathbb{W}),\ C_1,C_2\in \mathcal{B}(\mathbb{Y}),\ z\in\mathbb{P}(\mathbb{X}),\ a\in \mathbb{A}, \] \[
R'(C|z,a)=\int_{\mathbb{X}}P(C\times \mathbb{W}|x,a)z(dx),\qquad
C\in \mathcal{B}(\mathbb{Y}),\ z\in\mathbb{P}(\mathbb{X}),\ a\in \mathbb{A}.\] For any nonempty open sets $\mathcal{O}_\mathbb{Y}\in \tau_b^\mathbb{Y}$ and $\mathcal{O}_\mathbb{W}\in \tau_b^\mathbb{W}$ respectively, Theorem~\ref{kern}, with $\mathbb{S}_1 = \mathbb{P}(\mathbb{X})$, $\mathbb{S}_2 = \mathbb{X}$,
$\mathbb{S}_3 = \mathbb{A}$, $\mathcal{O} = \mathbb{X}$, $\Psi(B | z) = z(B)$, and $\mathcal{A}_0 =
\{(x,a) \to P((\mathcal{O}_\mathbb{Y}\cap C) \times \mathcal{O}_\mathbb{W}|x,a): C \in \tau(\mathbb{Y})\}$, implies the equicontinuity of the family of functions \[ \mathcal{R}_{\mathcal{O}_\mathbb{Y}\times\mathcal{O}_\mathbb{W}}=\left\{(z,a)\to R(\mathcal{O}_\mathbb{Y}\times
\mathcal{O}_\mathbb{W}\times C|z,a)\,:\, C\in\tau(\mathbb{Y})\right\}, \] defined on $\mathbb{P}(\mathbb{X}) \times \mathbb{A}$, at all the points $(z,a) \in \mathbb{P}(\mathbb{X}) \times \mathbb{A}$.
Therefore, Theorem~\ref{teor:Rtotvar}(ii) yields that
the stochastic kernel $q(dz'|z,a)$ on $\mathbb{P}(\mathbb{X})$ given $\mathbb{P}(\mathbb{X})\times\mathbb{A}$ is weakly continuous. \end{proof}
Assumptions of Theorem~\ref{t:totalvar1} are weaker than equicontinuity at all the points $(x,a)\in \mathbb{X}\times\mathbb{A}$ of the family of functions $\mathcal{P}_\mathcal{O}$ for all open sets $\mathcal{O}$ in $\mathbb{W}$ (see Example~\ref{exa:MDM} above), which in its turn is a weaker assumption than the continuity of the stochastic kernel $P$ on $\mathbb{X}$ given $\mathbb{X}\times\mathbb{A}$ in the total variation.
The following theorem states sufficient conditions for the existence of optimal policies for MDMIIs, the validity of optimality equations, and convergence of value iterations to optimal values. Theorem~\ref{teor:Ren} generalizes \cite[Theorem 8.2]{FKZ}, where the equicontinuity at all the points $(x,a)\in \mathbb{X}\times\mathbb{A}$ of the family of functions $\mathcal{P}^*_\mathcal{O}$ for all open sets $\mathcal{O}$ in $\mathbb{W}$ is assumed.
\begin{theorem}\label{teor:Ren} Let either Assumption~{\rm\bf(D)} or Assumption~{\rm\bf(P)} hold, and let the cost function $c$ be $\mathbb{K}$-inf-compact on $G$.
If the topology on $\mathbb{W}$ has a countable base $\tau_b^\mathbb{W}$ satisfying assumptions (i) and (ii) of Theorem~\ref{t:totalvar1},
then the COMDP $(\mathbb{P}(\mathbb{X}),\mathbb{A},q,\bar{c})$ satisfies {\rm Assumption \textbf{(${\rm \bf W^*}$)}}, and therefore the conclusions of Theorem~\ref{teor4.3} hold. \end{theorem} \begin{proof} Assumption \textbf{(${\rm \bf W^*}$)}(i) follows from Corollary~\ref{teor:2} and Theorem~\ref{th:wstar}. Assumption \textbf{(${\rm \bf W^*}$)}(ii) follows from Theorem~\ref{t:totalvar1}. Therefore, the COMDP $(\mathbb{P}(\mathbb{X}),\mathbb{A},q,\bar{c})$ satisfies {\rm Assumption \textbf{(${\rm \bf W^*}$)}} and the conclusions of Theorem~\ref{teor4.3} hold. \end{proof}
{\bf Acknowledgements.} The authors thank M. Mandava for providing useful remarks. The research of the first author was partially supported by NSF grant CMMI-1335296.
\end{document} | arXiv |
How many non-congruent right triangles are there, all of whose sides have positive integer lengths, and one of whose legs (i.e. not the hypotenuse) has length $162$?
Let $x$ be the length of the hypotenuse, and let $y$ be the length of the other leg. Then we have $x^2-y^2=162^2$. Factoring both sides gives $(x+y)(x-y)=(2\times3^4)^2=2^2\times3^8$. A pair of positive integers $(x,y)$ gives a solution to this equation if and only if $(x+y)$ and $(x-y)$ are factors whose product is $2^2\times3^8$. For positive integers $a$ and $b$, the equations $x+y=a$ and $x-y=b$ have positive integer solutions if and only if $a-b$ is an even positive integer. Thus if $ab=2^2\times3^8$ and the difference between $a$ and $b$ is even, then we get a valid triangle with $x+y=a$ and $x-y=b$. Since $ab$ is even, at least one of the factors is even, and since their difference is even, the other must be as well. Since $x+y>x-y$ we have $a>b$, i.e., $a>2\times3^4.$ Since the prime factorization of $a$ must have exactly one $2$, the choices for $a$ that give valid triangles are $2\times3^5,2\times3^6,2\times3^7,2\times3^8.$ Thus there are $\boxed{4}$ valid triangles.
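As a quick sanity check (not part of the solution above), a brute-force search over the other leg confirms the count; the script below is only illustrative.

```python
import math

# Count right triangles with positive integer sides in which one leg equals 162.
leg = 162
count = 0
for other in range(1, leg * leg):      # the other leg; any solution has y < 162^2 / 2
    hyp_sq = leg * leg + other * other
    hyp = math.isqrt(hyp_sq)
    if hyp * hyp == hyp_sq:
        count += 1
print(count)  # prints 4
```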
Concept for Donor Coordination
Published: 2016-01-23 Tags: Coordination, Computer science, Master's thesis, Effective altruism
This is a proposal for a donor coordination system that aims to empower donors to harness the risk neutrality that stems from their combined work toward agent-neutral goals.
Dated Content
I tend to update articles only when I remember their content and realize that I want to change something about it. But I rarely remember it well enough once about two years have passed. Such articles are therefore likely to contain some statements that I no longer espouse or would today frame differently.
Donor Coordination
Irrational Risk-Aversion
Competition for Exposure
High-Level Goals
Functional Requirements
Challenges and Proposed Solutions
Funding Gaps
Donation Swaps
Moral Lies
This proposal is meant to encourage comments on its content as well as comments along the lines of "I would use this," because without many of those it will not seem like a worthwhile undertaking to implement it.
One problem that GiveWell has struggled with emerges when two donors are not fully value-aligned but can agree on wanting to fund one of GiveWell's top charities. The result is that they wait each other out, a deadlock, each wanting the other to fund the GiveWell charity, because each values the other's counterfactual use of the donation less than their own. GiveWell is regularly refining its response to this problem.
For this problem to become relevant, there need to be at least two large donors or monolithic groups of donors, where large means that their planned donations are close to – for example within the same order of magnitude of – the funding gap of the charities in question. This is a good problem to have.
More commonly, however, the funding gap is large compared to the potential individual donations (where individual is meant to exclude the aforementioned monolithic groups of donors), so that the above problem becomes an edge case while centrally we face a different problem. Donors that focus their contributions on charities that have a significant evidence base and track record for impact – a large part of the "GiveWell market" – are often accused of being too focused on just these established charities thereby missing small high-impact opportunities from nonprofit startups or projects that will stay small or short-lived by design.
The distinction is similar to that between, on the one hand, passive investors that buy exchange-traded funds (ETFs) of, for example, the top 30 (DAX) or 500+ (S&P 500) companies in order to hold them, and, on the other hand, business angels or venture capitalists that invest into startups. The first group has excellent information to make relatively low risk–low return investments; the second group has to rely on rough heuristics, such as their faith into the founders, to make high risk–high return investments – of which they need to be able to make many in order to profit at least fairly reliably.
But a profit motive is an agent-relative goal. Investors (such as donors) with agent-neutral goals that are shared by at least a few others have much better opportunities for cooperation. These have largely not been tapped into. While Net Analytics is clearly focused on the low risk–low return market, this high risk–high return market also calls for a software solution to its coordination problem.
The central motivating problems are the following:
Drops in the marginal utility of a resource suggest risk aversion. In that context it is rational to prefer a low return with high probability to a high return with low probability at the same expected value. In the context of altruistic interventions,1 the utility of marginal donations only noticeably decreases when the amounts reach the order of millions of dollars, the size of some of the GiveWell top charities' funding gaps.
At the same time there are many donors that see a high likelihood that effective interventions are possible in a certain cause area. Unfortunately, these interventions are, by necessity, more speculative than, for example, the interventions GiveWell prioritizes. Yet there are charity startups implementing them.
The funding gaps of these charities tend to be too small for any respectable prioritization organization, like GiveWell, to warrant investing staff time into evaluating them, so donors are left to their own devices.2
When donors consider these charities, they are usually still optimistic that donating to them does yield superior impact, but they have a much harder time prioritizing between them because their central metric just remains how well they implement very similar interventions. It is quite possible that the differences between these charities – charities that many impact-oriented donors are actively considering – are small enough that the value of the information would not warrant its cost.
Unfortunately, some of these donors fall into a form of analysis paralysis at this point and instead donate to the charities whose lower impact is well proven. Other donors react more rationally and donate rather arbitrarily within the group of the most highly effective charities. Still others use questionable heuristics, often aware that they are likely to be unreliable but also aware of the presumably low value of information of more thorough investigations. I aver that none of these strategies is optimal.
The other side of the coin is that charities are aware of these dynamics. While their values may be aligned, each of them still depends on its own pool of donors for funding, and any cross-promotion of another charity among the first charity's own donor base may lead to donors shifting their support to the endorsed organization. This dynamic stifles cooperation.
The solution presented here will instead allow all charities in a program area to fill their funding gaps to similar degrees. If a sufficient number of donors come to accept this solution, any incentive for charities to engage in uncooperative behavior will be diminished.
Donor Coordination (working title) is a software system and strategy that fosters cooperation between value-aligned donors by allowing them to make large contributions in teams and donate to whole program areas rather than individual nonprofits. It can improve upon the current state if it is accepted and trusted by a sufficient number of donors.
The donor coordination solution can be considered successful when it achieves the following goals:
Team-level atomicity
Donors can choose portfolios with whom they are value-aligned to the point that they perceive their donations as coming from the team of donors that invests into that portfolio rather than them personally.
Program-level atomicity
Donors can choose charity portfolios that, as a whole, represent their moral preferences well enough that they perceive their team as donating to a program area rather than an individual charity.
Charities are value-aligned with the organizations their donations are fungible with, to the point that they make fully altruistic statements about their funding gaps.
The donor coordination solution should likely take the shape of a web application to enable users of any platform to use it. The idea is roughly inspired by Wikifolio.
A visitor is an unauthenticated person viewing the website.
A user is a donor, a charity, or an administrator.
The portfolio is an allocation rule that partitions funds among a set of charities. Every user can create portfolios, favorite or watch portfolios, and donate to portfolios.
The donor is a user other than a charity.
A charity is a user that only has the ability to enter some meta data about itself and its funding gap, and participate in discussions.
The interests of beneficiaries are:
Beneficiaries want to maximize the available funding toward their preferences.
Beneficiaries want the, at the margin, most effective interventions to receive maximal funding.
Beneficiaries want the funding gaps of the most effective interventions to be greater than or equal to the available funding.
In some cases the beneficiaries can give direct input, but in many cases their interests need to be represented by donors and charities because they have insufficient levels of intelligence to express them efficiently or are not yet born.
Hence the interests of donors are:
Donors want to maximize the available funding toward their moral goals.
Donors want the, at the margin, most effective interventions realizing these moral goals to receive maximal funding.
Donors want the funding gaps of the most effective interventions realizing these moral goals to be greater than or equal to the available funding.
Hence the interests of the charities are:
Charities want to maximize the available funding toward the charity's moral goals.
Charities want the, at the margin, most effective interventions realizing these moral goals to receive maximal funding.
Charities want the funding gaps of the most effective interventions realizing these moral goals to be greater than or equal to the available funding.
The main difference between donors and charities as two groups is the direction of the money flow. The main difference between the donors and charities internally is their different moral goal makeup.
From these primary interests follow proximate interests for value-aligned teams of donors (all donors to a program area as defined by a public portfolio):
Being value aligned, the members of a team are happy to make their donations fungible with the donations of all other members of the team.
Since their value alignment with other teams varies, there may be teams with partially opposing moral goals. Teams will want to minimize fungibility with such teams.
Since the funding gaps of charities are limited, teams also want to increase the funding gap of their program area by broadening its scope.
Analogously for charities:
The charities of popular portfolios are likely to be highly value aligned and thus happy to calculate their funding gaps cooperatively.
Since their value alignment with charities of other program areas varies, there may be portfolios of charities with partially opposing moral goals. Charities will want to increase their scale in order to be able to enter greater funding gaps so portfolio authors can minimize fungibility with such opposing program areas.
Clearly, the last two interests of the donor teams are in conflict. Small donation flows will favor portfolios of small, pure clusters of charities while greater donation flows will necessitate compromise in order to form greater, less pure clusters with larger funding gaps.
For simplicity I assume that all donors are perfectly informed and their only differences are differences of value alignment. This is unlikely to be the case in practice, but the only difference between a donor that is not value aligned and a donor that acts as if they were not value aligned because of lacking information is that the latter can be educated.
This educational mission is outside the purview of Donor Coordination, but the software should provide the platform that donors will need to educate each other, because this may be important for fostering user activity.
Visitors can create donor accounts.
Administrators can create administrator accounts.
Administrators can create charity accounts.
Visitors can view public portfolios including their descriptive statistics.
Donors can add public portfolios to their watch list.
Donors can donate to public portfolios.
Donors can author public portfolios.
Donors can draft and test portfolios in a private or draft state.
Donors can comment on portfolios.
Charities can enter new funding gaps for themselves.
Charities can enter new system-external donation flows.
Administrators have all privileges.
At some point moderator accounts will become necessary, so that people who do not enjoy the same level of trust as administrators can still contribute to the maintenance of the community.
The functional requirements mention descriptive statistics. These are important for portfolio authors and other donors to decide how to structure a portfolio so as not to duplicate very similar ones, and which portfolio to donate to. At least two metrics are required:
The sum of the funding gaps of the charities in a portfolio \(P\), \(\operatorname{gap}(P) = \sum\limits_{c}^{P} \operatorname{gap}(c)\).
A ranked list of the portfolios with the highest fungibility but lowest similarity. One idea may be the quotient, \(\operatorname{compromise}(P, P') = \frac{\operatorname{fungibility}(P, P')}{\operatorname{similarity}(P, P')}\), of the following metrics:
\(\operatorname{fungibility}(P, P') = \sum\limits_{c}^{P \cap P'} \operatorname{gap}(c)\)
\(\operatorname{similarity}(P, P') = |\bigcup\limits_{c}^{P} \operatorname{donors}(c) \cap \bigcup\limits_{c}^{P'} \operatorname{donors}(c)|\).
The fungibility and similarity metrics should also be displayed in isolation, particularly as a guide for authors of portfolios of new charities when the combined compromise metric is undefined.
It may also become necessary to take weights into account, and the formulas will surely need to be tweaked further once real data become available.
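To make the intended calculations concrete, here is a minimal sketch of the metrics in Python. The data model (plain dictionaries and sets mapping charities to funding gaps and donor sets) is just an assumption for the sake of illustration, not part of the proposal.

```python
# Illustrative sketch only; the data structures are assumptions, not part of the proposal.
def gap(portfolio, funding_gaps):
    """Sum of the funding gaps of the charities in the portfolio."""
    return sum(funding_gaps[c] for c in portfolio)

def fungibility(p1, p2, funding_gaps):
    """Summed funding gaps of the charities that the two portfolios share."""
    return sum(funding_gaps[c] for c in p1 & p2)

def similarity(p1, p2, donors_of):
    """Number of donors that the two portfolios' charities have in common."""
    donors1 = set().union(*(donors_of[c] for c in p1)) if p1 else set()
    donors2 = set().union(*(donors_of[c] for c in p2)) if p2 else set()
    return len(donors1 & donors2)

def compromise(p1, p2, funding_gaps, donors_of):
    """Fungibility per shared donor; None when the portfolios share no donors."""
    shared_donors = similarity(p1, p2, donors_of)
    if shared_donors == 0:
        return None
    return fungibility(p1, p2, funding_gaps) / shared_donors

# Example with made-up numbers:
funding_gaps = {"A": 50_000, "B": 120_000, "C": 80_000}
donors_of = {"A": {"d1", "d2"}, "B": {"d2", "d3"}, "C": {"d4"}}
p1, p2 = {"A", "B"}, {"B", "C"}
print(gap(p1, funding_gaps))                        # 170000
print(fungibility(p1, p2, funding_gaps))            # 120000
print(similarity(p1, p2, donors_of))                # 2
print(compromise(p1, p2, funding_gaps, donors_of))  # 60000.0
```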
There needs to be a common definition of a funding gap, so that charities have hard, unyielding guidelines as to what figure to enter for a given year.
Prioritization organizations already face a similar problem: Imagine two charities, charity A with the ability to invest $100 million with some baseline effectiveness \(e\) on average and charity B with the ability to invest $10 million with an average effectiveness of \(10e\) within a given year. Further assume that the charities are value aligned to simplify the problem to one dimension of impact.
A commonly used uncertainty discount is 3% p.a. and for simplicity we assume that suffering in the world, absent the interventions, remains constant, so that aggregate suffering increases linearly over time.
A donor that wants to invest $100 million now has the choice to donate it to charity A, knowing that it will be invested in the same year, or to charity B, knowing that $10 million of it will be invested in the same year, $90 million of it will wait on the charity's bank account at an interest rate of maybe 1% for another year, $80 million plus interest will wait for two years, and so on.
Clearly, a definition of funding gaps that only takes into account a charity's ability to invest some amount per year would set very different bars for the marginal impact of the last dollar of that funding gap.
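To make the comparison tangible, here is a small back-of-the-envelope calculation based on the numbers above. The way I operationalize it (a marginal dollar to charity B sits idle at 1% interest for some years before being invested at effectiveness \(10e\), while impact is discounted at 3% per year) is my own assumption and only meant as an illustration.

```python
# Discounted impact of a marginal dollar to charity B that idles for some years,
# versus a dollar to charity A that is invested immediately at effectiveness e = 1.
e = 1.0
for years_idle in (0, 5, 9):
    grown = 1.01 ** years_idle                    # the dollar earns 1% while idle
    impact = grown * 10 * e * 0.97 ** years_idle  # invested at 10e, discounted 3%/year
    print(years_idle, round(impact, 2))
# Prints: "0 10.0", "5 9.03", "9 8.31"; a dollar to charity A yields 1.0 immediately.
```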
Since the donor coordination solution addresses coordination problems that arise when the funding gaps of the individual charities do not warrant the attention of a prioritization organization, we assume that there are no meaningful differences of their relative effectiveness, so we face a simpler version of this problem.
One solution may be to adopt GiveWell's excess assets policy: "We seek to be in a financial position such that our cash flow projections show us having 12 months' worth of unrestricted assets in each of the next 12 months."
Another open question is the allocation of donations within the portfolio. Conceptually, donors donate to program areas, but in practice they will have to transfer their donation to a specific organization. Splitting it up across several organizations would be an unnecessary hassle for the donor, so the algorithm that suggests the specific organization should know some ideal allocation and then recommend a recipient organization such that the actual allocation comes closest to the ideal allocation. It could also take tax deductibility into account as a tie breaker.
The simplest option might be an equitable allocation where the algorithm aims to assign the same level of funding to each charity after taking donations external to the system into account.
Another option may be to prioritize small funding gaps as an additional incentive for charities not to exaggerate their funding gaps in the moral lies scenario. However that would have little effect since the charities in a given program area are value aligned and can thus easily conspire with each other, and it may have the detrimental effect that charities would be incentivized to be tardy with entering new funding gaps.
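As an illustration of the equitable option, a recommendation rule could look roughly like the following sketch. The data model and the tie-breaking are assumptions of mine, not part of the proposal.

```python
# Illustrative sketch; data structures and tie-breaking are assumptions.
def recommend_recipient(portfolio, funding_received, funding_gaps, deductible=frozenset()):
    """Recommend a single recipient for the next donation: among the charities that
    still have room in their funding gap, pick the one with the least funding so far,
    so that repeated donations push the allocation toward equal funding levels.
    Tax deductibility for this donor only breaks ties."""
    open_charities = [c for c in portfolio if funding_received[c] < funding_gaps[c]]
    if not open_charities:
        return None
    return min(open_charities, key=lambda c: (funding_received[c], c not in deductible))

# Example with made-up numbers:
portfolio = {"A", "B", "C"}
funding_received = {"A": 30_000, "B": 5_000, "C": 20_000}   # incl. external donations
funding_gaps = {"A": 50_000, "B": 40_000, "C": 80_000}
print(recommend_recipient(portfolio, funding_received, funding_gaps, {"B"}))  # "B"
```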
Donors often agree on donation swaps where each partner donates to the charity of choice of the other partner in order to harness the tax deduction of the charity in the respective country.
In order to help portfolio authors trade off fungibility against funding gaps, there would need to be a ranking of the other portfolios that the given portfolio is most fungible with. However, portfolios whose audience is very similar are least interesting to portfolio authors, so the ranking should be sorted by something like the fungibility per cardinality of the intersection of the donor sets, and here donation swaps would add noise to the calculation.
It needs to be either clear to the donors that they need to enter the donation of their swap partner as their donation or the software should allow them to mark donations as swaps and enter their partner. The first is probably the better solution for an MVP, but the second may be more foolproof.
When there is a pair of program areas such that the teams of each see the team of the other as an opposing team, but there is some set of charities that they can agree on, and the available funding is close to or greater than the available funding gaps of their program areas without the consensus charities, the intended result is that donors compromise and add charities to their portfolios that increase the funding gap at the cost of greater fungibility.
But charities are of course value aligned with these teams. Hence it will be ethical for them to lie about their funding gaps, inflating them, to drive the opposing donors to fund the more fungible funding gaps. Analogously, the opposing team's charities can also inflate their funding gaps; they even have to lest their cause suffer. When one group defects in such a fashion, the cooperation breaks down. A classical example of the prisoner's dilemma.
In practice, the donor coordination solution will be used mostly or at least at first only by donors that are all fairly value aligned at least to the extend that they value the type of moral plurality that exists among them. Hence this problem may not manifest any time soon.
My impression that such software would be helpful is based on reports from friends, some of whom are donors and some of whom are employees of affected charities. Unless, however, there is a sizable number of prospective users who are interested in the project, charities will not have sufficient faith in the growth of the user base to warrant their time investment.
Apart from surveys among likely prospective users, one central market research tool needs to be a minimal viable product (MVP). Other donors and nonprofit staff have considered opening a group on a social network such as Facebook to bring together all participants in whose actions need coordination. The group would provide a means for communication but would leave any functions beyond that to the participants to be implemented in a manual, ad-hoc fashion. This way it will become clear which processes are in most urgent need of automation. It will also become clear if the community is large enough to sustain a more comprehensive solution like the one proposed here.
An important strategic and marketing problem is the following: Entering funding gaps will only warrant the effort for the charities if they can expect significant donation flows from the donor coordination solution. For donors the donor coordination solution is only interesting when the program areas they want to donate to are well represented by charities working on them.
One solution may be for administrators to regularly poll information on funding gaps from charities and invite them to claim their accounts themselves. That way, the administrators will have added effort during the startup phase, which will be increasingly outsourced to the charities as donors come to accept the system.
To achieve said donor acceptance, it would be helpful if the project were run by a reputable organization with considerable reach, and if the project collected early signups prior to its launch, both in order for it to launch with momentum. Until such an organization has been found, I cannot consider this challenge solved.
Please note that in the following I will use "intervention" and "program" semantically interchangeably conditional on which terms seems more idiomatic to me in the collocational context. ↩
"Respectable," here, is not meant to denigrate any other hypothetical prioritization organizations but rather meant as a handicap, since an organization that is highly respected has to go to great lengths to stress the low quality of its research when it wants to invest staff time proportionate to evaluating interventions with small funding gaps lest donors assume that the results are as reliable as other results the organization puts out. Taking such a risk is rarely warranted for such an organization. ↩
Chaining Retroactive Funders to Borrow Against Unlikely Utopias
Toward Impact Markets
SquigglyPy: Alpha Version of Squiggle for Python
How Might Better Collective Decision-Making Backfire? | CommonCrawl |
Positive and negative predictive values
The positive and negative predictive values (PPV and NPV respectively) are the proportions of positive and negative results in statistics and diagnostic tests that are true positive and true negative results, respectively.[1] The PPV and NPV describe the performance of a diagnostic test or other statistical measure. A high result can be interpreted as indicating the accuracy of such a statistic. The PPV and NPV are not intrinsic to the test (as true positive rate and true negative rate are); they depend also on the prevalence.[2] Both PPV and NPV can be derived using Bayes' theorem.
Although sometimes used synonymously, a positive predictive value generally refers to what is established by control groups, while a post-test probability refers to a probability for an individual. Still, if the individual's pre-test probability of the target condition is the same as the prevalence in the control group used to establish the positive predictive value, the two are numerically equal.
In information retrieval, the PPV statistic is often called the precision.
Definition
Positive predictive value (PPV)
The positive predictive value (PPV), or precision, is defined as
${\text{PPV}}={\frac {\text{Number of true positives}}{{\text{Number of true positives}}+{\text{Number of false positives}}}}={\frac {\text{Number of true positives}}{\text{Number of positive calls}}}$
where a "true positive" is the event that the test makes a positive prediction, and the subject has a positive result under the gold standard, and a "false positive" is the event that the test makes a positive prediction, and the subject has a negative result under the gold standard. The ideal value of the PPV, with a perfect test, is 1 (100%), and the worst possible value would be zero.
The PPV can also be computed from sensitivity, specificity, and the prevalence of the condition:
${\text{PPV}}={\frac {{\text{sensitivity}}\times {\text{prevalence}}}{{\text{sensitivity}}\times {\text{prevalence}}+(1-{\text{specificity}})\times (1-{\text{prevalence}})}}$
cf. Bayes' theorem
The complement of the PPV is the false discovery rate (FDR):
${\text{FDR}}=1-{\text{PPV}}={\frac {\text{Number of false positives}}{{\text{Number of true positives}}+{\text{Number of false positives}}}}={\frac {\text{Number of false positives}}{\text{Number of positive calls}}}$
Negative predictive value (NPV)
The negative predictive value is defined as:
${\text{NPV}}={\frac {\text{Number of true negatives}}{{\text{Number of true negatives}}+{\text{Number of false negatives}}}}={\frac {\text{Number of true negatives}}{\text{Number of negative calls}}}$
where a "true negative" is the event that the test makes a negative prediction, and the subject has a negative result under the gold standard, and a "false negative" is the event that the test makes a negative prediction, and the subject has a positive result under the gold standard. With a perfect test, one which returns no false negatives, the value of the NPV is 1 (100%), and with a test which returns no true negatives the NPV value is zero.
The NPV can also be computed from sensitivity, specificity, and prevalence:
${\text{NPV}}={\frac {{\text{specificity}}\times (1-{\text{prevalence}})}{{\text{specificity}}\times (1-{\text{prevalence}})+(1-{\text{sensitivity}})\times {\text{prevalence}}}}$
${\text{NPV}}={\frac {TN}{TN+FN}}$
The complement of the NPV is the false omission rate (FOR):
${\text{FOR}}=1-{\text{NPV}}={\frac {\text{Number of false negatives}}{{\text{Number of true negatives}}+{\text{Number of false negatives}}}}={\frac {\text{Number of false negatives}}{\text{Number of negative calls}}}$
Although sometimes used synonymously, a negative predictive value generally refers to what is established by control groups, while a negative post-test probability rather refers to a probability for an individual. Still, if the individual's pre-test probability of the target condition is the same as the prevalence in the control group used to establish the negative predictive value, then the two are numerically equal.
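As a minimal sketch of the definitions above, the following Python snippet computes PPV and NPV both from raw confusion-matrix counts and from sensitivity, specificity and prevalence via Bayes' theorem. The function names are arbitrary; the example numbers are taken from the worked FOB example later in the article.

```python
def ppv_npv_from_counts(tp, fp, tn, fn):
    # Directly from the definitions: PPV = TP / positive calls, NPV = TN / negative calls.
    return tp / (tp + fp), tn / (tn + fn)

def ppv_npv_from_rates(sensitivity, specificity, prevalence):
    # Same quantities obtained from sensitivity, specificity and prevalence (Bayes' theorem).
    ppv = (sensitivity * prevalence) / (
        sensitivity * prevalence + (1 - specificity) * (1 - prevalence))
    npv = (specificity * (1 - prevalence)) / (
        specificity * (1 - prevalence) + (1 - sensitivity) * prevalence)
    return ppv, npv

# Both routes give the same answer when the counts reflect the same prevalence:
print(ppv_npv_from_counts(tp=20, fp=180, tn=1820, fn=10))   # (0.10, ~0.9945)
print(ppv_npv_from_rates(sensitivity=20/30, specificity=1820/2000,
                         prevalence=30/2030))               # (~0.10, ~0.9945)
```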
Relationship
The following summarises how the positive predictive value, negative predictive value, sensitivity, and specificity are related to the entries of the confusion matrix. Sources: [3][4][5][6][7][8][9][10][11]
Total population = P + N, where P and N are the numbers of actual positive and actual negative cases, and PP and PN are the numbers of positive and negative predictions (positive and negative calls).
Confusion matrix entries:
• True positive (TP): hit
• False negative (FN): type II error, miss, underestimation
• False positive (FP): type I error, false alarm, overestimation
• True negative (TN): correct rejection
Rates derived from the matrix:
• True positive rate (TPR), recall, sensitivity (SEN), probability of detection, hit rate, power = TP/P = 1 − FNR
• False negative rate (FNR), miss rate = FN/P = 1 − TPR
• False positive rate (FPR), probability of false alarm, fall-out = FP/N = 1 − TNR
• True negative rate (TNR), specificity (SPC), selectivity = TN/N = 1 − FPR
• Prevalence = P/(P + N)
• Positive predictive value (PPV), precision = TP/PP = 1 − FDR
• False discovery rate (FDR) = FP/PP = 1 − PPV
• False omission rate (FOR) = FN/PN = 1 − NPV
• Negative predictive value (NPV) = TN/PN = 1 − FOR
Summary statistics:
• Informedness, bookmaker informedness (BM) = TPR + TNR − 1
• Prevalence threshold (PT) = (√(TPR × FPR) − FPR)/(TPR − FPR)
• Positive likelihood ratio (LR+) = TPR/FPR
• Negative likelihood ratio (LR−) = FNR/TNR
• Diagnostic odds ratio (DOR) = LR+/LR−
• Markedness (MK), deltaP (Δp) = PPV + NPV − 1
• Accuracy (ACC) = (TP + TN)/(P + N)
• Balanced accuracy (BA) = (TPR + TNR)/2
• F1 score = 2 × PPV × TPR/(PPV + TPR) = 2TP/(2TP + FP + FN)
• Fowlkes–Mallows index (FM) = √(PPV × TPR)
• Matthews correlation coefficient (MCC) = √(TPR × TNR × PPV × NPV) − √(FNR × FPR × FOR × FDR)
• Threat score (TS), critical success index (CSI), Jaccard index = TP/(TP + FN + FP)
Note that the positive and negative predictive values can only be estimated using data from a cross-sectional study or other population-based study in which valid prevalence estimates may be obtained. In contrast, the sensitivity and specificity can be estimated from case-control studies.
Worked example
Suppose the fecal occult blood (FOB) screen test is used in 2030 people to look for bowel cancer:
Total population (pop.) = 2030
Prevalence = (TP + FN) / pop. = (20 + 10) / 2030 ≈ 1.48%
Patients with bowel cancer (as confirmed on endoscopy) versus the test outcome:
• True positive (TP) = 20 (2030 × 1.48% × 67%)
• False negative (FN) = 10 (2030 × 1.48% × (100% − 67%))
• False positive (FP) = 180 (2030 × (100% − 1.48%) × (100% − 91%))
• True negative (TN) = 1820 (2030 × (100% − 1.48%) × 91%)
Derived statistics:
• True positive rate (TPR), recall, sensitivity = TP / (TP + FN) = 20 / (20 + 10) ≈ 66.7%
• False negative rate (FNR), miss rate = FN / (TP + FN) = 10 / (20 + 10) ≈ 33.3%
• False positive rate (FPR), fall-out, probability of false alarm = FP / (FP + TN) = 180 / (180 + 1820) = 9.0%
• Specificity, selectivity, true negative rate (TNR) = TN / (FP + TN) = 1820 / (180 + 1820) = 91%
• Accuracy (ACC) = (TP + TN) / pop. = (20 + 1820) / 2030 ≈ 90.64%
• F1 score = 2 × precision × recall / (precision + recall) ≈ 0.174
• Positive predictive value (PPV), precision = TP / (TP + FP) = 20 / (20 + 180) = 10%
• False discovery rate (FDR) = FP / (TP + FP) = 180 / (20 + 180) = 90.0%
• False omission rate (FOR) = FN / (FN + TN) = 10 / (10 + 1820) ≈ 0.55%
• Negative predictive value (NPV) = TN / (FN + TN) = 1820 / (10 + 1820) ≈ 99.45%
• Positive likelihood ratio (LR+) = TPR/FPR = (20/30) / (180/2000) ≈ 7.41
• Negative likelihood ratio (LR−) = FNR/TNR = (10/30) / (1820/2000) ≈ 0.366
• Diagnostic odds ratio (DOR) = LR+/LR− ≈ 20.2
The small positive predictive value (PPV = 10%) indicates that many of the positive results from this testing procedure are false positives. Thus it will be necessary to follow up any positive result with a more reliable test to obtain a more accurate assessment as to whether cancer is present. Nevertheless, such a test may be useful if it is inexpensive and convenient. The strength of the FOB screen test is instead in its negative predictive value — which, if negative for an individual, gives us a high confidence that its negative result is true.
Problems
Other individual factors
Note that the PPV is not intrinsic to the test—it depends also on the prevalence.[2] Due to the large effect of prevalence upon predictive values, a standardized approach has been proposed, where the PPV is normalized to a prevalence of 50%.[12] PPV is directly proportional to the prevalence of the disease or condition. In the above example, if the group of people tested had included a higher proportion of people with bowel cancer, then the PPV would probably come out higher and the NPV lower. If everybody in the group had bowel cancer, the PPV would be 100% and the NPV 0%.
To overcome this problem, NPV and PPV should only be used if the ratio of the number of patients in the disease group and the number of patients in the healthy control group used to establish the NPV and PPV is equivalent to the prevalence of the diseases in the studied population, or, in case two disease groups are compared, if the ratio of the number of patients in disease group 1 and the number of patients in disease group 2 is equivalent to the ratio of the prevalences of the two diseases studied. Otherwise, positive and negative likelihood ratios are more accurate than NPV and PPV, because likelihood ratios do not depend on prevalence.
When an individual being tested has a different pre-test probability of having a condition than the control groups used to establish the PPV and NPV, the PPV and NPV are generally distinguished from the positive and negative post-test probabilities, with the PPV and NPV referring to the ones established by the control groups, and the post-test probabilities referring to the ones for the tested individual (as estimated, for example, by likelihood ratios). Preferably, in such cases, a large group of equivalent individuals should be studied, in order to establish separate positive and negative predictive values for use of the test in such individuals.
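As a short sketch of the prevalence dependence described in this section, the snippet below holds sensitivity and specificity fixed at roughly the FOB values used above (67% and 91%) and sweeps the prevalence; the function name and the chosen prevalence values are arbitrary.

```python
def ppv(sensitivity, specificity, prevalence):
    # PPV from sensitivity, specificity and prevalence (Bayes' theorem).
    return (sensitivity * prevalence) / (
        sensitivity * prevalence + (1 - specificity) * (1 - prevalence))

for prev in (0.001, 0.0148, 0.10, 0.50):
    print(f"prevalence {prev:7.2%} -> PPV {ppv(0.67, 0.91, prev):6.1%}")
# prevalence   0.10% -> PPV   0.7%
# prevalence   1.48% -> PPV  10.1%
# prevalence  10.00% -> PPV  45.3%
# prevalence  50.00% -> PPV  88.2%
```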
Bayesian updating
Bayes' theorem confers inherent limitations on the accuracy of screening tests as a function of disease prevalence or pre-test probability. It has been shown that a testing system can tolerate significant drops in prevalence, up to a certain well-defined point known as the prevalence threshold, below which the reliability of a positive screening test drops precipitously. That said, Balayla et al.[13] showed that sequential testing overcomes the aforementioned Bayesian limitations and thus improves the reliability of screening tests. For a desired positive predictive value $\rho $ that approaches some constant $k$, the number of positive test iterations $n_{i}$ needed is:
$n_{i}=\lim _{\rho \to k}\left\lceil {\frac {\ln \left[{\frac {\rho (\phi -1)}{\phi (\rho -1)}}\right]}{\ln \left[{\frac {a}{1-b}}\right]}}\right\rceil $
where
• $\rho $ is the desired PPV
• $n_{i}$ is the number of testing iterations necessary to achieve $\rho $
• $a$ is the sensitivity
• $b$ is the specificity
• $\phi $ is disease prevalence, and
• $k$ is a constant.
Of note, the denominator of the above equation is the natural logarithm of the positive likelihood ratio (LR+).
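A minimal sketch of this formula in Python follows; the function name is arbitrary, and the example reuses the FOB test characteristics from the worked example (sensitivity 0.67, specificity 0.91, prevalence 1.48%) with a desired PPV of 90%.

```python
from math import ceil, log

def positive_tests_needed(desired_ppv, sensitivity, specificity, prevalence):
    # Number of consecutive positive results needed to push the PPV past desired_ppv.
    numerator = log(desired_ppv * (prevalence - 1) / (prevalence * (desired_ppv - 1)))
    denominator = log(sensitivity / (1 - specificity))  # ln(LR+)
    return ceil(numerator / denominator)

# FOB-like test (sensitivity 0.67, specificity 0.91) at 1.48% prevalence:
print(positive_tests_needed(0.90, 0.67, 0.91, 0.0148))  # -> 4
```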
Different target conditions
PPV is used to indicate the probability that in case of a positive test, that the patient really has the specified disease. However, there may be more than one cause for a disease and any single potential cause may not always result in the overt disease seen in a patient. There is potential to mix up related target conditions of PPV and NPV, such as interpreting the PPV or NPV of a test as having a disease, when that PPV or NPV value actually refers only to a predisposition of having that disease.
An example is the microbiological throat swab used in patients with a sore throat. Usually publications stating PPV of a throat swab are reporting on the probability that this bacterium is present in the throat, rather than that the patient is ill from the bacteria found. If presence of this bacterium always resulted in a sore throat, then the PPV would be very useful. However the bacteria may colonise individuals in a harmless way and never result in infection or disease. Sore throats occurring in these individuals are caused by other agents such as a virus. In this situation the gold standard used in the evaluation study represents only the presence of bacteria (that might be harmless) but not a causal bacterial sore throat illness. It can be proven that this problem will affect positive predictive value far more than negative predictive value.[14] To evaluate diagnostic tests where the gold standard looks only at potential causes of disease, one may use an extension of the predictive value termed the Etiologic Predictive Value.[15][16]
See also
• Binary classification
• Sensitivity and specificity
• False discovery rate
• Relevance (information retrieval)
• Receiver-operator characteristic
• Diagnostic odds ratio
• Sensitivity index
References
1. Fletcher, Robert H.; Fletcher, Suzanne W. (2005). Clinical epidemiology: the essentials (4th ed.). Baltimore, Md.: Lippincott Williams & Wilkins. p. 45. ISBN 0-7817-5215-9.
2. Altman, DG; Bland, JM (1994). "Diagnostic tests 2: Predictive values". BMJ. 309 (6947): 102. doi:10.1136/bmj.309.6947.102. PMC 2540558. PMID 8038641.
3. Balayla, Jacques (2020). "Prevalence threshold (ϕe) and the geometry of screening curves". PLoS One. 15 (10). doi:10.1371/journal.pone.0240215.
4. Fawcett, Tom (2006). "An Introduction to ROC Analysis" (PDF). Pattern Recognition Letters. 27 (8): 861–874. doi:10.1016/j.patrec.2005.10.010.
5. Piryonesi S. Madeh; El-Diraby Tamer E. (2020-03-01). "Data Analytics in Asset Management: Cost-Effective Prediction of the Pavement Condition Index". Journal of Infrastructure Systems. 26 (1): 04019036. doi:10.1061/(ASCE)IS.1943-555X.0000512.
6. Powers, David M. W. (2011). "Evaluation: From Precision, Recall and F-Measure to ROC, Informedness, Markedness & Correlation". Journal of Machine Learning Technologies. 2 (1): 37–63.
7. Ting, Kai Ming (2011). Sammut, Claude; Webb, Geoffrey I. (eds.). Encyclopedia of machine learning. Springer. doi:10.1007/978-0-387-30164-8. ISBN 978-0-387-30164-8.
8. Brooks, Harold; Brown, Barb; Ebert, Beth; Ferro, Chris; Jolliffe, Ian; Koh, Tieh-Yong; Roebber, Paul; Stephenson, David (2015-01-26). "WWRP/WGNE Joint Working Group on Forecast Verification Research". Collaboration for Australian Weather and Climate Research. World Meteorological Organisation. Retrieved 2019-07-17.
9. Chicco D, Jurman G (January 2020). "The advantages of the Matthews correlation coefficient (MCC) over F1 score and accuracy in binary classification evaluation". BMC Genomics. 21 (1): 6-1–6-13. doi:10.1186/s12864-019-6413-7. PMC 6941312. PMID 31898477.
10. Chicco D, Toetsch N, Jurman G (February 2021). "The Matthews correlation coefficient (MCC) is more reliable than balanced accuracy, bookmaker informedness, and markedness in two-class confusion matrix evaluation". BioData Mining. 14 (13): 1-22. doi:10.1186/s13040-021-00244-z. PMC 7863449. PMID 33541410.
11. Tharwat A. (August 2018). "Classification assessment methods". Applied Computing and Informatics. doi:10.1016/j.aci.2018.08.003.
12. Heston, Thomas F. (2011). "Standardizing predictive values in diagnostic imaging research". Journal of Magnetic Resonance Imaging. 33 (2): 505, author reply 506–7. doi:10.1002/jmri.22466. PMID 21274995.
13. Jacques Balayla. Bayesian Updating and Sequential Testing: Overcoming Inferential Limitations of Screening Tests. ArXiv 2020. https://arxiv.org/abs/2006.11641.
14. Orda, Ulrich; Gunnarsson, Ronny K; Orda, Sabine; Fitzgerald, Mark; Rofe, Geoffry; Dargan, Anna (2016). "Etiologic predictive value of a rapid immunoassay for the detection of group A Streptococcus antigen from throat swabs in patients presenting with a sore throat" (PDF). International Journal of Infectious Diseases. 45 (April): 32–5. doi:10.1016/j.ijid.2016.02.002. PMID 26873279.
15. Gunnarsson, Ronny K.; Lanke, Jan (2002). "The predictive value of microbiologic diagnostic tests if asymptomatic carriers are present". Statistics in Medicine. 21 (12): 1773–85. doi:10.1002/sim.1119. PMID 12111911. S2CID 26163122.
16. Gunnarsson, Ronny K. "EPV Calculator". Science Network TV.
Quasireversibility
In queueing theory, a discipline within the mathematical theory of probability, quasireversibility (sometimes QR) is a property of some queues. The concept was first identified by Richard R. Muntz[1] and further developed by Frank Kelly.[2][3] Quasireversibility differs from reversibility in that a stronger condition is imposed on arrival rates and a weaker condition is applied on probability fluxes. For example, an M/M/1 queue with state-dependent arrival rates and state-dependent service times is reversible, but not quasireversible.[4]
A network of queues, such that each individual queue when considered in isolation is quasireversible, always has a product form stationary distribution.[5] Quasireversibility had been conjectured to be a necessary condition for a product form solution in a queueing network, but this was shown not to be the case. Chao et al. exhibited a product form network where quasireversibility was not satisfied.[6]
Definition
A queue with stationary distribution $\pi $ is quasireversible if its state at time t, x(t) is independent of
• the arrival times for each class of customer subsequent to time t,
• the departure times for each class of customer prior to time t
for all classes of customer.[7]
Partial balance formulation
Quasireversibility is equivalent to a particular form of partial balance. First, define the reversed rates q'(x,x') by
$\pi (\mathbf {x} )q'(\mathbf {x} ,\mathbf {x'} )=\pi (\mathbf {x'} )q(\mathbf {x'} ,\mathbf {x} )$
then considering just customers of a particular class, the arrival and departure processes are the same Poisson process (with parameter $\alpha $), so
$\alpha =\sum _{\mathbf {x'} \in M_{\mathbf {x} }}q(\mathbf {x} ,\mathbf {x'} )=\sum _{\mathbf {x'} \in M_{\mathbf {x} }}q'(\mathbf {x} ,\mathbf {x'} )$
where Mx is a set such that $\scriptstyle {\mathbf {x'} \in M_{\mathbf {x} }}$ means the state x' represents a single arrival of the particular class of customer to state x.
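As an illustration (not taken from the sources above), the following sketch checks this partial-balance condition numerically for an M/M/1 queue with a single customer class, whose stationary distribution is geometric; the rate values are arbitrary.

```python
lam, mu = 2.0, 5.0                    # arrival and service rates, lam < mu
rho = lam / mu
pi = lambda n: (1 - rho) * rho ** n   # geometric stationary distribution

def q(x, y):                          # forward transition rates of the M/M/1 queue
    if y == x + 1:
        return lam                    # an arrival
    if y == x - 1 and x >= 1:
        return mu                     # a departure
    return 0.0

def q_rev(x, y):                      # reversed rates: pi(x) q'(x, y) = pi(y) q(y, x)
    return pi(y) * q(y, x) / pi(x)

# For each state x, the only transition that represents an arrival is x -> x + 1,
# and both the forward and the reversed arrival rates equal lam (the Poisson rate alpha):
for x in range(5):
    print(x, q(x, x + 1), round(q_rev(x, x + 1), 10))
```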
Examples
• Burke's theorem shows that an M/M/m queueing system is quasireversible.[8][9][10]
• Kelly showed that each station of a BCMP network is quasireversible when viewed in isolation.[11]
• G-queues in G-networks are quasireversible.[12]
See also
• Time reversibility
References
1. Muntz, R.R. (1972). Poisson departure process and queueing networks (IBM Research Report RC 4145) (Technical report). Yorktown Heights, N.Y.: IBM Thomas J. Watson Research Center.
2. Kelly, F. P. (1975). "Networks of Queues with Customers of Different Types". Journal of Applied Probability. 12 (3): 542–554. doi:10.2307/3212869. JSTOR 3212869. S2CID 51917794.
3. Kelly, F. P. (1976). "Networks of Queues". Advances in Applied Probability. 8 (2): 416–432. doi:10.2307/1425912. JSTOR 1425912. S2CID 204177645.
4. Harrison, Peter G.; Patel, Naresh M. (1992). Performance Modelling of Communication Networks and Computer Architectures. Addison-Wesley. p. 288. ISBN 0-201-54419-9.
5. Kelly, F.P. (1982). Networks of quasireversible nodes. In Applied Probability and Computer Science: The Interface (Ralph L. Disney and Teunis J. Ott, editors.) 1 3-29. Birkhäuser, Boston
6. Chao, X.; Miyazawa, M.; Serfozo, R. F.; Takada, H. (1998). "Markov network processes with product form stationary distributions". Queueing Systems. 28 (4): 377. doi:10.1023/A:1019115626557. S2CID 14471818.
7. Kelly, F.P., Reversibility and Stochastic Networks, 1978 pages 66-67
8. Burke, P. J. (1956). "The Output of a Queuing System". Operations Research. 4 (6): 699–704. doi:10.1287/opre.4.6.699. S2CID 55089958.
9. Burke, P. J. (1968). "The Output Process of a Stationary M/M/s Queueing System". The Annals of Mathematical Statistics. 39 (4): 1144–1152. doi:10.1214/aoms/1177698238.
10. O'Connell, N.; Yor, M. (December 2001). "Brownian analogues of Burke's theorem". Stochastic Processes and Their Applications. 96 (2): 285–298. doi:10.1016/S0304-4149(01)00119-3.
11. Kelly, F.P. (1979). Reversibility and Stochastic Networks. New York: Wiley.
12. Dao-Thi, T. H.; Mairesse, J. (2005). "Zero-Automatic Queues". Formal Techniques for Computer Systems and Business Processes. Lecture Notes in Computer Science. Vol. 3670. p. 64. doi:10.1007/11549970_6. ISBN 978-3-540-28701-8.
Radian
The radian, denoted by the symbol rad, is the unit of angle in the International System of Units (SI) and is the standard unit of angular measure used in many areas of mathematics. It is defined such that one radian is the angle subtended at the centre of a circle by an arc that is equal in length to the radius.[2] The unit was formerly an SI supplementary unit and is currently a dimensionless SI derived unit,[2] defined in the SI as 1 rad = 1[3] and expressed in terms of the SI base unit metre (m) as rad = m/m.[4] Angles without explicitly specified units are generally assumed to be measured in radians, especially in mathematical writing.[5]
Radian
An arc of a circle with the same length as the radius of that circle subtends an angle of 1 radian. The circumference subtends an angle of 2π radians.
General information
Unit system: SI
Unit of: angle
Symbol: rad, R[1]
Conversions (1 rad expressed in other units):
• milliradians: 1000 mrad
• turns: 1/2π turn
• degrees: 180/π° ≈ 57.296°
• gradians: 200/π grad ≈ 63.662g
Definition
One radian is defined as the angle subtended from the center of a circle which intercepts an arc equal in length to the radius of the circle.[6] More generally, the magnitude in radians of a subtended angle is equal to the ratio of the arc length to the radius of the circle; that is, $\theta ={\frac {s}{r}}$, where θ is the subtended angle in radians, s is arc length, and r is radius. A right angle is exactly ${\frac {\pi }{2}}$ radians.[7]
The rotation angle (360°) corresponding to one complete revolution is the length of the circumference divided by the radius, which is ${\frac {2\pi r}{r}}$, or 2π. Thus, 2π radians is equal to 360 degrees.
The relation 2π rad = 360° can be derived using the formula for arc length, $ \ell _{\text{arc}}=2\pi r\left({\tfrac {\theta }{360^{\circ }}}\right)$. Since radian is the measure of an angle that is subtended by an arc of a length equal to the radius of the circle, $ 1=2\pi \left({\tfrac {1{\text{ rad}}}{360^{\circ }}}\right)$. This can be further simplified to $ 1={\tfrac {2\pi {\text{ rad}}}{360^{\circ }}}$. Multiplying both sides by 360° gives 360° = 2π rad.
Unit symbol
The International Bureau of Weights and Measures[7] and International Organization for Standardization[8] specify rad as the symbol for the radian. Alternative symbols that were in use in 1909 are c (the superscript letter c, for "circular measure"), the letter r, or a superscript R,[1] but these variants are infrequently used, as they may be mistaken for a degree symbol (°) or a radius (r). Hence an angle of 1.2 radians would be written today as 1.2 rad; archaic notations could include 1.2 r, 1.2rad, 1.2c, or 1.2R.
In mathematical writing, the symbol "rad" is often omitted. When quantifying an angle in the absence of any symbol, radians are assumed, and when degrees are meant, the degree sign ° is used.
Dimensional analysis
Plane angle is defined as θ = s/r, where θ is the subtended angle in radians, s is arc length, and r is radius. One radian corresponds to the angle for which s = r, hence 1 radian = 1 m/m.[9] However, rad is only to be used to express angles, not to express ratios of lengths in general.[7] A similar calculation using the area of a circular sector θ = 2A/r² gives 1 radian as 1 m²/m².[10] The key fact is that the radian is a dimensionless unit equal to 1. In SI 2019, the radian is defined accordingly as 1 rad = 1.[11] It is a long-established practice in mathematics and across all areas of science to make use of rad = 1.[4][12] In 1993 the American Association of Physics Teachers Metric Committee specified that the radian should explicitly appear in quantities only when different numerical values would be obtained when other angle measures were used, such as in the quantities of angle measure (rad), angular speed (rad/s), angular acceleration (rad/s²), and torsional stiffness (N⋅m/rad), and not in the quantities of torque (N⋅m) and angular momentum (kg⋅m²/s).[13]
Giacomo Prando says "the current state of affairs leads inevitably to ghostly appearances and disappearances of the radian in the dimensional analysis of physical equations".[14] For example, an object hanging by a string from a pulley will rise or drop by y = rθ centimeters, where r is the radius of the pulley in centimeters and θ is the angle the pulley turns in radians. When multiplying r by θ the unit of radians disappears from the result. Similarly in the formula for the angular velocity of a rolling wheel, ω = v/r, radians appear in the units of ω but not on the right hand side.[15] Anthony French calls this phenomenon "a perennial problem in the teaching of mechanics".[16] Oberhofer says that the typical advice of ignoring radians during dimensional analysis and adding or removing radians in units according to convention and contextual knowledge is "pedagogically unsatisfying".[17]
At least a dozen scientists between 1936 and 2022 have made proposals to treat the radian as a base unit of measure defining its own dimension of "angle".[18][19][20] Quincey's review of proposals outlines two classes of proposal. The first option changes the unit of a radius to meters per radian, but this is incompatible with dimensional analysis for the area of a circle, πr². The other option is to introduce a dimensional constant. According to Quincey this approach is "logically rigorous" compared to SI, but requires "the modification of many familiar mathematical and physical equations".[21]
In particular, Quincey identifies Torrens' proposal to introduce a constant η equal to 1 inverse radian (1 rad−1) in a fashion similar to the introduction of the constant ε0.[21][lower-alpha 1] With this change the formula for the angle subtended at the center of a circle, s = rθ, is modified to become s = ηrθ, and the Taylor series for the sine of an angle θ becomes:[20][22]
$\operatorname {Sin} \theta =\sin _{\text{rad}}(\eta \theta )=\eta \theta -{\frac {(\eta \theta )^{3}}{3!}}+{\frac {(\eta \theta )^{5}}{5!}}-{\frac {(\eta \theta )^{7}}{7!}}+\cdots .$
The capitalized function Sin is the "complete" function that takes an argument with a dimension of angle and is independent of the units expressed,[22] while sinrad is the traditional function on pure numbers which assumes its argument is in radians.[23] $\operatorname {Sin} $ can be denoted $\sin $ if it is clear that the complete form is meant.[20][24]
SI can be considered relative to this framework as a natural unit system where the equation η = 1 is assumed to hold, or similarly, 1 rad = 1. This radian convention allows the omission of η in mathematical formulas.[25]
A dimensional constant for angle is "rather strange" and the difficulty of modifying equations to add the dimensional constant is likely to preclude widespread use.[20] Defining radian as a base unit may be useful for software, where the disadvantage of longer equations is minimal.[26] For example, the Boost units library defines angle units with a plane_angle dimension,[27] and Mathematica's unit system similarly considers angles to have an angle dimension.[28][29]
Conversions
Conversion of common angles
Turns Radians Degrees Gradians
0 turn 0 rad 0° 0g
1/72 turn π/36 rad 5° 5+5/9g
1/24 turn π/12 rad 15° 16+2/3g
1/16 turn π/8 rad 22.5° 25g
1/12 turn π/6 rad 30° 33+1/3g
1/10 turn π/5 rad 36° 40g
1/8 turn π/4 rad 45° 50g
1/2π turn 1 rad approx. 57.3° approx. 63.7g
1/6 turn π/3 rad 60° 66+2/3g
1/5 turn 2π/5 rad 72° 80g
1/4 turn π/2 rad 90° 100g
1/3 turn 2π/3 rad 120° 133+1/3g
2/5 turn 4π/5 rad 144° 160g
1/2 turn π rad 180° 200g
3/4 turn 3π/2 rad 270° 300g
1 turn 2π rad 360° 400g
Between degrees
As stated, one radian is equal to ${180^{\circ }}/{\pi }$. Thus, to convert from radians to degrees, multiply by ${180^{\circ }}/{\pi }$.
${\text{angle in degrees}}={\text{angle in radians}}\cdot {\frac {180^{\circ }}{\pi }}$
For example:
$1{\text{ rad}}=1\cdot {\frac {180^{\circ }}{\pi }}\approx 57.2958^{\circ }$
$2.5{\text{ rad}}=2.5\cdot {\frac {180^{\circ }}{\pi }}\approx 143.2394^{\circ }$
${\frac {\pi }{3}}{\text{ rad}}={\frac {\pi }{3}}\cdot {\frac {180^{\circ }}{\pi }}=60^{\circ }$
Conversely, to convert from degrees to radians, multiply by ${\pi }/{180^{\circ }}$.
${\text{angle in radians}}={\text{angle in degrees}}\cdot {\frac {\pi }{180^{\circ }}}$
For example:
$1^{\circ }=1^{\circ }\cdot {\frac {\pi }{180^{\circ }}}\approx 0.0175{\text{ rad}}$
$23^{\circ }=23^{\circ }\cdot {\frac {\pi }{180^{\circ }}}\approx 0.4014{\text{ rad}}$
Radians can be converted to turns (one turn is the angle corresponding to a revolution) by dividing the number of radians by 2π.
Between gradians
$2\pi $ radians equals one turn, which is by definition 400 gradians (400 gons or 400g). To convert from radians to gradians multiply by $200^{\text{g}}/\pi $, and to convert from gradians to radians multiply by $\pi /200^{\text{g}}$. For example,
$1.2{\text{ rad}}=1.2\cdot {\frac {200^{\text{g}}}{\pi }}\approx 76.3944^{\text{g}}$
$50^{\text{g}}=50^{\text{g}}\cdot {\frac {\pi }{200^{\text{g}}}}\approx 0.7854{\text{ rad}}$
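The conversions above are simple enough to collect into a few helper functions; the following Python sketch reproduces the numerical examples given in this section (the function names are arbitrary).

```python
import math

def rad_to_deg(x):  return x * 180.0 / math.pi
def deg_to_rad(x):  return x * math.pi / 180.0
def rad_to_grad(x): return x * 200.0 / math.pi
def grad_to_rad(x): return x * math.pi / 200.0
def rad_to_turn(x): return x / (2.0 * math.pi)

print(rad_to_deg(1))     # ≈ 57.2958
print(rad_to_deg(2.5))   # ≈ 143.2394
print(deg_to_rad(23))    # ≈ 0.4014
print(rad_to_grad(1.2))  # ≈ 76.3944
print(grad_to_rad(50))   # ≈ 0.7854 (= π/4)
```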
Usage
Mathematics
In calculus and most other branches of mathematics beyond practical geometry, angles are measured in radians. This is because radians have a mathematical naturalness that leads to a more elegant formulation of some important results.
Results in analysis involving trigonometric functions can be elegantly stated when the functions' arguments are expressed in radians. For example, the use of radians leads to the simple limit formula
$\lim _{h\rightarrow 0}{\frac {\sin h}{h}}=1,$
which is the basis of many other identities in mathematics, including
${\frac {d}{dx}}\sin x=\cos x$
${\frac {d^{2}}{dx^{2}}}\sin x=-\sin x.$
Because of these and other properties, the trigonometric functions appear in solutions to mathematical problems that are not obviously related to the functions' geometrical meanings (for example, the solutions to the differential equation ${\tfrac {d^{2}y}{dx^{2}}}=-y$, the evaluation of the integral $\textstyle \int {\frac {dx}{1+x^{2}}},$ and so on). In all such cases, it is found that the arguments to the functions are most naturally written in the form that corresponds, in geometrical contexts, to the radian measurement of angles.
The trigonometric functions also have simple and elegant series expansions when radians are used. For example, when x is in radians, the Taylor series for sin x becomes:
$\sin x=x-{\frac {x^{3}}{3!}}+{\frac {x^{5}}{5!}}-{\frac {x^{7}}{7!}}+\cdots .$
If x were expressed in degrees, then the series would contain messy factors involving powers of π/180: if x is the number of degrees, the number of radians is y = πx / 180, so
$\sin x_{\mathrm {deg} }=\sin y_{\mathrm {rad} }={\frac {\pi }{180}}x-\left({\frac {\pi }{180}}\right)^{3}\ {\frac {x^{3}}{3!}}+\left({\frac {\pi }{180}}\right)^{5}\ {\frac {x^{5}}{5!}}-\left({\frac {\pi }{180}}\right)^{7}\ {\frac {x^{7}}{7!}}+\cdots .$
In a similar spirit, mathematically important relationships between the sine and cosine functions and the exponential function (see, for example, Euler's formula) can be elegantly stated, when the functions' arguments are in radians (and messy otherwise).
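A quick numerical illustration of why the limit formula only holds for radian arguments; the snippet below is a sketch, with arbitrarily chosen sample values of h.

```python
import math

# sin(h)/h tends to 1 only when h is measured in radians:
for h in (0.1, 0.01, 0.001):
    print(h, math.sin(h) / h)                     # 0.99833..., 0.999983..., 0.99999983...

# With h interpreted in degrees, the same ratio tends to pi/180 ≈ 0.017453 instead:
for h in (0.1, 0.01, 0.001):
    print(h, math.sin(math.radians(h)) / h)
```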
Physics
The radian is widely used in physics when angular measurements are required. For example, angular velocity is typically expressed in the unit radian per second (rad/s). One revolution per second corresponds to 2π radians per second.
Similarly, the unit used for angular acceleration is often radian per second per second (rad/s²).
For the purpose of dimensional analysis, the units of angular velocity and angular acceleration are s⁻¹ and s⁻² respectively.
Likewise, the phase difference of two waves can also be expressed using the radian as the unit. For example, if the phase difference of two waves is (n⋅2π) radians, where n is an integer, they are considered to be in phase, whilst if the phase difference is (n⋅2π + π), where n is an integer, they are considered to be in antiphase.
Prefixes and variants
Metric prefixes for submultiples are used with radians. A milliradian (mrad) is a thousandth of a radian (0.001 rad), i.e. 1 rad = 10³ mrad. There are 2π × 1000 milliradians (≈ 6283.185 mrad) in a circle. So a milliradian is just under 1/6283 of the angle subtended by a full circle. This unit of angular measurement of a circle is in common use by telescopic sight manufacturers using (stadiametric) rangefinding in reticles. The divergence of laser beams is also usually measured in milliradians.
The angular mil is an approximation of the milliradian used by NATO and other military organizations in gunnery and targeting. Each angular mil represents 1/6400 of a circle and is 15/8% or 1.875% smaller than the milliradian. For the small angles typically found in targeting work, the convenience of using the number 6400 in calculation outweighs the small mathematical errors it introduces. In the past, other gunnery systems have used different approximations to 1/2000π; for example Sweden used the 1/6300 streck and the USSR used 1/6000. Being based on the milliradian, the NATO mil subtends roughly 1 m at a range of 1000 m (at such small angles, the curvature is negligible).
Prefixes smaller than milli- are useful in measuring extremely small angles. Microradians (μrad, 10⁻⁶ rad) and nanoradians (nrad, 10⁻⁹ rad) are used in astronomy, and can also be used to measure the beam quality of lasers with ultra-low divergence. More common is the arc second, which is π/648,000 rad (around 4.8481 microradians).
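As a small illustrative sketch (not from any cited source), the following computes the length subtended at a given distance by one milliradian and by one NATO mil, confirming the roughly 1 m per 1000 m figure quoted above; the function and variable names are arbitrary.

```python
import math

def subtended_length(angle_rad, distance):
    # Width of a flat target at the given distance that subtends angle_rad;
    # for small angles this is approximately angle_rad * distance.
    return 2.0 * distance * math.tan(angle_rad / 2.0)

milliradian = 1.0e-3
nato_mil = 2.0 * math.pi / 6400.0     # 1/6400 of a full circle

print(subtended_length(milliradian, 1000.0))  # ≈ 1.000 m at 1 km
print(subtended_length(nato_mil, 1000.0))     # ≈ 0.982 m at 1 km (the mil is ~1.875% smaller)
```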
SI multiples of radian (rad)
Submultiples Multiples
Value SI symbol Name Value SI symbol Name
10⁻¹ rad drad deciradian 10¹ rad darad decaradian
10⁻² rad crad centiradian 10² rad hrad hectoradian
10⁻³ rad mrad milliradian 10³ rad krad kiloradian
10⁻⁶ rad µrad microradian 10⁶ rad Mrad megaradian
10⁻⁹ rad nrad nanoradian 10⁹ rad Grad gigaradian
10⁻¹² rad prad picoradian 10¹² rad Trad teraradian
10⁻¹⁵ rad frad femtoradian 10¹⁵ rad Prad petaradian
10⁻¹⁸ rad arad attoradian 10¹⁸ rad Erad exaradian
10⁻²¹ rad zrad zeptoradian 10²¹ rad Zrad zettaradian
10⁻²⁴ rad yrad yoctoradian 10²⁴ rad Yrad yottaradian
10⁻²⁷ rad rrad rontoradian 10²⁷ rad Rrad ronnaradian
10⁻³⁰ rad qrad quectoradian 10³⁰ rad Qrad quettaradian
History
Pre-20th century
The idea of measuring angles by the length of the arc was in use by mathematicians quite early. For example, al-Kashi (c. 1400) used so-called diameter parts as units, where one diameter part was 1/60 radian. They also used sexagesimal subunits of the diameter part.[30] Newton in 1672 spoke of "the angular quantity of a body's circular motion", but used it only as a relative measure to develop an astronomical algorithm.[31]
The concept of the radian measure is normally credited to Roger Cotes, who died in 1716. By 1722, his cousin Robert Smith had collected and published Cotes' mathematical writings in a book, Harmonia mensurarum.[32] In a chapter of editorial comments, Smith gave what is probably the first published calculation of one radian in degrees, citing a note of Cotes that has not survived. Smith described the radian in everything but name – "Now this number is equal to 180 degrees as the radius of a circle to the semicircumference, this is as 1 to 3.141592653589" –, and recognized its naturalness as a unit of angular measure.[33][34]
In 1765, Leonhard Euler implicitly adopted the radian as a unit of angle.[31] Specifically, Euler defined angular velocity as "The angular speed in rotational motion is the speed of that point, the distance of which from the axis of gyration is expressed by one."[35] Euler was probably the first to adopt this convention, referred to as the radian convention, which gives the simple formula for angular velocity ω = v/r. As discussed in § Dimensional analysis, the radian convention has been widely adopted, and other conventions have the drawback of requiring a dimensional constant, for example ω = v/(ηr).[25]
Prior to the term radian becoming widespread, the unit was commonly called circular measure of an angle.[36] The term radian first appeared in print on 5 June 1873, in examination questions set by James Thomson (brother of Lord Kelvin) at Queen's College, Belfast. He had used the term as early as 1871, while in 1869, Thomas Muir, then of the University of St Andrews, vacillated between the terms rad, radial, and radian. In 1874, after a consultation with James Thomson, Muir adopted radian.[37][38][39] The name radian was not universally adopted for some time after this. Longmans' School Trigonometry still called the radian circular measure when published in 1890.[40]
In 1893 Alexander Macfarlane wrote "the true analytical argument for the circular ratios is not the ratio of the arc to the radius, but the ratio of twice the area of a sector to the square on the radius."[41] For some reason the paper was withdrawn from the published proceedings of mathematical congress held in connection with World's Columbian Exposition in Chicago (acknowledged at page 167), and privately published in his Papers on Space Analysis (1894). Macfarlane reached this idea or ratios of areas while considering the basis for hyperbolic angle which is analogously defined.[42]
As a SI unit
As Paul Quincey et al. writes, "the status of angles within the International System of Units (SI) has long been a source of controversy and confusion."[43] In 1960, the CGPM established the SI and the radian was classified as a "supplementary unit" along with the steradian. This special class was officially regarded "either as base units or as derived units", as the CGPM could not reach a decision on whether the radian was a base unit or a derived unit.[44] Richard Nelson writes "This ambiguity [in the classification of the supplemental units] prompted a spirited discussion over their proper interpretation."[45] In May 1980 the Consultative Committee for Units (CCU) considered a proposal for making radians an SI base unit, using a constant α0 = 1 rad,[46][25] but turned it down to avoid an upheaval to current practice.[25]
In October 1980 the CGPM decided that supplementary units were dimensionless derived units for which the CGPM allowed the freedom of using them or not using them in expressions for SI derived units,[45] on the basis that "[no formalism] exists which is at the same time coherent and convenient and in which the quantities plane angle and solid angle might be considered as base quantities" and that "[the possibility of treating the radian and steradian as SI base units] compromises the internal coherence of the SI based on only seven base units".[47] In 1995 the CGPM eliminated the class of supplementary units and defined the radian and the steradian as "dimensionless derived units, the names and symbols of which may, but need not, be used in expressions for other SI derived units, as is convenient".[48] Mikhail Kalinin writing in 2019 has criticized the 1980 CGPM decision as "unfounded" and says that the 1995 CGPM decision used inconsistent arguments and introduced "numerous discrepancies, inconsistencies, and contradictions in the wordings of the SI".[49]
At the 2013 meeting of the CCU, Peter Mohr gave a presentation on alleged inconsistencies arising from defining the radian as a dimensionless unit rather than a base unit. CCU President Ian M. Mills declared this to be a "formidable problem" and the CCU Working Group on Angles and Dimensionless Quantities in the SI was established.[50] The CCU met most recently in 2021, but did not reach a consensus. A small number of members argued strongly that the radian should be a base unit, but the majority felt the status quo was acceptable or that the change would cause more problems than it would solve. A task group was established to "review the historical use of SI supplementary units and consider whether reintroduction would be of benefit", among other activities.[51][52]
See also
• Angular frequency
• Minute and second of arc
• Steradian, a higher-dimensional analog of the radian which measures solid angle
• Trigonometry
Notes
1. Other proposals include the abbreviation "rad" (Brinsmade 1936), the notation $\langle \theta \rangle $ (Romain 1962), and the constants ם (Brownstein 1997), ◁ (Lévy-Leblond 1998), k (Foster 2010), θC (Quincey 2021), and ${\cal {C}}={\frac {2\pi }{\Theta }}$ (Mohr et al. 2022).
References
1. Hall, Arthur Graham; Frink, Fred Goodrich (January 1909). "Chapter VII. The General Angle [55] Signs and Limitations in Value. Exercise XV.". Written at Ann Arbor, Michigan, USA. Trigonometry. Vol. Part I: Plane Trigonometry. New York, USA: Henry Holt and Company / Norwood Press / J. S. Cushing Co. - Berwick & Smith Co., Norwood, Massachusetts, USA. p. 73. Retrieved 2017-08-12.
2. International Bureau of Weights and Measures 2019, p. 151: "The CGPM decided to interpret the supplementary units in the SI, namely the radian and the steradian, as dimensionless derived units."
3. International Bureau of Weights and Measures 2019, p. 151: "One radian corresponds to the angle for which s = r, thus 1 rad = 1."
4. International Bureau of Weights and Measures 2019, p. 137.
5. Ocean Optics Protocols for Satellite Ocean Color Sensor Validation, Revision 3. National Aeronautics and Space Administration, Goddard Space Flight Center. 2002. p. 12.
6. Protter, Murray H.; Morrey, Charles B. Jr. (1970), College Calculus with Analytic Geometry (2nd ed.), Reading: Addison-Wesley, p. APP-4, LCCN 76087042
7. International Bureau of Weights and Measures 2019, p. 151.
8. "ISO 80000-3:2006 Quantities and Units - Space and Time". 17 January 2017.
9. International Bureau of Weights and Measures 2019, p. 151: "One radian corresponds to the angle for which s = r"
10. Quincey 2016, p. 844: "Also, as alluded to in Mohr & Phillips 2015, the radian can be defined in terms of the area A of a sector (A = 1/2 θ r2), in which case it has the units m2⋅m−2."
11. International Bureau of Weights and Measures 2019, p. 151: "One radian corresponds to the angle for which s = r, thus 1 rad = 1."
12. Bridgman, Percy Williams (1922). Dimensional analysis. New Haven : Yale University Press. Angular amplitude of swing [...] No dimensions.
13. Aubrecht, Gordon J.; French, Anthony P.; Iona, Mario; Welch, Daniel W. (February 1993). "The radian—That troublesome unit". The Physics Teacher. 31 (2): 84–87. Bibcode:1993PhTea..31...84A. doi:10.1119/1.2343667.
14. Prando, Giacomo (August 2020). "A spectral unit". Nature Physics. 16 (8): 888. Bibcode:2020NatPh..16..888P. doi:10.1038/s41567-020-0997-3. S2CID 225445454.
15. Leonard, William J. (1999). Minds-on Physics: Advanced topics in mechanics. Kendall Hunt. p. 262. ISBN 978-0-7872-5412-4.
16. French, Anthony P. (May 1992). "What happens to the 'radians'? (comment)". The Physics Teacher. 30 (5): 260–261. doi:10.1119/1.2343535.
17. Oberhofer, E. S. (March 1992). "What happens to the 'radians'?". The Physics Teacher. 30 (3): 170–171. Bibcode:1992PhTea..30..170O. doi:10.1119/1.2343500.
18. Brinsmade 1936; Romain 1962; Eder 1982; Torrens 1986; Brownstein 1997; Lévy-Leblond 1998; Foster 2010; Mills 2016; Quincey 2021; Leonard 2021; Mohr et al. 2022
19. Mohr & Phillips 2015.
20. Quincey, Paul; Brown, Richard J C (1 June 2016). "Implications of adopting plane angle as a base quantity in the SI". Metrologia. 53 (3): 998–1002. arXiv:1604.02373. Bibcode:2016Metro..53..998Q. doi:10.1088/0026-1394/53/3/998. S2CID 119294905.
21. Quincey 2016.
22. Torrens 1986.
23. Mohr et al. 2022, p. 6.
24. Mohr et al. 2022, pp. 8–9.
25. Quincey 2021.
26. Quincey, Paul; Brown, Richard J C (1 August 2017). "A clearer approach for defining unit systems". Metrologia. 54 (4): 454–460. arXiv:1705.03765. Bibcode:2017Metro..54..454Q. doi:10.1088/1681-7575/aa7160. S2CID 119418270.
27. Schabel, Matthias C.; Watanabe, Steven. "Boost.Units FAQ – 1.79.0". www.boost.org. Retrieved 5 May 2022. Angles are treated as units
28. Mohr et al. 2022, p. 3.
29. "UnityDimensions—Wolfram Language Documentation". reference.wolfram.com. Retrieved 1 July 2022.
30. Luckey, Paul (1953) [Translation of 1424 book]. Siggel, A. (ed.). Der Lehrbrief über den kreisumfang von Gamshid b. Mas'ud al-Kasi [Treatise on the Circumference of al-Kashi]. Berlin: Akademie Verlag. p. 40.
31. Roche, John J. (21 December 1998). The Mathematics of Measurement: A Critical History. Springer Science & Business Media. p. 134. ISBN 978-0-387-91581-4.
32. O'Connor, J. J.; Robertson, E. F. (February 2005). "Biography of Roger Cotes". The MacTutor History of Mathematics. Archived from the original on 2012-10-19. Retrieved 2006-04-21.
33. Cotes, Roger (1722). "Editoris notæ ad Harmoniam mensurarum". In Smith, Robert (ed.). Harmonia mensurarum (in Latin). Cambridge, England. pp. 94–95. In Canone Logarithmico exhibetur Systema quoddam menfurarum numeralium, quæ Logarithmi dicuntur: atque hujus systematis Modulus is est Logarithmus, qui metitur Rationem Modularem in Corol. 6. definitam. Similiter in Canone Trigonometrico finuum & tangentium, exhibetur Systema quoddam menfurarum numeralium, quæ Gradus appellantur: atque hujus systematis Modulus is est Numerus Graduum, qui metitur Angulum Modularem modo definitun, hoc est, qui continetur in arcu Radio æquali. Eft autem hic Numerus ad Gradus 180 ut Circuli Radius ad Semicircuinferentiam, hoc eft ut 1 ad 3.141592653589 &c. Unde Modulus Canonis Trigonometrici prodibit 57.2957795130 &c. Cujus Reciprocus eft 0.0174532925 &c. Hujus moduli subsidio (quem in chartula quadam Auctoris manu descriptum inveni) commodissime computabis mensuras angulares, queinadmodum oftendam in Nota III. [In the Logarithmic Canon there is presented a certain system of numerical measures called Logarithms: and the Modulus of this system is the Logarithm, which measures the Modular Ratio as defined in Corollary 6. Similarly, in the Trigonometrical Canon of sines and tangents, there is presented a certain system of numerical measures called Degrees: and the Modulus of this system is the Number of Degrees which measures the Modular Angle defined in the manner defined, that is, which is contained in an equal Radius arc. Now this Number is equal to 180 Degrees as the Radius of a Circle to the Semicircumference, this is as 1 to 3.141592653589 &c. Hence the Modulus of the Trigonometric Canon will be 57.2957795130 &c. Whose Reciprocal is 0.0174532925 &c. With the help of this modulus (which I found described in a note in the hand of the Author) you will most conveniently calculate the angular measures, as mentioned in Note III.]
34. Gowing, Ronald (27 June 2002). Roger Cotes - Natural Philosopher. Cambridge University Press. ISBN 978-0-521-52649-4.
35. Euler, Leonhard. Theoria Motus Corporum Solidorum seu Rigidorum [Theory of the motion of solid or rigid bodies] (PDF) (in Latin). Translated by Bruce, Ian. Definition 6, paragraph 316.
36. Isaac Todhunter, Plane Trigonometry: For the Use of Colleges and Schools, p. 10, Cambridge and London: MacMillan, 1864 OCLC 500022958
37. Cajori, Florian (1929). History of Mathematical Notations. Vol. 2. Dover Publications. pp. 147–148. ISBN 0-486-67766-4.
• Muir, Thos. (1910). "The Term "Radian" in Trigonometry". Nature. 83 (2110): 156. Bibcode:1910Natur..83..156M. doi:10.1038/083156a0. S2CID 3958702.
• Thomson, James (1910). "The Term "Radian" in Trigonometry". Nature. 83 (2112): 217. Bibcode:1910Natur..83..217T. doi:10.1038/083217c0. S2CID 3980250.
• Muir, Thos. (1910). "The Term "Radian" in Trigonometry". Nature. 83 (2120): 459–460. Bibcode:1910Natur..83..459M. doi:10.1038/083459d0. S2CID 3971449.
38. Miller, Jeff (Nov 23, 2009). "Earliest Known Uses of Some of the Words of Mathematics". Retrieved Sep 30, 2011.
39. Frederick Sparks, Longmans' School Trigonometry, p. 6, London: Longmans, Green, and Co., 1890 OCLC 877238863 (1891 edition)
40. A. Macfarlane (1893) "On the definitions of the trigonometric functions", page 9, link at Internet Archive
41. Geometry/Unified Angles at Wikibooks
42. Quincey, Paul; Mohr, Peter J; Phillips, William D (1 August 2019). "Angles are inherently neither length ratios nor dimensionless". Metrologia. 56 (4): 043001. arXiv:1909.08389. Bibcode:2019Metro..56d3001Q. doi:10.1088/1681-7575/ab27d7. S2CID 198428043.
The responses of the four main substitution mechanisms of H in olivine to H2O activity at 1050 °C and 3 GPa
Peter M. E. Tollan1,2,
Rachel Smith1,
Hugh St.C. O'Neill1 &
Jörg Hermann1,2
The water solubility in olivine \( \left({C}_{{\mathrm{H}}_2\mathrm{O}}\right) \) has been investigated at 1050 °C and 3 GPa as a function of water activity \( \left({a}_{{\mathrm{H}}_2\mathrm{O}}\right) \) at subsolidus conditions in the piston-cylinder apparatus, with \( {a}_{{\mathrm{H}}_2\mathrm{O}} \) varied using H2O–NaCl fluids. Four sets of experiments were conducted to constrain the effect of \( {a}_{{\mathrm{H}}_2\mathrm{O}} \) on the four main substitution mechanisms. The experiments were designed to grow olivine in situ and thus achieve global equilibrium (G-type), as opposed to hydroxylating olivine with a pre-existing point-defect structure and impurity content (M-type). Olivine grains from the experiments were analysed with polarised and unpolarised FTIR spectroscopy, and where necessary, the spectra have been deconvoluted to quantify the contribution of each substitution mechanism. Olivine buffered with magnesiowüstite produced absorbance bands at high wavenumbers ranging from 3566 to 3612 cm−1. About 50% of the total absorbance was found parallel to the a-axis, 30% parallel to the b-axis and 20% parallel to the c-axis. The total absorbance and hence water concentration in olivine follows the relationship of \( {C}_{{\mathrm{H}}_2\mathrm{O}}\propto {a_{{\mathrm{H}}_2\mathrm{O}}}^2 \), indicating that the investigated defect must involve four H atoms substituting for one Si atom (labelled as [Si]). Forsterite buffered with enstatite produced an absorbance band exclusively aligned parallel to the c-axis at 3160 cm−1. The band position, polarisation and observed \( {C}_{{\mathrm{H}}_2\mathrm{O}}\propto {a}_{{\mathrm{H}}_2\mathrm{O}} \) are consistent with two H substituting for one Mg (labelled as [Mg]). Ti-doped, enstatite-buffered olivine displays absorption bands and polarisation typical of Ti-clinohumite point defects where two H on the Si-site are charge-balanced by one Ti on a Mg-site (labelled as [Ti]). This is further supported by \( {C}_{{\mathrm{H}}_2\mathrm{O}}\propto {a}_{{\mathrm{H}}_2\mathrm{O}} \) and a 1:1 relationship of molar H2O and TiO2 in these experiments. Sc-doped, enstatite-buffered experiments display a main absorption band at 3355 cm−1 with \( {C}_{{\mathrm{H}}_2\mathrm{O}}\propto {a_{{\mathrm{H}}_2\mathrm{O}}}^{0.5} \) and a positive correlation of Sc and H, indicating the coupled substitution of a trivalent cation plus a H for two Mg (labelled as [triv]). Our data demonstrate that extreme care has to be taken when inferences from experiments conducted at \( {a}_{{\mathrm{H}}_2\mathrm{O}}=1 \) are applied to the mantle, where in most cases, a low \( {a}_{{\mathrm{H}}_2\mathrm{O}} \) persists. In particular, the higher exponent of the [Si] substitution mechanism means that the contribution of this hydrous defect to total water content will decrease more rapidly with decreasing \( {a}_{{\mathrm{H}}_2\mathrm{O}} \) than the contributions of the other substitution mechanisms. The experiments confirm previous results that the [Mg] mechanism holds an almost negligible amount of water under nearly all T-P-fO2-fH2O conditions that may be anticipated in nature.
However, the small amounts of H2O we find substituting by this mechanism are similar in the experiments on forsterite doped with either Sc or Ti to those in the undoped forsterite at equivalent \( {a}_{{\mathrm{H}}_2\mathrm{O}} \) (all buffered by enstatite), confirming the assumption that, thermodynamically, \( {C}_{{\mathrm{H}}_2\mathrm{O}} \) substituting by each mechanism does not depend on the water concentration that substitutes by other mechanisms.
Trace concentrations of hydrogen-bearing point defects can change the mechanical properties of olivine, the dominant phase of the Earth's upper mantle, producing a profound effect on mantle rheology (Demouchy et al. 2007; Karato et al. 1986; Mei and Kohlstedt 2000). Such defects also influence transport properties like electrical conductivity (Karato and Wang 2013). Fourier transform infrared spectroscopy (FTIR) shows that the H substitutes by bonding to O2− anions, producing OH− species according to the generalised reaction O2− + H2O = 2OH−. The H incorporation thus depends on the fugacity of H2O, justifying its colloquial designation as "water". Experimental studies on synthetic systems have identified four principal substitution mechanisms, which can be distinguished by their different O–H stretching modes, using FTIR spectroscopy. Following Kovacs et al. (2010), we use [Si] to denote the substitution mechanism whereby H is charge-balanced by silicon vacancies, [Mg] for H charge-balanced by Mg vacancies, [triv] for H co-substituting with a trivalent cation, and [Ti] for H co-substituting with titanium. The thermodynamic equilibria describing these four substitutions are:
$$ \left[\mathrm{Si}\right]:2{\mathrm{H}}_2\mathrm{O}+{\mathrm{Mg}}_2{\mathrm{SiO}}_4={\mathrm{Mg}}_2{\mathrm{H}}_4{\mathrm{O}}_4+{\mathrm{SiO}}_2 $$
$$ \left[\mathrm{Mg}\right]:{\mathrm{H}}_2\mathrm{O}+0.5\ {\mathrm{Mg}}_2{\mathrm{SiO}}_4+0.5\ {\mathrm{SiO}}_2={\mathrm{Mg}\mathrm{H}}_2{\mathrm{SiO}}_4 $$
$$ \left[\mathrm{triv}\right]:\ 0.5\ {\mathrm{H}}_2\mathrm{O}+{\mathrm{SiO}}_2+{\mathrm{R}}^{3+}{\mathrm{O}}_{1.5} = {\mathrm{R}}^{3+}{\mathrm{H}\mathrm{SiO}}_4 $$
$$ \left[\mathrm{Ti}\right]:\ {\mathrm{H}}_2\mathrm{O}+0.5\ {\mathrm{Mg}}_2{\mathrm{SiO}}_4+{\mathrm{TiO}}_2={\mathrm{Mg}\mathrm{TiH}}_2{\mathrm{O}}_4+0.5\ {\mathrm{SiO}}_2 $$
which give the following equilibrium constants for pure forsterite (that is, in the approximation that \( {\mathrm{a}}_{\mathrm{M}{\mathrm{g}}_2\mathrm{Si}{\mathrm{O}}_4}^{\mathrm{ol}} \) is unity):
$$ {\mathrm{K}}^{\left[\mathrm{Si}\right]}={a}_{{\mathrm{Mg}}_2{\mathrm{H}}_4{\mathrm{O}}_4}^{ol}{a}_{{\mathrm{SiO}}_2}{\left(\mathrm{f}\left({\mathrm{H}}_2\mathrm{O}\right)\right)}^{-2} $$
$$ {\mathrm{K}}^{\left[\mathrm{Mg}\right]}={a}_{{\mathrm{MgH}}_2{\mathrm{SiO}}_4}^{ol}{\left({a}_{{\mathrm{SiO}}_2}\right)}^{-0.5}{\left(\mathrm{f}\left({\mathrm{H}}_2\mathrm{O}\right)\right)}^{-1} $$
$$ {\mathrm{K}}^{\left[\mathrm{triv}\right]}={a}_{{\mathrm{R}}^{3+}{\mathrm{H}\mathrm{SiO}}_4}^{ol}{\left({a}_{{\mathrm{SiO}}_2}\right)}^{-1}{\left({a}_{{\mathrm{R}}^{3+}{\mathrm{O}}_{1.5}}\right)}^{-1}{\left(\mathrm{f}\left({\mathrm{H}}_2\mathrm{O}\right)\right)}^{-0.5} $$
$$ {\mathrm{K}}^{\left[\mathrm{Ti}\right]}={a}_{{\mathrm{MgTiH}}_2{\mathrm{O}}_4}^{ol}{\left({a}_{{\mathrm{SiO}}_2}\right)}^{0.5}{\left({a}_{{\mathrm{TiO}}_2}\right)}^{-1}{\left(\mathrm{f}\left({\mathrm{H}}_2\mathrm{O}\right)\right)}^{-1} $$
The four substitution mechanisms therefore depend on the activity of silica (a SiO2) in different ways, and in the case of the [triv] and [Ti] substitutions, the chemical potentials (effectively, the availability) of the relevant enabling components, R3+O1.5 and TiO2, respectively. But the most significant implication is that the way that H is incorporated in olivine may vary with the fugacity of H2O, depending on whether the H atoms needed to achieve charge balance are completely associated with the point defect by being bonded to the oxygen atoms surrounding it in specific locations, or are disordered over the lattice by being bonded to oxygen atoms without regard to location. For example, if the four H atoms of the [Si] mechanism are bonded to oxygen atoms surrounding the Si site vacancy to produce local charge balance (that is short-range order), as recently shown by Xue et al. (2017), then the activity of the Mg2H4O4 component would be proportional to its mole fraction, such that \( {C}_{{\mathrm{H}}_2\mathrm{O}}^{\left[\mathrm{Si}\right]} \propto \mathrm{f}{\left({\mathrm{H}}_2\mathrm{O}\right)}^2 \). Alternatively, if the H atoms were bonded to four oxygen atoms at random positions in the lattice without short-range order, the configurational entropy of the Mg2H4O4 component would be correspondingly greater, with its activity being proportional to its mole fraction to the power of four (or five, if the configurational entropy due to the Si vacancy itself is included), such that \( {C}_{{\mathrm{H}}_2\mathrm{O}}^{\left[\mathrm{Si}\right]} \propto \mathrm{f}{\left({\mathrm{H}}_2\mathrm{O}\right)}^{1/2} \) or \( \mathrm{f}{\left({\mathrm{H}}_2\mathrm{O}\right)}^{2/5} \). For the [triv] mechanism, there is only the one H atom per formula unit of R3+HSiO4, so if the position where the H bonds to an oxygen is determined by short-range order with respect to where the R3+ cation is substituting, the activity of the R3+HSiO4 component will again be proportional to its mole fraction, but with the result that \( {C}_{{\mathrm{H}}_2\mathrm{O}}^{\left[\mathrm{triv}\right]} \propto \mathrm{f}{\left({\mathrm{H}}_2\mathrm{O}\right)}^{0.5} \). The close association of H substituting by the [triv] mechanism with the R3+ cation is demonstrated by the correlation of the wavelength of the [triv] infrared absorption with the ionic radius of the R3+ cation (Berry et al. 2007a). If local charge balance proves to be the case, then even at one specific temperature and pressure, the way that "water" is incorporated into olivine should change with the activity of H2O, \( {a}_{{\mathrm{H}}_2\mathrm{O}} \), defined as f(H2O)/f(H2O)°, where f(H2O)° is the fugacity of pure H2O at the T and P of interest. This is an important consideration when laboratory observations are to be extrapolated to olivine in natural environments such as the Earth's mantle. In the laboratory, experiments are typically conducted at high \( {a}_{{\mathrm{H}}_2\mathrm{O}} \) to maximise H2O contents, which facilitates measurements, but in the Earth's mantle, \( {a}_{{\mathrm{H}}_2\mathrm{O}} \) is lowered by the presence of other components, and an upper limit to \( {a}_{{\mathrm{H}}_2\mathrm{O}} \) at a given T, P and composition is imposed by partial melting (Green et al. 2010). Whenever amphibole is present, \( {a}_{{\mathrm{H}}_2\mathrm{O}} \) is constrained by amphibole-pyroxene-olivine equilibria and is considerably lower than if a free aqueous fluid phase were present (Lamb and Popp 2009).
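To make the contrast between these candidate exponents concrete, the short Python sketch below (our illustration, not part of the original study) tabulates the relative water content predicted for each mechanism as water activity decreases, using the local charge-balance exponents of 2, 1, 0.5 and 1 for [Si], [Mg], [triv] and [Ti], respectively; the absolute scaling constants are arbitrary.

```python
import numpy as np

# Relative C_H2O predicted for each substitution mechanism under the local
# charge-balance (short-range order) assumption, C_H2O proportional to f(H2O)^n.
# The exponents follow from equilibria (5)-(8); the absolute constants are
# arbitrary, so each mechanism is normalised to 1 at a_H2O = 1.
exponents = {"[Si]": 2.0, "[Mg]": 1.0, "[triv]": 0.5, "[Ti]": 1.0}

a_H2O = np.array([1.0, 0.75, 0.5, 0.25, 0.1])   # water activity

for defect, n in exponents.items():
    relative_C = a_H2O ** n                     # C_H2O(a) / C_H2O(a = 1)
    print(f"{defect:7s}", np.round(relative_C, 3))
```

The table this prints shows why the relative importance of [Si] falls off fastest as \( {a}_{{\mathrm{H}}_2\mathrm{O}} \) decreases, which is the central issue the experiments described below are designed to test.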
The aim of the experimental study presented here is to test the relationships between \( {C}_{{\mathrm{H}}_2\mathrm{O}} \) and \( {a}_{{\mathrm{H}}_2\mathrm{O}} \) for the four principal substitution mechanisms. We chose a representative condition of 3.0 GPa and 1050 °C for experimental convenience, with \( {a}_{{\mathrm{H}}_2\mathrm{O}} \) varied using H2O–NaCl mixtures. Although the [triv] substitution in natural olivines is likely mostly due to Cr3+ and Fe3+ (e.g. Tollan et al. 2015), both these elements also occur in olivine in a 2+ valence state, Fe predominantly so, making the determination of the amounts of Fe3+ or Cr3+ from analysed Fe or Cr impractical. Yet, to understand the [triv] substitution mechanism, it is desirable to know the concentration of the R3+ cation in octahedral coordination. For this reason, we sought a redox-insensitive element with a substantial solubility in forsterite, and a large enough ionic radius to substitute into its octahedral sites only—unlike Al, which may also be important in promoting water solubility in olivine (Grant et al. 2007a). There is really only one choice, Sc3+.
Preparation of starting materials
For experiments investigating [Ti], [triv] and [Mg] hydrous defects, powdered synthetic Ti- and Sc-doped forsterite was prepared by a process of solution-gelation. Magnesium nitrate was dissolved in water slightly acidified by nitric acid, with Ti and Sc added to produce forsterite containing 2000 ppm of Ti and Sc, respectively, using ammonium bis(oxalato)oxotitanate(IV) hydrate or scandium oxide. Concentrations in this paper are given in ppm by weight, i.e., μg g−1. Tetra-ethyl orthosilicate was added with ethanol, after which a few drops of ammonia initiated the gelation process, with the mixture then left overnight to allow complete precipitation. The precipitate was dehydrated by heating on a hot plate at low temperatures for 24 h followed by intensive heating, first over a Bunsen burner and then in a box furnace at 600 °C. Pure forsterite, produced by the same method (but without the addition of Sc and Ti), was then mixed with the Sc- and Ti-doped forsterite powders in various ratios to produce the starting compositions for the hydration experiments. Concentrations of Sc and Ti in recovered olivine crystals are given in Table 1. In order to buffer the silica activity, enstatite powder was also produced by sintering synthetic forsterite with SiO2 in the appropriate stoichiometric proportions. For experiments investigating the [Si] defect, powdered San Carlos olivine was run with synthetic magnesiowüstite (Fe0.7Mg0.3O) to produce a low a SiO2, which enhances water substitution by this mechanism.
Table 1 Initial compositions of the H2O-NaCl fluids and dopant concentrations of recovered forsterite measured by LA-ICP-MS
Piston-cylinder experiments
One challenge of the experiments was to produce crystals of H2O-bearing forsterite or olivine large enough for FTIR spectroscopy, but at sufficiently low temperature to avoid melting and excessive dissolution of silica in the fluid, which informed the choice of 1050 °C at 3 GPa. The crystals were grown from the starting materials in a piston-cylinder apparatus. Powders of pure or doped forsterite were packed inside Pt capsules, with a layer of enstatite powder to buffer a SiO2, and oxygen fugacity was controlled by the intrinsic conditions of the assembly (estimated to be approximately equivalent to the fayalite-magnetite-quartz buffer); at which conditions, the only redox-sensitive element, Ti, should be essentially all Ti4+ (Mallmann and O'Neill 2009). The experiments using San Carlos olivine and magnesiowüstite were loaded into Au capsules, with oxygen fugacity internally buffered using layers of Re and ReO2 powders (Pownceby and O'Neill 2000) to ensure a constant Fe2+/Fe3+ in the olivine and magnesiowüstite. In all experiments, \( {a}_{{\mathrm{H}}_2\mathrm{O}} \) was controlled by adding variable amounts of NaCl powder and distilled H2O. Water was added with a microsyringe, and the amount added was monitored with a microbalance before and after welding to check for evaporative loss. The capsules were contained in MgO-graphite-glass-NaCl assemblies and run in an end-loaded Boyd-type piston-cylinder apparatus. Experiments were performed at 3.0 GPa and 1050 °C for 168 h with temperature monitored by a type B (Pt0.7Rh0.3-Pt0.94Rh0.06) thermocouple contained within mullite. The experiment was ended by switching off the power. The capsule was pierced and inspected for the presence of free fluid, which confirmed that fluid had not escaped during the run. The capsule was then cut with a scalpel and peeled open. Relatively large forsterite (or olivine) crystals were recovered by picking under an optical microscope.
All the forsterite crystals that were subsequently analysed grew during the experiments, and the finely ground San Carlos olivine completely recrystallised, which is necessary to achieve the equilibrium point-defect structure (Matveev et al. 2001). That all the crystals characterised in this study grew during their synthesis in equilibrium with the buffering assemblages and fluid phase is in contrast to the experimental approach that hydroxylates pre-existing crystals (e.g. Bai and Kohlstedt 1993; Zhao et al. 2004; Gaetani et al. 2014). In this latter approach, H moves into the crystal by solid-state diffusion, but the pre-existing point-defect structure of the olivine crystal is conserved. This phenomenon was exploited by Bai and Kohlstedt (1993) to study olivine pre-equilibrated at different oxygen fugacities at atmospheric pressure and then hydroxylated at one oxygen fugacity at a given temperature and pressure. In such experiments, the point-defect structure under which the water is incorporated is metastable and distinguishable from that expected at global equilibrium (Matveev et al. 2001). A spectacular example of conserving pre-existing point-defect structure is the study of Jollands et al. (2016) on the decoration of Ti3+ in forsterite to form the [triv] substitution, by hydroxylation under oxidizing conditions. Stabilizing Ti3+ in forsterite requires highly reducing conditions (Mallmann and O'Neill 2009); accordingly, Jollands et al. (2016) prepared their material using a CO–CO2 gas mixture with 97% CO at 1500 °C and 1 bar. The hydroxylation was then carried out at oxygen fugacities up to twelve orders of magnitude higher, imposed by the Re–ReO2 and Ag–Ag2O buffers at 850 °C and 1.5 GPa (Jollands et al. 2016).
There is unnecessary controversy in the literature on the interpretation of experimental results on the incorporation of water in olivine due to the failure to distinguish clearly between the two types of experiments, those aimed at achieving global equilibrium under the conditions at which the water is incorporated and those aimed at hydroxylating existing point-defect structures and compositions, including the minor-element concentrations that determine the amounts of water associated with the [triv] substitution mechanisms. We suggest that to reduce the confusion, all experiments should be labelled "G-type" if global equilibrium is the target and "M-type" if a metastable equilibrium is sought. The experiments reported here are G-type. The interpretation of M-type experiments is likely to be the more difficult by far, because the metastable substitutions depend on the rates of diffusion of the point-defects and/or elements that enable them; in contrast, the global equilibrium that is the aim of a G-type experiment is independent of transport phenomena, by definition. The interpretation of M-type experiments may therefore be aided by H diffusion studies (e.g. Mackwell and Kohlstedt 1990; Demouchy and Mackwell 2006; Padron-Navarta et al. 2014). One potential problem is if the experimental products fall in the no man's land between the two types.
The forsterite and olivine crystals were analysed by FTIR spectroscopy, using a Bruker Tensor 27 infrared spectrometer with a liquid nitrogen-cooled mercury cadmium telluride detector coupled to a Bruker Hyperion infrared microscope. The analysis chamber was continuously purged with dry air in order to minimise interference of the intrinsic forsterite absorbance band by atmospheric background. Individual crystals were placed on a 100 μm Cu wire grid and measured in transmission mode with polarised and unpolarised light. For polarised measurements, crystal orientation was deduced through comparison of the silicate overtone region (~1200–2200 cm−1) with reference spectra for each principal orientation (Lemaire et al. 2004). Background analyses were taken at regular intervals, and both sample and background measurements were the average of 64 individual scans at a resolution of 4 cm−1. After analysis, the acquired spectra were processed using the Bruker OPUS© software package. Any residual atmospheric contamination was minimised by application of the atmospheric compensation tool, followed by baseline correction using the "concave rubber band" software tool using 64 baseline points and four iterations. Occasionally, a further baseline correction was applied in order to remove broad absorbance bands due to fluid inclusions.
Quantification of data
The areas beneath the major absorbance bands from each experiment were calculated using the integration function of the OPUS software package, and the resulting absorbance value was normalised to a thickness of 1 cm. Crystal thickness was determined by exploiting the relationship with absorbance in the silica overtone region (1625–2150 cm−1), whereby thickness (in microns) is calculated by dividing the integrated absorbance of the overtones (between 1625 and 2150 cm−1) by the appropriate coefficient. For unpolarised measurements, the coefficient of Shen et al. (2014) was used (0.553). For polarised measurements, coefficients specific for each principal axis were determined by conducting unpolarised and polarised measurements on the same crystal and comparing the respective overtone absorbances. By doing this for multiple crystals, we established coefficients of 0.75, 0.50 and 0.56 for a, b and c-axes, respectively, which were then used in all subsequent thickness calculations for polarised data. Water contents for polarised measurements were calculated by summing the normalised absorbance of each major band along each of the three principal axes and dividing by the integral absorption coefficient. For each experiment design, only the absorbance due to the principal bands is reported (see Table 2 for the integration ranges used), guided by previous experimental and theoretical studies that have reported the association of different defects with specific band positions (e.g., Lemaire et al. 2004; Berry et al. 2005; Walker et al. 2007; Kovacs et al. 2010; Umemoto et al. 2011; Ingrin et al. 2013). Any additional less intense bands at distinctly different wavenumbers were not included in calculations of water, with the exception of bands attributed to Mg vacancies, which are reported separately. The reported uncertainty in the water contents is the standard deviation from measurements of many individual crystals, and is a measure of precision rather than accuracy, as it does not include systematic or semi-systematic uncertainties due to additional factors such as errors in published absorption coefficients, thickness calibrations and baseline corrections, which are likely to be significant, but are difficult to quantify. For the unpolarised absorbances, the measurements from at least 12 crystals from each experiment were averaged, divided by the same integral absorbance coefficient and then multiplied by three, following the method of Kovacs et al. (2008). Uncertainty in water contents following this method is dependent on the degree of polarisation of the defects, the absolute water contents and any unintentional bias in orientation due to crystal shape. Based on this and the comparison with Fig. 1 of Kovacs et al. (2008), an uncertainty of 10% was applied to all unpolarised data.
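As an illustration of the workflow just described, the sketch below (not the authors' code) converts an integrated absorbance to a water concentration. The thickness coefficient of 0.553 (Shen et al. 2014) and the tripling of averaged unpolarised absorbances (Kovacs et al. 2008) are taken from the text; the integral absorption coefficient of 28,450 L mol−1 cm−2 (Bell et al. 2003, discussed later in the paper), the olivine density and the input absorbances are assumed, placeholder values.

```python
# Absorbance-to-water sketch; numerical inputs are placeholders except where quoted
# in the text (0.553 from Shen et al. 2014; factor of 3 from Kovacs et al. 2008;
# 28,450 L mol-1 cm-2 from Bell et al. 2003).  A density of 3.355 g cm-3 is assumed.

def ppm_H2O(absorbance_per_cm, eps=28450.0, rho=3.355):
    """Convert integrated absorbance (normalised to 1 cm thickness) to ppm H2O by weight."""
    c_mol_per_L = absorbance_per_cm / eps
    return c_mol_per_L * 18.015 / rho * 1e3

overtone_area = 35.0                              # 1625-2150 cm-1, unpolarised (placeholder)
thickness_cm = (overtone_area / 0.553) * 1e-4     # thickness in microns -> cm

# Polarised route: sum of the normalised band absorbances along a, b and c.
A_abc_sum = 0.8 / thickness_cm                    # placeholder band area, normalised to 1 cm
print(round(ppm_H2O(A_abc_sum), 1))

# Unpolarised route: average normalised absorbance of many grains, multiplied by 3.
A_unpol_mean = 0.25 / thickness_cm                # placeholder
print(round(ppm_H2O(3.0 * A_unpol_mean), 1))
```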
Table 2 Calculated water concentrations in each of the three principal crystallographic orientations using polarised FTIR data and also total water calculated from unpolarised FTIR data using the method of Kovacs et al. (2008)
Comparison of water concentrations for different point defects calculated using polarised light and unpolarised light, following the method of Kovacs et al. (2008). Note that the calculated water contents are based only on the principal absorbance bands from each experiment, as discussed in the "Methods" section. Integration ranges are as follows: [Si] 3650–3422 cm−1; [Ti] 3586–3495 cm−1; [triv] 3375–3293 cm−1
Spectral deconvolution
When appropriate, complex spectra were deconvoluted in order to isolate the absorbance of individual, overlapping bands. We used a similar method to that of Tollan et al. (2015): bands were fit to a Gaussian function by linear least squares regression, with the optimum halfwidth for each band held constant between spectra such that only band height was allowed to vary. The starting point for each model was the band positions from Ti-free [Mg] experiments. The positions of any additional bands were deduced through the regression procedure.
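The sketch below (our own, with placeholder band positions and half-widths) illustrates this kind of fit: because the band centres and half-widths are held fixed, the Gaussian heights are the only free parameters and can be obtained by linear least squares.

```python
import numpy as np

# Gaussian deconvolution with fixed centres and fixed half-widths, so that only the
# band heights vary and the problem reduces to linear least squares.  The centres,
# widths and synthetic "spectrum" below are placeholders, not values from the paper.

def gaussian(x, centre, fwhm):
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    return np.exp(-0.5 * ((x - centre) / sigma) ** 2)

wavenumber = np.linspace(3450, 3650, 401)            # cm-1
centres = [3525.0, 3566.0, 3572.0, 3578.0, 3612.0]   # fixed band positions (assumed)
fwhms = [12.0, 10.0, 10.0, 9.0, 8.0]                 # fixed half-widths (assumed)

# Design matrix: one column per unit-height Gaussian.
G = np.column_stack([gaussian(wavenumber, c, w) for c, w in zip(centres, fwhms)])

# 'spectrum' stands in for a baseline-corrected absorbance spectrum.
true_heights = np.array([0.2, 0.5, 0.6, 0.4, 0.3])
noise = 0.01 * np.random.default_rng(0).normal(size=wavenumber.size)
spectrum = G @ true_heights + noise

heights, *_ = np.linalg.lstsq(G, spectrum, rcond=None)   # fitted band heights

# Integrated area of each Gaussian = height * fwhm * sqrt(pi / (4 ln 2)).
areas = heights * np.array(fwhms) * np.sqrt(np.pi / (4.0 * np.log(2.0)))
print(np.round(areas, 3))
```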
LA-ICP-MS
The concentrations of Ti and Sc were determined by laser ablation inductively coupled plasma mass spectrometry (LA-ICP-MS). The laser ablation system consists of a CompexPro 110F ArF excimer laser (193 nm wavelength) with a custom-built two-volume ablation cell for rapid aerosol extraction and washout. The laser was run with a frequency of 5 Hz in energy constant mode, which ensured a stable fluence at the sample surface of 3 to 4 J/cm2. A focused spot with a diameter of 37 μm was used for sample and standard analyses. The cell was connected to an Agilent 7700x ICP-MS, and ablated material was transported to the ICP-MS in a stream of ultra-high purity He and Ar. Analyses were acquired as a time-resolved signal with 20 s of background followed by 40 s of counts on ablated material. NIST 610 and 612 glasses were used as calibration standards, and Si was used as internal standard.
Each experiment yielded from three to 15 crystals large enough and sufficiently free of fluid inclusions to obtain quantifiable FTIR data. Polarised spectra from the three principal axes were recovered from each experiment, complemented by unpolarised spectra. Water concentrations calculated from both polarised spectra and unpolarised spectra following the method described by Kovacs et al. (2008) are in very good agreement for the average water concentrations calculated from each experiment, which range from 50 to 2500 ppm (Fig. 1), particularly at lower water concentrations, which are within the range typical of natural mantle olivine. This is consistent with the theory of Sambridge et al. (2008), as implemented by Kovacs et al. (2008), and illustrates the viability of using unpolarised data to calculate accurate water concentrations in olivine with a variety of defect populations. The concentration of H2O associated with each species of hydrous point defect shows a clear relationship with the water activity of the experiment (Fig. 2). Distinctly different polarised band positions and shapes can be identified from experiments investigating each of the different defect types.
Variation in water concentration with XH2O in H2O–NaCl fluids for the four different point defects. The calculated water contents are based only on the principal absorbance bands from each experiment, as discussed in the "Methods" section of the text. The best fit curves are from least squares non-linear regression with weighting of both variables according to the standard deviations in Tables 2 and 3
Low SiO2 activity experiments: [Si]
Olivines from experiments conducted at low silica activity (buffered by magnesiowüstite) are characterised by complex polarised spectra consisting of four major absorbance bands and six further minor absorbance bands (Fig. 3). Bands are moderately to strongly polarised, with very small FWHM (full width at half maximum). Parallel to E||a, the strongest bands are centred at 3612 cm−1 followed by 3578 cm−1. The band at 3612 cm−1 shows strong asymmetry, with a prominent shoulder positioned at 3600 cm−1, whilst the band at 3578 cm−1 shows minor asymmetry with a small shoulder positioned at 3550 cm−1. Parallel to E||b, the strongest band is centred at 3550 cm−1, followed by the band at 3578 cm−1 which has similar intensity as in the E||a direction. The band at 3612 cm−1 is also present, but at a much lower intensity than in the E||a direction. Parallel to E||c, only one major band is present, centred at 3566 cm−1. This single band however is strongly asymmetric, with a double shoulder at 3545 and 3533 cm−1. The band at 3612 cm−1 is again present, this time at even lower intensities than along either E||a or E||b. None of the other major bands from E||a or E||b are above the limit of detection.
Polarised spectra measured parallel to principal crystallographic axes for [Si] experiments, ordered from highest water activity (top) to lowest (bottom). Spectra are the average of multiple measurements, normalised to 1 cm and offset for clarity. The indicated band positions are (1) 3612 cm−1, (2) 3578 cm−1, (3) 3550 cm−1, (4) 3612 cm−1, (5) 3578 cm−1, (6) 3550 cm−1, (7) 3477 cm−1, (8) 3612 cm−1 and (9) 3566 cm−1
Water associated with titanium: [Ti]
Two series of Ti-doped experiments were conducted: one set at varying \( {a}_{{\mathrm{H}}_2\mathrm{O}} \) but fixed bulk Ti concentration and another at fixed \( {a}_{{\mathrm{H}}_2\mathrm{O}} \) and varying bulk Ti concentration. Spectra from both types of experiment shared the same absorbance features. Forsterite from experiments doped with Ti reveals less complex spectra than those from [Si] experiments, with only two major bands and one minor band, which are present in each direction (Fig. 4). The two most prominent bands are centred at 3572 and 3525 cm−1, consistent with previous studies (Berry et al. 2005; Kovacs et al. 2010; Padrón-Navarta et al. 2014). The 3572 cm−1 band is typically the more intense of the two, by a factor of approximately 2 and 1.5 in the E||a and E||b directions respectively. In the E||c direction, the 3572 cm−1 band is more intense at high \( {a}_{{\mathrm{H}}_2\mathrm{O}} \), but the ratio of the two peaks decreases to unity as a function of decreasing \( {a}_{{\mathrm{H}}_2\mathrm{O}} \). Due to an overlapping contribution from [Si] bands with the 3572 cm−1 band (Padrón-Navarta et al. 2014), spectra from [Ti] experiments were deconvoluted as described in the "Methods" section, and the following discussion of [Ti] data is based on the results of this deconvolution (Fig. 7, Table 4). Titanium-doped forsterite covers a range of Ti contents, from 330 to 88 ppm (Table 1). Multiple crystals from the same experiment had similar concentrations, with standard deviations of 5–15%.
Polarised spectra measured parallel to principal crystallographic axes for [Ti] experiments, ordered from highest water activity (top) to lowest (bottom). Spectra are the average of multiple measurements, normalised to 1 cm and offset for clarity. The indicated band positions are: (1) 3612 cm−1, (2) 3572 cm−1, (3) 3525 cm−1, (4) 3612 cm−1, (5) 3572 cm−1, (6) 3525 cm−1, (7) 3572 cm−1, (8) 3525 cm−1, (9) 3220 cm−1, (10) 3160 cm−1
Water associated with trivalent cations: [triv]
As with [Ti] experiments, two series were conducted: one with fixed \( {a}_{{\mathrm{H}}_2\mathrm{O}} \) and another with fixed Sc concentration, and likewise, spectra from both experiment types were consistent with each other. Scandium-doped forsterite has simple spectra which are consistent with the study of Berry et al. (2007a), with one major band with very small FWHM at 3355 cm−1, which appears strongly in all three directions but most intensely along E||a, and a minor band at 3320 cm−1, found principally along E||c (Fig. 5). A further minor, much broader band centred at 3160 cm−1 is similar to those found in [Ti] and [Mg] experiments and thus unrelated to the presence of Sc. Average Sc contents of forsterite recovered from the different experiments ranged from 179 to 1228 ppm. Standard deviations are typically less than 10%, but rarely up to 20%.
Polarised spectra measured parallel to principal crystallographic axes for [triv] experiments (Sc-doped), ordered from highest water activity (top) to lowest (bottom). Spectra are the average of multiple measurements, normalised to 1 cm and offset for clarity. The indicated band positions are (1) 3355 cm−1, (2) 3355 cm−1, (3) 3320 cm−1, (4) 3355 cm−1, (5) 3320 cm−1 and (6) 3160 cm−1
High SiO2-activity experiments: [Mg]
Two experiments were run in the system MgO–SiO2–H2O with excess enstatite and without Ti or Sc present to create hydrated Mg vacancies, to complement the results from both the [Ti] and [triv] experiments, which were also enstatite-buffered and produced the same absorbance bands in addition to those from the [Ti] and [triv] substitutions (Figs. 4 and 5). The absorption occurs at lower wavenumbers, with two broad bands typically present, one at 3220 cm−1 and another at 3160 cm−1, with the former band generally more intense (an exception to this is discussed later). Neither of these bands was produced in [Si] experiments, which were conducted at much lower silica activity. Additional minor bands are observed in the [Mg] experiments, the most prominent of which is centred at 3567 cm−1 followed by a number of less intense bands between 3300 and 3500 cm−1.
Band positions and point defect assignments
Olivine and forsterite from [Si], [Ti], [triv] and [Mg] experiments all have distinctive infrared spectra, indicating that the experiments were successful in producing different point defects. This is confirmed by the different polarisations of the principal bands for olivine/forsterite from each type of experiment (Fig. 6), requiring that O–H bonds be configured differently for each defect. Bands for olivine from [Si] experiments show the strongest absorbance along E||a, followed by E||b and E||c. For forsterite from [Ti] experiments, the order of strongest absorbance is E||a, E||c and then E||b, whilst for forsterite from [triv] experiments, E||a shows the strongest absorbance, with E||b and E||c the same within uncertainty. [Mg] bands meanwhile are only present along E||c. We have assigned the bands from each experiment to a different point defect stoichiometry based on the design and intent of each experiment and through comparison with similar experiments/spectra from the literature.
Percentage that the absorbance from each orientation contributes to the total absorbance, for different point-defect types. The absorbance associated with [Mg] defects is only detectable along E||c
Bands for olivines from [Si] experiments (Fig. 3) have similar or identical positions with those in other low silica activity (wüstite- or magnesiowüstite-buffered) experimental studies (Aubaud et al. 2007; Lemaire et al. 2004; Padrón-Navarta et al. 2014; Withers and Hirschmann 2008), but also occur in experiments buffered at high silica activity conducted at significantly higher pressures (Smyth et al. 2006), with the highest intensity bands located at high wavenumbers (>3500 cm−1). The precise location of the bands is occasionally offset by 1–8 cm−1: for example, the band at 3550 cm−1 is found at slightly higher wavenumbers in other studies, likely due to differences in the configuration of Fe in M sites adjacent to the hydrated Si vacancy (Blanchard et al. 2017). The general complexity and number of bands is much greater for defects in olivine from [Si] experiments than any other defect type investigated, indicating the involvement of multiple hydroxyl groups in a number of configurations, as explored through atomistic simulations (Balan et al. 2011; Umemoto et al. 2011; Walker et al. 2007). These observations are all consistent with hydrogen bonding to oxygen atoms surrounding silicon vacancies in the olivine structure, as denoted in Eq. 1 and supported by other studies investigating hydrous defect configurations (Ingrin et al. 2013; Blanchard et al. 2017).
Forsterite from the [Ti] experiments also displays absorbance at high wavenumbers, with bands at 3572 and 3524 cm−1 being the most intense (Fig. 4). These band positions are unique to [Ti] experiments, and are identical in wavenumber to previous experimental studies of Ti-doped olivine/forsterite (Berry et al. 2005; Kovacs et al. 2010; Padrón-Navarta et al. 2014). They are also the most commonly observed bands in natural upper mantle olivines sampled in spinel peridotite xenoliths, although their contribution to the total absorbance for a given sample varies significantly (Berry et al. 2005; Denis et al. 2013; Schmädicke et al. 2013). Precise band positions generally vary slightly in the literature, which can be explained in part due to the overlap with the [Si] bands at 3578–3580 and 3566 cm−1 (Padrón-Navarta et al. 2014). As with previous studies using Ti as a dopant, we assign these bands to formation of a titano-clinohumite point defect within the forsterite structure (Eq. 2), with the resultant generation of a silicon vacancy explaining the similar wavenumbers to bands from [Si] experiments.
The [triv] experiments produced forsterite with absorbance dominated by a single intense band at 3355 cm−1, similar to the Sc-doped experiments of Berry et al. (2007a). This band, along with the minor band at 3320 cm−1, are unique to the [triv] experiments and are assigned to coupled substitution of Sc3+ and H+, with the latter providing local charge balance in octahedral site vacancies (Eq. 3). In natural olivines, bands in this wavenumber region occur sporadically, but can occasionally be the dominant defect type, particularly in olivines with very low Ti concentrations (Soustelle et al. 2013; Soustelle et al. 2010; Tollan et al. 2015). The precise position, polarisation and relative intensity of bands from natural olivines differ from those produced in our [triv] experiments, indicating that Sc is not a significant partner, as expected from the much lower concentrations of Sc in natural mantle olivine (typically 2–10 ppm). Ferric iron and Cr3+ are both much more abundant trivalent cations in natural olivine and are the likely candidates for the 3+ cation partnering H in this substitution mechanism (Tollan et al. 2015; Blanchard et al. 2017).
Two experiments were conducted specifically investigating [Mg] defects, which are generated most strongly under conditions of high silica activity (orthopyroxene/enstatite-buffered). However, identical bands were produced in all the other enstatite-buffered experiments, but not in the [Si] experiments, which were buffered at low \( {a}_{{\mathrm{SiO}}_2} \) by magnesiowüstite. The bands associated with the [Mg] defect are located at lower wavenumbers, and display characteristically broad absorbance with large FWHM, but only along E||c. Two bands can be distinguished, one centred at 3160 cm−1 and another at 3220 cm−1. Both of these bands are present in [Ti] and [Mg] experiments, with the band at 3220 cm−1 more intense. In [triv] experiments, however, the band at 3160 cm−1 is more intense. Lemaire et al. (2004), Balan et al. (2011) and Umemoto et al. (2011) suggested that these band positions can be explained by protonation of vacant M1 and M2 sites. However, as pointed out by Blanchard et al. (2017), protonation of the M2 site was never explicitly modelled in these studies, and the variation in band position can be better explained by protonation of an M1 vacancy coupled with Fe2+ occupying either a neighbouring M1 or M2 site. Preferential ordering of Sc into one of the two kinds of octahedral sites would imply that vacancies concentrate in the other kind, with the H+ bonding to an O2− surrounding the vacancy, to even out the perturbation of charge density. Based on site occupancy arguments in Blanchard et al. (2017), it is perhaps more likely that the greater intensity at 3160 cm−1 in [triv] experiments indicates that Sc is partitioning more favourably onto the larger M2 site, with the vacancy and its compensating H favouring M1, assuming that Sc is behaving similarly to Fe3+. Similar or identical bands to these are commonly observed in other experimental studies and also in natural samples (e.g., Aubaud et al. 2007; Férot and Bolfan-Casanova 2012; Gaetani et al. 2014; Grant et al. 2007a, b). Band positions vary much more than bands from other defects, which may be due to the very broad and often low intensity of these bands, making it difficult to precisely locate the wavenumber of maximum intensity. In addition, partitioning of a wide range of trace and minor cations onto either or both octahedral sites could result in slightly different O–H bond lengths and an associated wavenumber shift.
Although our experimental strategy aims to emphasise only one of the four types of substitution mechanism in each group of experiments, it is important to recognise that absorption bands corresponding to some of the other substitution mechanisms are a minor feature of most of the spectra (Figs. 3, 4 and 5). A small band at 3350 cm−1 is often observed in [Si] and [Ti] experiments, most likely due to hydrogen bonding with trace amounts of Fe3+ (Berry et al. 2007a). However, the most significant additional band is the presence of relatively strong absorbance associated with silicon vacancies in [Ti] and [triv] experiments, despite the fact that they were run under enstatite-buffered conditions \( \left(\mathrm{high}\ {a}_{{\mathrm{SiO}}_2}\right) \). Whilst this observation combined with the general decrease in the ratio of water associated with Mg vacancies (i.e. \( {C}_{{\mathrm{H}}_2\mathrm{O}}^{\left[\mathrm{Mg}\right]} \)) to that associated with Si vacancies \( \left({C}_{{\mathrm{H}}_2\mathrm{O}}^{\left[\mathrm{Si}\right]}\right) \) with decreasing total water content \( \left({C}_{{\mathrm{H}}_2\mathrm{O}}\right) \) in the [Ti] experiments (Table 3) is qualitatively consistent with the theoretical calculations of Walker et al. (2007), the actual values of this ratio differ from those calculated by Walker et al. (2007), which predicts a greater decrease as total water content increases (see Fig. 3 of Walker et al. (2007)). Nevertheless, the important observation here is that at lower water activities, hydrated Mg vacancies are favoured over hydrated Si vacancies when all other conditions are kept constant, the reasons for which we address next.
Table 3 Calculated water contents in ppm for the [Mg] bands (hydrated Mg vacancies), based on the integration range 3268–3107 cm−1, from [Ti] and [triv] experiments. Also reported is the water associated with silicon vacancies in [Ti] experiments, calculated from band deconvolution. See the text for further details
Relationships with water activity, \( {a}_{{\mathrm{H}}_2\mathrm{O}} \)
Significant controversy prevails regarding the issue of relating measured absorptions to water contents. Several theoretical and experimental studies have demonstrated that the absorption coefficient should vary as a function of the vibrational energy characteristic of the configuration of H in a given defect (e.g., Kovacs et al. 2010; Balan et al. 2011; Ingrin et al. 2014; Blanchard et al. 2017). Hence, there should be a negative correlation between the infrared wavenumber and absorption coefficient. At present, however, whilst computational studies are useful in defining the expected trend, absolute values are not reliable. Experimental studies on the other hand produce more accurate values, but have so far explored a relatively small number of appropriate defects. Amongst the available experimental studies, the value of 28,450 L mol−1 cm−2 from Bell et al. (2003) has been most widely applied. Kovacs et al. (2010) produced an identical value for olivine bearing [Ti] and [triv] defects, although their absorption coefficients for [Si] and [Mg] defects were compromised by the occurrence of a previously unknown defect involving boron which comprised a significant proportion of the water in the "Pakistani" olivine used in their experiments (Ingrin et al. 2014). More notably, Withers et al. (2012) determined a strikingly different value of 45,200 L mol−1 cm−2 for synthetic olivines with most of the absorbance in their infrared spectra occurring at similarly high wavenumbers to the spectra in the study of Bell et al. (2003), albeit with a different defect ([Si] as opposed to [Ti]). This study utilised elastic recoil detection analysis as the matrix-independent method of absolute H2O concentration determination, as opposed to nuclear reaction analysis which was used by Bell et al. (2003). The reason for the discrepancy between absorption coefficients is currently unclear, but since the absolute determination of water content is of minor significance for this study, we apply the more commonly used value (Bell et al. 2003), 28,450 L mol−1 cm−2, with the caveat that calculated values may be systematically wrong as a result, something we address further in the discussion. Furthermore, provided that only one substitution mechanism is being considered, use of a different calibration would change \( {C}_{{\mathrm{H}}_2\mathrm{O}} \) proportionately, and the relationship to \( {a}_{{\mathrm{H}}_2\mathrm{O}} \) would not be affected.
The water contents, \( {C}_{{\mathrm{H}}_2\mathrm{O}} \), associated with the [Si], [Mg] and [triv] substitutions were fitted by weighted least squares regression with uncertainties in both \( {C}_{{\mathrm{H}}_2\mathrm{O}} \) and \( {X}_{{\mathrm{H}}_2\mathrm{O}} \) as given in Table 2, to the equation:
$$ \ln \left({C}_{{\mathrm{H}}_2\mathrm{O}}\right)= a+ b* \ln \left({X}_{{\mathrm{H}}_2\mathrm{O}}\right) $$
where \( {C}_{{\mathrm{H}}_2\mathrm{O}} \) is the concentration of water in ppm (μg g−1) for a particular point defect: a and b are constants and \( {X}_{{\mathrm{H}}_2\mathrm{O}} \) is the mole fraction of H2O in the H2O–NaCl fluids loaded into the capsule of the experiment, and is related to the activity of H2O by \( {a}_{{\mathrm{H}}_2\mathrm{O}}={X}_{{\mathrm{H}}_2\mathrm{O}}{\gamma}_{{\mathrm{H}}_2\mathrm{O}} \). For the purposes of this immediate discussion, we assume that γH2O, the activity coefficient, is unity (ideal mixing). This assumption will be revisited below. With this assumption, the constant b is the exponent from Eqs. 5, 6, 7 and 8, and the value derived from the regression provides a means of testing the stoichiometry of the different point defects.
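A sketch of one way to implement this fit is given below (illustrative only; the numbers are placeholders, not the data of Table 2). Because both variables carry uncertainties, orthogonal distance regression is a convenient choice for the weighted fit of the equation above.

```python
import numpy as np
from scipy.odr import ODR, Model, RealData

# Weighted fit of ln(C_H2O) = a + b*ln(X_H2O) with uncertainties on both variables.
# The data below are placeholders loosely resembling a [Si]-type series, not Table 2.
X_H2O = np.array([1.0, 0.8, 0.6, 0.4, 0.2])              # mole fraction H2O in the fluid
C_H2O = np.array([2400.0, 1550.0, 870.0, 390.0, 95.0])   # ppm H2O (placeholder)
sX = 0.02 * np.ones_like(X_H2O)                          # assumed uncertainties
sC = 0.10 * C_H2O

# Propagate to log space: d(ln y) = dy / y.
lnX, lnC = np.log(X_H2O), np.log(C_H2O)
s_lnX, s_lnC = sX / X_H2O, sC / C_H2O

def linear(beta, x):
    a, b = beta
    return a + b * x

fit = ODR(RealData(lnX, lnC, sx=s_lnX, sy=s_lnC), Model(linear), beta0=[8.0, 2.0]).run()
print(f"a = {fit.beta[0]:.2f}, exponent b = {fit.beta[1]:.2f} +/- {fit.sd_beta[1]:.2f}")
```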
Exponents for [Si], [triv] and [Mg] experiments are 2.5 ± 0.2, 0.6 ± 0.1 and 1.1 ± 0.1, which are close to but consistently higher than the predicted exponents of 2, 0.5 and 1. Such similar values are a strong indication that the substitution stoichiometries given by equilibria 5 to 7 are substantially correct. The data for the [Ti] experiments present the problem that there is substantial overlap of bands associated with [Si] with those that appear in the [Ti] experiments in the wavenumber region 3650–3450 cm−1. This is confirmed by [Mg] experiments, which were performed under identical conditions but without being doped with Ti. Spectra from these forsterite crystals show absorption bands at 3578 cm−1 along E||a and 3567 cm−1 along E||c, bands which are most prominently observed in experiments conducted at lower silica activity and higher pressure (Lemaire et al. 2004; Mosenfelder et al. 2011). In order to obtain the correct exponent for water associated with Ti, spectra were deconvoluted into 7–8 bands in an effort to isolate the overlapping, defect-specific bands (Fig. 7, Table 4). Following deconvolution, the areas corresponding to both the principal [Ti] bands and subsidiary bands associated with [Si] were calculated (Tables 2 and 3). Regression of these data gives exponents of [Ti] 0.95 ± 0.09, while for [Si], we obtain 2.6 ± 0.4, in good agreement with the value of 2.5 ± 0.2 calculated for olivine from the [Si] experiments. The agreement is evidence that we have successfully deconvoluted the absorbances associated with the [Si] and [Ti] defects from their measured sum.
Examples of deconvoluted spectra along E||a, E||b and E||c for [Ti] experiments. The deconvolution is necessary to isolate the absorbance due to [Ti] from that due to [Si]
Table 4 Calculated water contents associated with each band deconvoluted from the measured spectra of [Ti] experiments. The method used is explained in the text
There is a tendency for the exponents to be somewhat higher than those expected from the stoichiometry of reactions 1 to 4. This is explicable if there is a negative departure from ideal mixing in H2O–NaCl fluids. The data can be reconciled to the expected values of the exponents if we describe the H2O–NaCl binary at 1050 °C and 3 GPa by a regular solution model, such that RT ln \( {\gamma}_{{\mathrm{H}}_2\mathrm{O}}={W}_{{\mathrm{H}}_2\mathrm{O}-\mathrm{NaCl}}{\left(1-{X}_{{\mathrm{H}}_2\mathrm{O}}\right)}^2 \), with an interaction parameter \( {W}_{{\mathrm{H}}_2\mathrm{O}-\mathrm{NaCl}} \) of ~−5 kJ/mol. This is not in agreement with the experimental study of Aranovich and Newton (1996), who found that while \( {a}_{{\mathrm{H}}_2\mathrm{O}} \) was nearly proportional to XH2O in NaCl–H2O solutions at 0.2 GPa, with increasing pressure, \( {a}_{{\mathrm{H}}_2\mathrm{O}} - {\mathrm{X}}_{{\mathrm{H}}_2\mathrm{O}} \) relations tended towards \( {a}_{{\mathrm{H}}_2\mathrm{O}}\propto {\left({\mathrm{X}}_{{\mathrm{H}}_2\mathrm{O}}\right)}^2 \). If this were the relationship for our experiments, then, although the relative differences between exponent terms for different defects would remain the same, the absolute values would differ, casting doubt on our identifications of their stoichiometries. However, we note that our experiments were conducted at a higher temperature and pressure than the range investigated by Aranovich and Newton (1996), which extends only to 900 °C and 1.5 GPa. It should also be noted that we have not allowed for other components, such as SiO2, dissolving in H2O–NaCl fluids, which would lower \( {a}_{{\mathrm{H}}_2\mathrm{O}} \) compared to the simple binary.
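The following numerical check (our illustration) shows how a small negative interaction parameter steepens the apparent exponent: with \( {W}_{{\mathrm{H}}_2\mathrm{O}-\mathrm{NaCl}} \) of −5 kJ/mol at 1323 K, a defect obeying \( {C}_{{\mathrm{H}}_2\mathrm{O}}\propto {a_{{\mathrm{H}}_2\mathrm{O}}}^2 \) gives a slope of roughly 2.4 in ln C–ln X space, broadly consistent with the 2.5 ± 0.2 observed for [Si].

```python
import numpy as np

# Regular-solution check: RT*ln(gamma_H2O) = W*(1 - X_H2O)^2 with W ~ -5 kJ/mol at 1050 °C.
R, T, W = 8.314, 1323.0, -5000.0        # J mol-1 K-1, K, J mol-1
X = np.array([1.0, 0.75, 0.5, 0.25])
gamma = np.exp(W * (1.0 - X) ** 2 / (R * T))
a = gamma * X

# For a defect with C proportional to a^2 (the [Si] mechanism), compare slopes in log space.
C = a ** 2
slope_vs_X = np.polyfit(np.log(X), np.log(C), 1)[0]
print(np.round(gamma, 3), round(slope_vs_X, 2))   # slope > 2 because gamma < 1 at low X
```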
Composition and water incorporation
The association between olivine composition and water incorporation remains a subject of debate in the literature, particularly the role of certain trace elements, such as Ti and trivalent species such as Fe3+, Al and Cr3+. Experimental studies have shown clear evidence for coupled substitution of hydrogen with these species in simple systems (Berry et al. 2005; Berry et al. 2007a; Faul et al. 2016; Férot and Bolfan-Casanova 2012; Grant et al. 2007a, b; Kovacs et al. 2010), supported by atomistic simulations (Balan et al. 2011; Blanchard et al. 2017; Walker et al. 2007). Furthermore, the FTIR spectra produced by these studies reproduce well the dominant absorbance bands of natural olivine crystals (Berry et al. 2005; Schmädicke et al. 2013; Tollan et al. 2015). However, there are few studies reporting good correlations between trace element composition and water content in natural olivine crystals, leading some to conclude that such incorporation mechanisms are of limited importance (Gaetani et al. 2014; Withers and Hirschmann 2008).
Our study confirms a clear relationship between water content and the presence of Ti and Sc. Of particular significance are the correlations between Ti concentration, water concentration and water activity in [Ti] experiments at both fixed water activity and fixed bulk Ti concentration (Figs. 2 and 8). This indicates that Ti and H are being incorporated together as part of the same point defect, and that the formation of this defect is the dominant method of incorporating both Ti and H in the system studied. Note that Ti concentrations overlap the range found in natural mantle olivines (De Hoog et al. 2010). Extrapolating the trend of H2O concentration vs. Ti concentration to anhydrous conditions gives a Ti concentration of ~60 ppm (Fig. 8), which is close to the maximum solubility of Ti in olivine under anhydrous conditions at the same temperature and pressure, as estimated from the experimental study of Hermann et al. (2005). This illustrates how Ti solubility in olivine is dramatically increased at high water activities. Converting H2O and Ti to molar concentrations allows a further opportunity to assess the relationship between these two species (Fig. 9). The stoichiometry of Ti-clinohumite, MgTi(OH)2O2, requires two moles of OH (or one mole of H2O) for every mole of Ti. Taking [Ti] experiments with a constant water activity of 1 but variable Ti concentration in olivine, we find that the trend of the data is very close to the 1:1 line (moles of Ti to moles of H2O), consistent with this stoichiometry. Furthermore, this indicates that at water-saturated conditions \( \left({a}_{{\mathrm{H}}_2\mathrm{O}}=1\right) \) at 3 GPa and 1050 °C, essentially all Ti substitutes by the Ti-clinohumite point-defect mechanism, as opposed to tetrahedral Ti as in the Mg2TiO4 substitution (Berry et al. 2007b). The latter substitution becomes more important at higher temperatures and lower pressures (Hermann et al. 2005) and with increasing Fe content (O'Neill 1998). Finally, the good agreement between molar concentrations of Ti and H2O is a strong indication that the absorption coefficient of Bell et al. (2003) is reliable for this particular defect. The Bell value was determined using olivines with identical spectra to our [Ti] experiments. Likewise, the identical value obtained from [Ti] experiments by Kovacs et al. (2010) is also validated.
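A worked example of the unit conversion behind this comparison (placeholder concentrations, our illustration): for the MgTi(OH)2O2 stoichiometry, the molar amounts of Ti and H2O in the crystal should match.

```python
# ppm-to-molar comparison behind Fig. 9; the concentrations below are placeholders.
M_Ti, M_H2O = 47.87, 18.015                 # g/mol

Ti_ppm, H2O_ppm = 330.0, 120.0              # concentrations by weight (placeholder)

# Moles per 100 g of forsterite (any consistent basis works for the ratio).
mol_Ti = Ti_ppm * 1e-6 * 100.0 / M_Ti
mol_H2O = H2O_ppm * 1e-6 * 100.0 / M_H2O
print(round(mol_H2O / mol_Ti, 2))           # ~1 indicates the 1:1 (Ti : H2O) stoichiometry
```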
Variation of H2O concentration \( \left({C}_{{\mathrm{H}}_2\mathrm{O}}\right) \) with Ti concentration for [Ti] experiments conducted at variable \( {a}_{{\mathrm{H}}_2\mathrm{O}} \). The dashed line is a best fit through the deconvoluted data
Variation of molar H2O concentration with molar Ti concentration for [Ti] experiments conducted both at constant \( {a}_{{\mathrm{H}}_2\mathrm{O}} \) but variable Ti concentration and at variable \( {a}_{{\mathrm{H}}_2\mathrm{O}} \) but constant bulk Ti concentration. The dashed line represents 1:1 molar concentrations
Similar trends can also be observed for the [triv] experiments, where water activity was held constant and bulk Sc concentration varied. In this case, there is a good positive correlation between Sc concentration and H2O concentration; however, when converted to molar concentrations, there is a deviation of approximately one third away from the expected Sc:H2O molar ratio, such that there is more water than expected, given the Sc concentration. This deviation is consistent with the results of the study by Berry et al. (2007a), implying that this is not restricted to Sc but is common to all R3+ cations (Fig. 10). We suggest that the simplest explanation for this is that the absorption coefficient used is not appropriate for this particular defect and should be increased by one third relative to the values reported by Bell et al. (2003) and Kovacs et al. (2010), to close to the value reported by Withers et al. (2012), in order to compensate for the excess water calculated here and by Berry et al. (2007a). This is also consistent with predictions from theoretical calculations for this type of defect (Blanchard et al. 2017), although the absolute value obtained in that study is somewhat unlikely. Since this defect is relatively common in natural olivines and thus a major contributor to mantle water storage (Tollan et al. 2015), an experimental study specifically designed to determine the absorption coefficient for this defect is required.
Variation of molar H2O concentration with molar Sc concentration for [triv] experiments conducted at constant water activity but variable Sc concentration, and at fixed bulk Sc concentration but variable water activity. Note that the latter experiments show no correlation between forsterite water concentration and Sc concentration, for reasons discussed in the text. For comparison, we have also plotted data from Berry et al. (2007a), which were similar experiments but at 1400 °C and 1.5 GPa and performed with a variety of trivalent cations in addition to Sc (Al, V, Ga, Y, In, Gd, Dy, Tm and Lu). Both sets of data confirm a consistent increase of \( {C}_{{\mathrm{H}}_2\mathrm{O}} \) with the availability of an R3+ cation at fixed water activity, but the deviation from the expected stoichiometry for the substitution may indicate that the absorption coefficient used in this study to calculate \( {C}_{{\mathrm{H}}_2\mathrm{O}} \) is in error; the value obtained by Withers et al. (2012) may be more appropriate (see text for further discussion)
A further result that contrasts with the [Ti] experiments is that there is no correlation between the very high concentrations of Sc and H2O concentration in the series at variable water activity, suggesting that another charge-balancing mechanism for Sc is operating in addition to the hydrous defect proposed here. Under anhydrous conditions, considerable Sc in olivine can be charge-balanced by an octahedral site vacancy (Spandler and O'Neill 2010) or, alternatively, Na+ could be incorporated from the salt-water solution and charge-balance Sc instead of H+. The latter would be increasingly important at lower water activities, where the NaCl/H2O ratio of the fluid increases substantially. But Grant and Wood (2010) found little evidence for coupled substitution of Na and Sc, so this reaction seems unlikely. It is possible that other charge-balancing species (such as Li) were present as contaminants in the salt; this would require further work to demonstrate.
Comparisons with other studies
Matveev et al. (2001) conducted experiments on natural olivines from Mount Porndon (initially Fo90–91) at a range of water activities, controlled by diluting H2O with CO2 at 1300 °C and 2 GPa. Although their experiments were designed to be M-type, the olivine partially recrystallised to some extent in all runs, and FTIR spectra were reported only for the recrystallised material, using the marked change in Fo content (to ~Fo96–97) as a discriminant. Their results are therefore for G-type experiments, which explain why they were able to obtain completely different FTIR spectra in orthopyroxene-buffered experiments from magnesiowüstite-buffered experiments, as in this study. They found a factor of 1.3 decrease in water concentration in olivine over the range in \( {X}_{{\mathrm{H}}_2\mathrm{O}} \) from 1.0 to 0.3, in four orthopyroxene-buffered experiments. Inspection of their spectra shows that the [triv] defect dominates the total absorbance, and we can therefore directly compare their results to ours for this defect, but with the caveat that the trivalent cations associated with the [triv] substitution in the natural olivines are likely Fe3+ or/and Cr3+ (cf. Tollan et al. 2015) rather than Sc. Considerable Fe3+ in their olivines is expected given the relatively high oxygen fugacities of their experiments, achieved using the Re–ReO2 buffer. The exponent for this defect derived from their data (following the same data-fitting procedure as for this study) is 0.58 ± 0.21, which is identical within uncertainty to the exponent calculated for the [triv] substitution in this study, supporting our hypothesis that the configurational entropy of the [triv] substitution is independent of the identity of the trivalent cation. The [triv] substitution was absent in the spectra from their magnesiowüstite-buffered experiments, although these were conducted at the same oxygen fugacity (Re–ReO2); this observation confirms the role of high \( {a}_{{\mathrm{SiO}}_2} \) in promoting this substitution, according to reaction (3). Their attribution of the [Si] substitution mechanism to the FTIR spectra of natural olivines published by Miller et al. (1987) was a mistake due to confusing the [Si] substitution in their experiments with the [Ti] substitution in the natural crystals. In the experiments, the Ti in the starting crystals of Mount Porndon olivine, which is low anyway (~20 ppm in an example analysed by Eggins et al. 1998), would have been lost during recrystallisation, while the infrared fingerprint of the [Ti] substitution was only recognised later (Berry et al. 2005).
Gaetani et al. (2014) performed what they assumed were M-type experiments on San Carlos olivine, but with re-equilibration of fO2-sensitive defects. Only the orthopyroxene-buffered condition was investigated. They used mixtures of H2O and CO2 to vary \( {a}_{{\mathrm{H}}_2\mathrm{O}} \) but explored only three conditions \( \left({X}_{{\mathrm{H}}_2\mathrm{O}}=1,0.855,0.185\right) \). There is a factor of ~2.5 difference between the lowest and highest water concentrations. Given that their published FTIR spectra show the [Ti], [triv] and [Mg] substitutions, this is in good agreement with our results. Gaetani et al. (2014) also demonstrate a decrease in intensity of the [triv] peaks between experiments at high and low fO2, which suggests that the [triv] substitution was mainly associated with Fe3+, which re-equilibrated during the hydroxylation. Interestingly, in M-type experiments, such reduction may be mediated by the fugacity of H2 if the reaction occurs without diffusion of other elements:
$$ {\mathrm{Fe}}^{3+}{\mathrm{H}\mathrm{SiO}}_4+0.5\ {\mathrm{H}}_2={\mathrm{Fe}}^{2+}{\mathrm{H}}_2{\mathrm{SiO}}_4 $$
Here, the component Fe2+H2SiO4 represents essentially the same substitution mechanism in ferromagnesian olivines that we refer to as [Mg] in forsterite (Eq. 2). This reaction suggests that reduction of the Fe3+ [triv] substitution would increase the [Mg] substitution in M-type experiments, but as the olivines used by Gaetani et al. (2014) differed in composition, and the partitioning of \( {C}_{{\mathrm{H}}_2\mathrm{O}}^{\mathrm{Total}} \) amongst the three different substitution mechanisms was not reported, this hypothesis cannot yet be tested.
Yang et al. (2014) also used different ratios of H2O and CO2, finding a difference of a factor of 2 between values of \( {C}_{{\mathrm{H}}_2\mathrm{O}} \) in olivines from experiments with pure H2O and those with H2O–CO2, but changing XCO2 from 0.22–0.50 apparently had no effect on the water solubility. This observation disagrees with those of Matveev et al. (2001) and Gaetani et al. (2014). Otsuka and Karato (2011) studied the effect of variable \( {a}_{{\mathrm{H}}_2\mathrm{O}} \) on water in San Carlos olivine in M-type experiments at 1000 °C and 5 GPa by using two three-phase assemblages involving a hydrous phase in the system MgO–SiO2–H2O, which produces known \( {a}_{{\mathrm{H}}_2\mathrm{O}} \) in the absence of a fluid phase, plus one experiment with H2O fluid \( \left({a}_{{\mathrm{H}}_2\mathrm{O}} \sim 1\right) \). Thus, three values of \( {a}_{{\mathrm{H}}_2\mathrm{O}} \) were investigated. They observed that \( {C}_{{\mathrm{H}}_2\mathrm{O}}^{\mathrm{Total}}\propto {a}_{{\mathrm{H}}_2\mathrm{O}} \) (i.e. the exponent was 1), but their published polarised infrared spectra show that the hydroxyl in their run products is held by a combination of both the [Si] and the [triv] mechanisms. Deconvolution of the spectra would be needed to see if \( {C}_{{\mathrm{H}}_2\mathrm{O}}^{\left[\mathrm{Si}\right]}\propto {a_{{\mathrm{H}}_2\mathrm{O}}}^2 \) and \( {C}_{{\mathrm{H}}_2\mathrm{O}}^{\left[\mathrm{triv}\right]}\propto {a_{{\mathrm{H}}_2\mathrm{O}}}^{0.5} \). The proportion of the [Si] substitution did increase with \( {a}_{{\mathrm{H}}_2\mathrm{O}} \), in agreement at least qualitatively with the results of this study. This interpretation is also supported by the reported changes in the anisotropy of the absorption with \( {a}_{{\mathrm{H}}_2\mathrm{O}} \).
Assignment of infrared absorption bands to substitution mechanisms
Our data also provides new evidence on the interpretation of infrared bands at wavenumbers greater than 3450 cm−1, which typically dominate the spectra of natural olivines. There has been debate in the literature concerning whether silicon vacancies or magnesium vacancies are the main host of water in olivine. Zhao et al. (2004), Mosenfelder et al. (2006), Kohlstedt (2006), Bali et al. (2008) and Otsuka and Karato (2011) suggest that water associated with Mg vacancies is the most important incorporation mechanism and is responsible for the generation of high wavenumber bands, based on relationships between water solubility and fH2O in experimental studies conducted over a broad range of conditions. A similar assignment has also been made by Smyth et al. (2006) through X-ray site occupancy refinement measurements on very water-rich samples equilibrated at high pressure and high \( {a}_{{\mathrm{H}}_2\mathrm{O}} \).
On the other hand, Matveev et al. (2001), Lemaire et al. (2004), Berry et al. (2005, 2007a), Walker et al. (2007), Kovacs et al. (2010), Balan et al. (2011), Umemoto et al. (2011) and Ingrin et al. (2013) concluded that high wavenumber bands are instead due to hydrated Si vacancies, based on experimental studies at a range of silica activities, atomistic simulations of different hydroxyl configurations and the response of absorption bands to heating from liquid nitrogen temperatures. Furthermore, Blanchard et al. (2009), Umemoto et al. (2011) and Ingrin et al. (2013) pointed out that the use of the empirical relationship between OH frequencies and O–O bond distances by Smyth et al. (2006) is not valid for predicting site occupancy of hydroxyl groups in nominally anhydrous phases, since such a correlation was established for hydrous phases. Most recently, Xue et al. (2017) used 1H NMR to determine the H positions in forsterite with 0.5 wt% H2O, synthesised at 1200 °C and 12 GPa. They concluded that "this study has provided unambiguous evidence supporting that hydrogen is incorporated in forsterite at relatively high pressure dominantly as (4H)Si defects, with (2H)M1 defects playing only a very minor role".
By showing again that the appropriate compositional variations yield forsterite (or olivine) with different O–H infrared spectra, the results presented here confirm these latter interpretations of the identities of the substitution mechanisms of H2O in olivine. The fact that the concentrations of hydroxyl, \( {C}_{{\mathrm{H}}_2\mathrm{O}} \), in these different substitutions depend on \( {a}_{{\mathrm{H}}_2\mathrm{O}} \) in different ways that accord with the expected stoichiometry of the mechanisms (Eqs. 5, 6, 7 and 8) introduces a new line of supporting evidence for the identification of the substitution mechanisms, as does the variation of \( {C}_{{\mathrm{H}}_2\mathrm{O}}^{\left[\mathrm{Ti}\right]} \) with the concentration of Ti, and \( {C}_{{\mathrm{H}}_2\mathrm{O}}^{\left[\mathrm{triv}\right]} \) with the concentration of Sc (Fig. 10). The substitution mechanism associated with Mg site vacancies, [Mg], is quantitatively minor even at high \( {a}_{{\mathrm{SiO}}_2} \) (buffered by enstatite) and unobservable at low \( {a}_{{\mathrm{SiO}}_2} \) (buffered by magnesiowüstite), thereby refuting assertions that this mechanism is the main one by which H2O substitutes into olivine. Nevertheless, the results on the [Mg] substitution are of considerable theoretical interest, because they show that the concentration of H2O associated with this mechanism is the same at a given \( {a}_{{\mathrm{H}}_2\mathrm{O}} \) in the Sc- and Ti-doped forsterites as in the undoped forsterite, despite the fact that the total concentrations of H2O \( \left({C}_{{\mathrm{H}}_2\mathrm{O}}^{\mathrm{Total}}\right) \) in the latter experiments are much higher (Fig. 2d). This confirms that the concentrations of H2O by the different mechanisms are additive, although from thermodynamic principles it is hard to conceive how this could be otherwise. Therefore, the total concentration of water in forsterite or olivine must be described by an equation of the type:
$$ {C}_{{\mathrm{H}}_2\mathrm{O}}^{\mathrm{Total}}={C}_{{\mathrm{H}}_2\mathrm{O}}^{\left[\mathrm{Si}\right]}+{C}_{{\mathrm{H}}_2\mathrm{O}}^{\left[\mathrm{Mg}\right]}+{C}_{{\mathrm{H}}_2\mathrm{O}}^{\left[\mathrm{Ti}\right]}+{\displaystyle \sum_{{\mathrm{R}}^{3+}}}{C}_{{\mathrm{H}}_2\mathrm{O}}^{\left[\mathrm{triv}\right]} $$
where the R3+ cations of importance in natural olivines are likely Fe3+ and Cr3+ (Tollan et al. 2015) and perhaps Al (Grant et al. 2007a).
For each substitution mechanism X, there is a separate equilibrium constant K[X] that relates \( {C}_{{\mathrm{H}}_2\mathrm{O}}^{\left[\mathrm{X}\right]} \) to f(H2O) and \( {a}_{{\mathrm{SiO}}_2} \) in different ways (Eqs. 5, 6, 7 and 8), and each K[X] is a function of temperature and pressure. In addition, the [Ti] and [triv] substitutions depend on the concentrations of Ti and relevant R3+ cations, hence the activities of TiO2 and the R3+O1.5 components in a system at global equilibrium. The R3+O1.5 components important in natural olivines are likely mainly Fe3+O1.5 and Cr3+O1.5 (Tollan et al. 2015; Blanchard et al. 2017), which are both redox-sensitive and depend on fO2. It might be tempting to develop a single constitutive equation for the solubility of water in olivine as a function of these variables, but the convenience of this should be balanced against the guarantee that it would produce the wrong answers (Ingrin et al. 2013). A sufficient equation necessitates the parameterisation of each term on the right-hand side of Eq. 11 separately as a function of their relevant variables.
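To make the additivity in Eq. 11 and the mechanism-specific dependence on water activity concrete, the short Python sketch below evaluates a total water content as a sum of power-law terms in \( {a}_{{\mathrm{H}}_2\mathrm{O}} \). It is purely illustrative: the prefactors are invented placeholders rather than fitted equilibrium constants, only the exponents of 2 for [Si] and ~0.5 for [triv] come from the text, and exponents of 1 are assumed here for [Mg] and [Ti].

```python
# Illustrative sketch of Eq. 11: total water as a sum of mechanism-specific terms.
# The prefactors k_si, k_mg, k_ti, k_triv are hypothetical placeholders, NOT fitted
# constants from this study; exponents 2 ([Si]) and 0.5 ([triv]) follow the text,
# while 1 is assumed for [Mg] and [Ti].

def c_h2o_total(a_h2o, k_si=1.0, k_mg=0.2, k_ti=0.5, k_triv=0.8):
    c_si = k_si * a_h2o ** 2        # hydrated Si vacancies
    c_mg = k_mg * a_h2o             # hydrated Mg vacancies
    c_ti = k_ti * a_h2o             # Ti-associated defect
    c_triv = k_triv * a_h2o ** 0.5  # trivalent-cation-associated defect
    return c_si + c_mg + c_ti + c_triv

for a in (1.0, 0.5, 0.1):
    print(f"a_H2O = {a:0.2f}  ->  C_H2O (arbitrary units) = {c_h2o_total(a):0.3f}")
```

Note how the [Si] term collapses fastest as water activity decreases, which is the qualitative behaviour emphasised in the text.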
Implications for the assessment of water incorporation in mantle olivine
Not addressed in this study are the effects of temperature and pressure on the individual substitution mechanisms, but it would be extraordinary if the solubility of water in the four different mechanisms did not respond very differently to these variables. From the thermodynamic perspective, the effect of pressure acts in three ways at a given \( {a}_{{\mathrm{H}}_2\mathrm{O}} \) and \( {a}_{{\mathrm{SiO}}_2} \): (1) the increase of f(H2O)° with pressure; (2) the effect that f(H2O)° has through the stoichiometries of the reactions (reactions 1 to 4 and Eqs. 5, 6, 7 and 8); and (3) the effect on the equilibrium constants (Eqs. 5, 6, 7 and 8). As regards this third factor, the effect of pressure at constant f(H2O) (NB not constant f(H2O)°) is:
$$ \varDelta \ln \mathrm{K}={\displaystyle \underset{{\mathrm{P}}_1}{\overset{{\mathrm{P}}_2}{\int }}}\varDelta {\overline{\mathrm{V}}}_{\mathrm{solids}}\left(\mathrm{P},\mathrm{T}\right)\ \mathrm{d}\mathrm{P} $$
where \( \varDelta {\overline{\mathrm{V}}}_{\mathrm{solids}} \) is the change in partial molar volume of the solid components. For the example of the [Si] mechanism (reaction 1), \( \varDelta {\overline{\mathrm{V}}}_{\mathrm{solids}} \) is given by:
$$ \varDelta {\overline{\mathrm{V}}}_{\mathrm{solids}}^{\left[\mathrm{Si}\right]}={\overline{\mathrm{V}}}_{{\mathrm{Mg}}_2{\mathrm{H}}_4{\mathrm{O}}_4}^{\mathrm{olivine}}-{\mathrm{V}}_{{\mathrm{Mg}}_2{\mathrm{SiO}}_4}^{\mathrm{olivine}}+{\mathrm{V}}_{{\mathrm{SiO}}_2} $$
Here, \( {\overline{\mathrm{V}}}_{\mathrm{M}{\mathrm{g}}_2{\mathrm{H}}_4{\mathrm{O}}_4}^{\mathrm{olivine}} \) is the partial molar volume of the Mg2H4O4 component in the olivine, and \( {V}_{{\mathrm{SiO}}_2} \) is the effective molar volume of the SiO2 component of the system, as determined by the coexisting phases. The standard states are the conventional ones of the pure components at the P, T of interest for the solid phases, and pure H2O as an ideal gas at T and 1 bar. The value of \( {\overline{\mathrm{V}}}_{\mathrm{M}{\mathrm{g}}_2{\mathrm{H}}_4{\mathrm{O}}_4}^{\mathrm{olivine}} \) (298,1) may be calculated from the data in Smyth et al. (2006) as 45.2 cm3/mol, by recalculating the H2O contents of hydrous forsterites given in Table 2 to mole fractions of the Mg2H4O4 component, and extrapolating linearly to this end-member. Hence, with \( {V}_{{\mathrm{SiO}}_2} \) (298,1) = 19.0 cm3/mol (from Mg2SiO4 + Mg2Si2O6) and \( {\mathrm{V}}_{\mathrm{M}{\mathrm{g}}_2\mathrm{Si}{\mathrm{O}}_4}^{\mathrm{olivine}} \) (298,1) = 43.7 cm3/mol, all from Holland and Powell (2011), we calculate \( \varDelta {\overline{V}}_{\mathrm{solids}}^{\left[\mathrm{Si}\right]}\left(298,1\right)=20.5\ {\mathrm{cm}}^3/\mathrm{mol} \). However, what is really needed is to compare the response of the [Si] substitution to pressure with those of [Mg], [Ti] and [triv], which requires the partial molar volumes of the components MgH2SiO4 and MgTiH2O4, and the relevant R 3+HSiO4 components, which are at present unknown.
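As a quick numerical check of the volume bookkeeping above, and to indicate the order of magnitude of the pressure term, the following sketch reproduces the arithmetic; the Δln K estimate uses the standard relation \( \partial \ln K/\partial P = -\varDelta \overline{V}_{\mathrm{solids}}/RT \) under the purely illustrative assumption that \( \varDelta \overline{V}_{\mathrm{solids}} \) is independent of pressure and temperature, which it is not.

```python
# Check of the Delta V_solids^[Si](298,1) arithmetic; volumes in cm^3/mol.
v_mg2h4o4 = 45.2    # partial molar volume of Mg2H4O4 (from Smyth et al. 2006 data)
v_sio2_eff = 19.0   # effective molar volume of the SiO2 component (Holland and Powell 2011)
v_mg2sio4 = 43.7    # molar volume of forsterite Mg2SiO4 (Holland and Powell 2011)
dV = v_mg2h4o4 + v_sio2_eff - v_mg2sio4
print(f"Delta V_solids^[Si](298,1) = {dV:.1f} cm^3/mol")   # 20.5

# Order-of-magnitude estimate only: assumes Delta V constant over 0-3 GPa and uses the
# textbook relation d(ln K)/dP = -Delta V / (R T); this is not a result of this study.
R, T, dP = 8.314, 1323.0, 3.0e9     # J/(mol K), K (1050 degrees C), Pa
print(f"Delta ln K over 3 GPa at 1050 C ~ {-dV * 1e-6 * dP / (R * T):.1f}")   # about -5.6
```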
In contrast, the differential effect of the change in f(H2O)° with pressure comes simply from the stoichiometry of the reaction: increasing pressure will favour the [Si] substitution, because \( {C}_{{\mathrm{H}}_2\mathrm{O}}\propto \mathrm{f}{\left({\mathrm{H}}_2\mathrm{O}\right)}^2 \). That the [Si] mechanism is indeed the dominant mechanism in experiments with \( {a}_{{\mathrm{H}}_2\mathrm{O}} \) ~1 at P >> 3 GPa, even at high \( {{\mathrm{a}}_{\mathrm{SiO}}}_{{}_2} \) (buffered by orthopyroxene) is evident from several experimental studies (Kohlstedt et al. 1996; Mosenfelder et al. 2006; Withers and Hirschmann 2008). However, decreasing \( {a}_{{\mathrm{H}}_2\mathrm{O}} \) at any pressure and temperature will decrease the proportion of [Si] relative to [Mg], [Ti] and especially [triv]. Higher temperatures should favour the mechanisms with higher configurational entropy, if differences in vibrational entropies are of subsidiary importance (Walker et al. 2007). With this assumption, the relative amounts of water associated with the different defects should increase with increasing temperature in the reverse order from the effect of pressure at \( {a}_{{\mathrm{H}}_2\mathrm{O}} \) ~1: [triv] > [Mg] ~ [Ti] > [Si] (Eqs. 1, 2, 3 and 4), other variables remaining constant.
We reiterate the point made by Ingrin et al. (2013), who emphasised that water incorporation into olivine cannot be described by a single solubility law. The existence of the four main solubility mechanisms with their different controlling stoichiometries mandates that the relationship between \( {C}_{{\mathrm{H}}_2\mathrm{O}} \) and the intensive thermodynamic variables of temperature, pressure, fH2O and compositional variables requires four separate terms, one for each mechanism. Quantifying water substitution in olivine must also allow for the availability of Ti and the trivalent cations, particularly Fe3+, whose concentration is sensitive to fO2. Varying \( {a_{\mathrm{SiO}}}_{{}_2} \) also affects substitution mechanisms in different ways. Obviously, increasing \( {a_{\mathrm{SiO}}}_{{}_2} \) increases Mg vacancies but decreases Si vacancies and vice versa, but increasing \( {a}_{{\mathrm{SiO}}_2} \) should also promote the [triv] substitution but decrease [Ti] (Eqs. 3 and 4). The values of \( {a}_{{\mathrm{SiO}}_2} \) as buffered at low or high values by magnesiowüstite or pyroxene, respectively, also both change with P and T. The effect of the major element compositional variable Mg/Fe2+ on each mechanism also requires systematic study.
The fact that there are three ways by which increasing pressure at constant given \( {a}_{{\mathrm{H}}_2\mathrm{O}} \) and relative \( {a}_{{\mathrm{SiO}}_2} \) changes \( {C}_{{\mathrm{H}}_2\mathrm{O}}^{\left[\mathrm{X}\right]} \) for each of the four substitution mechanisms means that it is difficult to deduce the nature of the substitution mechanism from the change of \( {C}_{{\mathrm{H}}_2\mathrm{O}}^{\left[\mathrm{X}\right]} \) with pressure; it would be even more difficult to deduce how water dissolves in olivine with the change of \( {C}_{{\mathrm{H}}_2\mathrm{O}}^{\mathrm{Total}} \) with pressure.
Interactions between the four defect mechanisms should also be taken into account in the interpretation of the water contents of natural olivines and their FTIR spectra. The speciation of water in olivine observed in simple-system experiments may not be preserved in natural olivines because of rearrangements on cooling and/or decompression. Where this occurs without mass transfer of components into or out of the olivine by diffusion, such rearrangements may be rapid. To give one example, it is possible that water may transfer from the [triv] mechanism to the [Ti] mechanism according to the reaction:
$$ 4/3{R}^{3+}{\mathrm{HSiO}}_4 + 2/3\ {\mathrm{Mg}}_2{\mathrm{TiO}}_4 = {R^{3+}}_{4/3}{\mathrm{SiO}}_4 + 2/3\ {\mathrm{Mg}\mathrm{TiH}}_2{\mathrm{O}}_4 + 1/3\ {\mathrm{Mg}}_2{\mathrm{SiO}}_4 $$
where all components are in olivine. The R 3+ 4/3SiO4 component is the anhydrous substitution of R3+ into olivine, charge-balanced by octahedral site vacancies (Evans et al. 2008), while the Mg2TiO4 component is the anhydrous substitution of Ti for Si in olivine studied by Hermann et al. (2005). Therefore, the observation that [Ti] is the dominant mechanism for water incorporation in olivine from San Carlos xenoliths (Berry et al. 2005) may reflect internal re-equilibration during cooling to some extent as yet unknown. Loss of water from olivine, which has often been inferred from both phenocryst olivines in magmas during eruption, or from olivines in xenoliths during their exhumation (e.g. Tollan et al. 2015 and references therein), may also affect speciation. Water loss changes \( {a}_{{\mathrm{SiO}}_2} \) (inter alia), hence the point-defect structure of the olivine, in different ways according to the different substitution mechanisms (Eqs. 1, 2, 3 and 4). The effects of this are further complicated by the different rates of diffusion of H associated with the different substitution mechanisms (Padrón-Navarta et al. 2014).
This study presents the results of G-type experiments at one condition of temperature and pressure, namely 1050 °C and 3 GPa, which confirm the identities of the four main substitution mechanisms by which water is likely to be incorporated into mainstream mantle olivines. The results demonstrate the role that \( {a}_{{\mathrm{H}}_2\mathrm{O}} \) plays in determining which mechanisms are significant at the experimental condition, but from the stoichiometries of the substitution mechanisms, the same principle should apply at other pressures and temperatures. Evaluating water solubilities is not straightforward because it depends on the mechanisms by which water is incorporated (the type of hydrous defect), which in turn is dictated by other factors, most notably composition and silica activity. The consequence of this is that under otherwise identical conditions, higher water activities will favour hydrated silicon vacancies ([Si]) over other defects. Conversely, at low water activities, the ratio of hydrated silicon vacancies to other hydrous defects will be much lower. This has profound impacts for our understanding of water storage in the mantle and interpretation of measurements of natural olivine from different mantle domains. For example, the absence of the infrared absorption bands associated with the [Si] mechanism does not necessarily indicate high silica activity but could be due to low water activities, as expected in most mantle environments (Bali et al. 2008; Lamb and Popp 2009). This is highlighted by our observation that the [Si] substitution features prominently in enstatite-buffered experiments at high pressures and high a H2O, even dominating total absorbance (Smyth et al. 2006). In order to understand the role of water in the mantle, careful consideration needs to be given to several factors that have hitherto been underappreciated, including the limitations on \( {a}_{{\mathrm{H}}_2\mathrm{O}} \) and the role of minor components, especially Ti and the more abundant R3+ cations, Fe3+ and Cr3+.
Aranovich LY, Newton RC (1996) H2O activity in concentrated NaCl solutions at high pressures and temperatures measured by the brucite-periclase equilibrium. Contrib Mineral Petrol 125:200–212
Aubaud C et al (2007) Intercalibration of FTIR and SIMS for hydrogen measurements in glasses and nominally anhydrous minerals. Am Mineral 92(5–6):811–828
Bai Q, Kohlstedt DL (1993) Effects of chemical environment on the solubility and incorporation mechanism for hydrogen in olivine. Phys Chem Miner 19:460–471
Balan E, Ingrin J, Delattre S, Kovács I, Blanchard M (2011) Theoretical infrared spectrum of OH-defects in forsterite. Eur J Mineral 23(3):285–292
Bali E, Bolfan-Casanova N, Koga KT (2008) Pressure and temperature dependence of H solubility in forsterite: an implication to water activity in the Earth interior. Earth Planet Sci Lett 268(3–4):354–363
Bell DR, Rossman GR, Maldener J, Endisch D, Rauch F (2003) Hydroxide in olivine: a quantitative determination of the absolute amount and calibration of the IR spectrum. J Geophys Res 108
Berry AJ, Hermann J, O'Neill HSC, Foran GJ (2005) Fingerprinting the water site in mantle olivine. Geology 33(11):869
Berry AJ, O'Neill HSC, Hermann J, Scott DR (2007a) The infrared signature of water associated with trivalent cations in olivine. Earth Planet Sci Lett 261(1–2):134–142
Berry AJ, Walker AM, Hermann J, O'Neill HSC, Foran GJ, Gale JD (2007b) Titanium substitution mechanisms in forsterite. Chem Geol 242(1–2):176–186
Blanchard M, Balan E, Wright K (2009) Incorporation of water in iron-free ringwoodite: a first-principles study. Am Mineral 94(1):83–89
Blanchard M, Ingrin J, Balan E, Kovács I, Withers AC (2017) Effect of iron and trivalent cations on OH defects in olivine. Am Mineral 102(2):302–311
De Hoog JCM, Gall L, Cornell DH (2010) Trace-element geochemistry of mantle olivine and application to mantle petrogenesis and geothermobarometry. Chem Geol 270(1–4):196–215
Demouchy S, Mackwell S (2006) Mechanisms of hydrogen incorporation and diffusion in iron-bearing olivine. Phys Chem Miner 33:347–355
Demouchy S, Mackwell SJ, Kohlstedt DL (2007) Influence of hydrogen on Fe–Mg interdiffusion in (Mg, Fe)O and implications for Earth's lower mantle. Contrib Mineral Petrol 154(3):279–289
Denis CMM, Demouchy S, Shaw CSJ (2013) Evidence of dehydration in peridotites from Eifel Volcanic Field and estimates of the rate of magma ascent. J Volcanol Geotherm Res 258:85–99
Eggins SM, Rudnick RL, McDonough WF (1998) The composition of peridotites and their minerals: a laser-ablation ICP-MS study. Earth Planet Sci Lett 154:53–71
Evans TM, O'Neill HSC, Tuff J (2008) The influence of melt composition on the partitioning of REEs, Y, Sc, Zr and Al between forsterite and melt in the system CMAS. Geochim Cosmochim Acta 72:5708–5721
Faul UH, Cline CJ, David EC, Berry AJ, Jackson I (2016) Titanium-hydroxyl defect-controlled rheology of the Earth's upper mantle. Earth Planet Sci Lett 452:227–237
Férot A, Bolfan-Casanova N (2012) Water storage capacity in olivine and pyroxene to 14 GPa: implications for the water content of the Earth's upper mantle and nature of seismic discontinuities. Earth Planet Sci Lett 349–350:218–230
Gaetani GA et al (2014) Hydration of mantle olivine under variable water and oxygen fugacity conditions. Contrib Mineral Petrol 167(2):965
Grant KJ, Wood BJ (2010) Experimental study of the incorporation of Li, Sc, Al and other trace elements into olivine. Geochim Cosmochim Acta 74(8):2412–2428
Grant KJ, Kohn SC, Brooker RA (2007a) The partitioning of water between olivine, orthopyroxene and melt synthesised in the system albite-forsterite-H2O. Earth Planet Sci Lett 260:227–241
Grant KJ, Brooker RA, Kohn SC, Wood BJ (2007b) The effect of oxygen fugacity on hydroxyl concentrations and speciation in olivine: implications for water solubility in the upper mantle. Earth Planet Sci Lett 261(1–2):217–229
Green DH, Hibberson WO, Kovacs I, Rosenthal A (2010) Water and its influence on the lithosphere-asthenosphere boundary. Nature 467(7314):448–451
Hermann J, O'Neill HSC, Berry AJ (2005) Titanium solubility in olivine in the system TiO2-MgO-SiO2: no evidence for an ultra-deep origin of Ti-bearing olivine. Contrib Mineral Petrol 148:746–760
Holland TJB, Powell R (2011) An improved and extended internally consistent thermodynamic dataset for phases of petrological interest, involving a new equation of state for solids. J Metamorph Geol 29:333–383
Ingrin J et al (2013) Low-temperature evolution of OH bands in synthetic forsterite, implication for the nature of H defects at high pressure. Phys Chem Miner 40(6):499–510
Ingrin J, Kovács I, Deloule E, Balan E, Blanchard M, Kohn SC, Hermann J (2014) Identification of hydrogen defects linked to boron substitution in synthetic forsterite and natural olivine. Am Mineral 99:2138–2141
Jollands MC, Padrón-Navarta JA, Hermann J, O'Neill HSC (2016) Hydrogen diffusion in Ti-doped forsterite and the preservation of metastable point defects. Am Mineral 101(7–8):1571–1583
Karato S-I, Wang D (2013) Electrical conductivity of minerals and rocks. In: Karato S-I (ed) Physics and Chemistry of the Deep Earth. Wiley-Blackwell, New York, pp 145–182
Karato S-I, Paterson MS, FitzGerald JD (1986) Rheology of synthetic olivine aggregates: Influence of grain size and water. J Geophys Res 91(B8):8151
Kohlstedt DL (2006) The role of water in high-temperature rock deformation. Rev Mineral Geochem 62:377–396
Kohlstedt DL, Keppler H, Rubie DC (1996) Solubility of water in the α, β and γ phases of (Mg, Fe)2SiO4. Contrib Mineral Petrol 123:345–357
Kovacs I et al (2008) Quantitative absorbance spectroscopy with unpolarized light: Part II. Experimental evaluation and development of a protocol for quantitative analysis of mineral IR spectra. Am Mineral 93(5–6):765–778
Kovacs I, O'Neill HSC, Hermann J, Hauri EH (2010) Site-specific infrared O-H absorption coefficients for water substitution into olivine. Am Mineral 95(2–3):292–299
Lamb WM, Popp RK (2009) Amphibole equilibria in mantle rocks: determining values of mantle aH2O and implications for mantle H2O contents. Am Mineral 94(1):41–52
Lemaire C, Kohn SC, Brooker RA (2004) The effect of silica activity on the incorporation mechanisms of water in synthetic forsterite: a polarised infrared spectroscopic study. Contrib Mineral Petrol 147(1):48–57
Mackwell SJ, Kohlstedt DL (1990) Diffusion of hydrogen in olivine: implications for water in the mantle. J Geophys Res 95:5079–5088
Mallmann G, O'Neill HSC (2009) The crystal/melt partitioning of V during mantle melting as a function of oxygen fugacity compared with some other elements (Al, P, Ca, Sc, Ti, Cr, Fe, Ga, Y, Zr and Nb). J Petrol 50(9):1765–1794
Matveev S, O'Neill HSC, Ballhaus C, Taylor WR, Green DH (2001) Effect of silica activity on OH− IR spectra of olivine: implications for low-aSiO2 mantle metasomatism. J Petrol 42(4):721–729
Mei S, Kohlstedt DL (2000) Influence of water on plastic deformation of olivine aggregates: 1. Diffusion creep regime. J Geophys Res Solid Earth 105(B9):21457–21469
Miller GH, Rossman GR, Harlow GE (1987) The natural occurrence of hydroxide in olivine. Phys Chem Miner 14:461–472
Mosenfelder JL, Deligne NI, Asimow PD, Rossman GR (2006) Hydrogen incorporation in olivine from 2-12 GPa. Am Mineral 91:285–294
Mosenfelder JL et al (2011) Analysis of hydrogen in olivine by SIMS: evaluation of standards and protocol. Am Mineral 96(11–12):1725–1741
O'Neill HSC (1998) Partitioning of Fe and Mn between ilmenite and olivine at 1100 °C: constraints on the thermodynamic mixing properties of (Fe, Mn)TiO3 ilmenite solid solutions. Contrib Mineral Petrol 133:284–296
Otsuka K, Karato S-I (2011) Control of the water fugacity at high pressures and temperatures: applications to the incorporation mechanisms of water in olivine. Phys Earth Planet Inter 189(1–2):27–33
Padrón-Navarta JA, Hermann J, O'Neill HSC (2014) Site-specific hydrogen diffusion rates in forsterite. Earth Planet Sci Lett 392:100–112
Pownceby MI, O'Neill HSC (2000) Thermodynamic data from redox reactions at high temperatures. VI. Thermodynamic properties of CoO–MnO solid solutions from emf measurements. Contrib Mineral Petrol 140(1):28–39
Sambridge M, Gerald JF, Kovacs I, O'Neill HSC, Hermann J (2008) Quantitative absorbance spectroscopy with unpolarized light: Part I. Physical and mathematical development. Am Mineral 93(5–6):751–764
Schmädicke E, Gose J, Witt-Eickschen G, Bratz H (2013) Olivine from spinel peridotite xenoliths: hydroxyl incorporation and mineral composition. Am Mineral 98(10):1870–1880
Shen T, Hermann J, Zhang L, Padrón-Navarta JA, Chen J (2014) FTIR spectroscopy of Ti-chondrodite, Ti-clinohumite, and olivine in deeply subducted serpentinites and implications for the deep water cycle. Contrib Mineral Petrol 167:992–1009
Smyth JR, Frost DJ, Nestola F, Holl CM, Bromiley G (2006) Olivine hydration in the deep upper mantle: effects of temperature and silica activity. Geophys Res Lett 33(15)
Soustelle V, Tommasi A, Demouchy S, Ionov DA (2010) Deformation and fluid-rock interaction in the supra-subduction mantle: microstructures and water contents in peridotite xenoliths from the Avacha Volcano, Kamchatka. J Petrol 51(1–2):363–394
Soustelle V, Tommasi A, Demouchy S, Franz L (2013) Melt-rock interactions, deformation, hydration and seismic properties in the sub-arc lithospheric mantle inferred from xenoliths from seamounts near Lihir, Papua New Guinea. Tectonophysics 608:330–345
Spandler C, O'Neill HSC (2010) Diffusion and partition coefficients of minor and trace elements in San Carlos olivine at 1,300 °C with some geochemical implications. Contrib Mineral Petrol 159(6):791–818
Tollan PME, O'Neill HSC, Hermann J, Benedictus A, Arculus RJ (2015) Frozen melt–rock reaction in a peridotite xenolith from sub-arc mantle recorded by diffusion of trace elements and water in olivine. Earth Planet Sci Lett 422:169–181
Umemoto K, Wentzcovitch RM, Hirschmann MM, Kohlstedt DL, Withers AC (2011) A first-principles investigation of hydrous defects and IR frequencies in forsterite: The case for Si vacancies. Am Mineral 96(10):1475–1479
Walker AM, Hermann J, Berry AJ, O'Neill HSC (2007) Three water sites in upper mantle olivine and the role of titanium in the water weakening mechanism. J Geophys Res 112(B5)
Withers AC, Hirschmann MM (2008) Influence of temperature, composition, silica activity and oxygen fugacity on the H2O storage capacity of olivine at 8 GPa. Contrib Mineral Petrol 156(5):595–605
Withers AC, Bureau H, Raepsaet C, Hirschmann MM (2012) Calibration of infrared spectroscopy by elastic recoil detection analysis of H in synthetic olivine. Chem Geol 334:92–98
Xue X, Kanzaki M, Turner D, Loroch D (2017) Hydrogen incorporation mechanisms in forsterite: New insights from 1H and 29Si NMR spectroscopy and first-principles calculation. Am Mineral 102:519–536
Yang X, Liu D, Xia Q (2014) CO2-induced small water solubility in olivine and implications for properties of the shallow mantle. Earth Planet Sci Lett 403:37–47
Zhao Y-H, Ginsberg SB, Kohlstedt DL (2004) Solubility of hydrogen in olivine: dependence on temperature and iron content. Contrib Mineral Petrol 147:155–161
We thank Dean Scott and Dave Clark for their tireless efforts maintaining the running of the experimental laboratories, Jung Park for his assistance with the LA-ICP-MS measurements and both Mike Jollands and Alberto Padrón-Navarta for their assistance with the FTIR and many valuable and fruitful discussions. We thank the two anonymous reviewers and Shun-ichiro Karato for his review and editorial handling.
We gratefully acknowledge the Australian Research Council (ARC) support through DP110103134 to JH and HON, and FL130100066 to HON, which partly supported PT during the final stages of this project. RS acknowledges an Australian Postgraduate Award.
The project was conceived by HON and JH, with the experiments and analyses conducted by RS. PT processed and modelled the data. PT and HON fit the data to the thermodynamic model. PT and HON wrote the manuscript with help from JH. All authors read and approved the final manuscript.
Research School of Earth Sciences, The Australian National University, Building 142, Mills Road, Canberra, ACT, 2601, Australia
Peter M. E. Tollan
, Rachel Smith
, Hugh St.C. O'Neill
& Jörg Hermann
Institute of Geological Sciences, Universität Bern, Bern, 3012, Switzerland
Correspondence to Hugh St.C. O'Neill.
Tollan, P.M.E., Smith, R., O'Neill, H.S. et al. The responses of the four main substitution mechanisms of H in olivine to H2O activity at 1050 °C and 3 GPa. Prog. in Earth and Planet. Sci. 4, 14 (2017) doi:10.1186/s40645-017-0128-7
Nominally anhydrous minerals
Substitution mechanism
Point defect
Water in the mantle
4. Solid earth sciences | CommonCrawl |
3D deformation measurement in digital holographic interferometry using a multitask deep learning architecture
Krishna Sumanth Vengala, Naveen Paluru, and Rama Krishna Sai Subrahmanyam Gorthi
Krishna Sumanth Vengala,1 Naveen Paluru,2 and Rama Krishna Sai Subrahmanyam Gorthi1,*
1Department of Electrical Engineering, Indian Institute of Technology, Tirupati 517506, India
2Department of Computational and Data Sciences, Indian Institute of Science, Bangalore 560012, India
*Corresponding author: [email protected]
Rama Krishna Sai Subrahmanyam Gorthi https://orcid.org/0000-0001-5021-0071
Krishna Sumanth Vengala, Naveen Paluru, and Rama Krishna Sai Subrahmanyam Gorthi, "3D deformation measurement in digital holographic interferometry using a multitask deep learning architecture," J. Opt. Soc. Am. A 39, 167-176 (2022)
Image Processing and Image Analysis
Holographic interferometry
Medical image processing
Phase estimation
Phase unwrapping
Three dimensional measurement
Original Manuscript: October 7, 2021
Revised Manuscript: November 28, 2021
The extraction of absolute phase from an interference pattern is a key step for 3D deformation measurement in digital holographic interferometry (DHI) and is an ill-posed problem. Estimating the absolute unwrapped phase becomes even more challenging when the obtained wrapped phase from the interference pattern is noisy. In this paper, we propose a novel multitask deep learning approach for phase reconstruction and 3D deformation measurement in DHI, referred to as TriNet, that has the capability to learn and perform two parallel tasks from the input image. The proposed TriNet has a pyramidal encoder–two-decoder framework for multi-scale information fusion. To our knowledge, TriNet is the first multitask approach to accomplish simultaneous denoising and phase unwrapping of the wrapped phase from the interference fringes in a single step for absolute phase reconstruction. The proposed architecture is more elegant than recent multitask learning methods such as Y-Net and state-of-the-art segmentation approaches such as UNet$++$. Further, performing denoising and phase unwrapping simultaneously enables deformation measurement from the highly noisy wrapped phase of DHI data. The simulations and experimental comparisons demonstrate the efficacy of the proposed approach in absolute phase reconstruction and 3D deformation measurement with respect to the existing conventional methods and state-of-the-art deep learning methods.
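This excerpt does not include the authors' source code, so the PyTorch-style sketch below is only a schematic of the general one-encoder, two-decoder idea described in the abstract; the layer sizes, module names, and number of fringe-order classes are invented for illustration and do not reproduce the actual pyramidal, densely connected TriNet. It shows a shared encoder, one head that outputs the denoised wrapped phase, a second head that predicts the fringe order as segmentation labels, and reconstruction of the absolute phase as the denoised phase plus \(2\pi\) times the fringe order.

```python
# Schematic one-encoder / two-decoder multitask sketch (illustrative only).
import math
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, kernel_size=3, padding=1),
        nn.BatchNorm2d(c_out),
        nn.ReLU(inplace=True),
    )

class TwoHeadNet(nn.Module):
    def __init__(self, num_fringe_orders=16):
        super().__init__()
        self.encoder = nn.Sequential(conv_block(1, 32), nn.MaxPool2d(2), conv_block(32, 64))
        self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
        # Decoder 1: denoised wrapped phase (regression).
        self.denoise_head = nn.Sequential(conv_block(64, 32), nn.Conv2d(32, 1, kernel_size=1))
        # Decoder 2: fringe order as per-pixel class labels (segmentation).
        self.fringe_head = nn.Sequential(conv_block(64, 32), nn.Conv2d(32, num_fringe_orders, kernel_size=1))

    def forward(self, noisy_wrapped):
        feats = self.up(self.encoder(noisy_wrapped))
        return self.denoise_head(feats), self.fringe_head(feats)

net = TwoHeadNet()
x = torch.randn(1, 1, 64, 64)                        # a noisy wrapped-phase image
denoised, fringe_logits = net(x)
fringe_order = fringe_logits.argmax(dim=1, keepdim=True).float()
unwrapped = denoised + 2 * math.pi * fringe_order    # absolute (unwrapped) phase
print(unwrapped.shape)                               # torch.Size([1, 1, 64, 64])
```

In a multitask setup of this kind, the denoising head is typically trained with a regression loss and the fringe-order head with a cross-entropy loss, with the two losses summed; that training detail is an assumption here, not taken from the excerpt.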
E. Cuche, F. Bevilacqua, and C. Depeursinge, "Digital holography for quantitative phase-contrast imaging," Opt. Lett. 24, 291–293 (1999).
T. R. Judge and P. Bryanston-Cross, "A review of phase unwrapping techniques in fringe analysis," Opt. Lasers Eng. 21, 199–239 (1994).
R. G. Waghmare, D. Mishra, G. S. Subrahmanyam, E. Banoth, and S. S. Gorthi, "Signal tracking approach for phase estimation in digital holographic interferometry," Appl. Opt. 53, 4150–4157 (2014).
K. Wang, Y. Li, Q. Kemao, J. Di, and J. Zhao, "One-step robust deep learning phase unwrapping," Opt. Express 27, 15100–15115 (2019).
Q. Li, C. Bao, J. Zhao, and Z. Jiang, "A new fast quality-guided flood-fill phase unwrapping algorithm," J. Phys. Conf. Ser. 1069, 012182 (2018).
S. V. D. Jeught, J. Sijbers, and J. J. Dirckx, "Fast Fourier-based phase unwrapping on the graphics processing unit in real-time imaging applications," J. Imaging 1, 31–44 (2015).
V. V. Volkov and Y. Zhu, "Deterministic phase unwrapping in the presence of noise," Opt. Lett. 28, 2156–2158 (2003).
M. Zhao, L. Huang, Q. Zhang, X. Su, A. Asundi, and Q. Kemao, "Quality-guided phase unwrapping technique: comparison of quality maps and guiding strategies," Appl. Opt. 50, 6214–6224 (2011).
S. S. Gorthi, G. Rajshekhar, and P. Rastogi, "Strain estimation in digital holographic interferometry using piecewise polynomial phase approximation based method," Opt. Express 18, 560–565 (2010).
R. G. Waghmare, P. R. Sukumar, G. R. K. S. Subrahmanyam, R. K. Singh, and D. Mishra, "Particle-filter-based phase estimation in digital holographic interferometry," J. Opt. Soc. Am. A 33, 326–332 (2016).
R. G. Waghmare, R. S. S. Gorthi, and D. Mishra, "Wrapped statistics-based phase retrieval from interference fringes," J. Mod. Opt. 63, 1384–1390 (2016).
G. Spoorthi, S. Gorthi, and R. K. S. S. Gorthi, "PhaseNet: a deep convolutional neural network for two-dimensional phase unwrapping," IEEE Signal Process. Lett. 26, 54–58 (2018).
G. Spoorthi, R. K. S. S. Gorthi, and S. Gorthi, "PhaseNet 2.0: phase unwrapping of noisy data based on deep learning approach," IEEE Trans. Image Process. 29, 4862–4872 (2020).
V. K. Sumanth and R. K. S. S. Gorthi, "A deep learning framework for 3D surface profiling of the objects using digital holographic interferometry," in IEEE International Conference on Image Processing (ICIP) (IEEE, 2020), pp. 2656–2660.
Z. Ren, Z. Xu, and E. Y. Lam, "End-to-end deep learning framework for digital holographic reconstruction," Adv. Photon. 1, 016004 (2019).
K. Wang, J. Dou, Q. Kemao, J. Di, and J. Zhao, "Y-Net: a one-to-two deep learning framework for digital holographic reconstruction," Opt. Lett. 44, 4765–4768 (2019).
Z. Zhou, M. M. R. Siddiquee, N. Tajbakhsh, and J. Liang, "UNet++: designing skip connections to exploit multiscale features in image segmentation," IEEE Trans. Med. Imaging 39, 1856–1867 (2019).
O. Ronneberger, P. Fischer, and T. Brox, "U-Net: convolutional networks for biomedical image segmentation," in International Conference on Medical Image Computing and Computer-Assisted Intervention (Springer, 2015), pp. 234–241.
V. Badrinarayanan, A. Kendall, and R. Cipolla, "SegNet: a deep convolutional encoder-decoder architecture for image segmentation," IEEE Trans. Pattern Anal. Mach. Intell. 39, 2481–2495 (2017).
S. Mehta, E. Mercan, J. Bartlett, D. Weaver, J. G. Elmore, and L. Shapiro, "Y-Net: joint segmentation and classification for diagnosis of breast biopsy images," in International Conference on Medical Image Computing and Computer-Assisted Intervention (Springer, 2018), pp. 893–901.
S. Ioffe and C. Szegedy, "Batch normalization: accelerating deep network training by reducing internal covariate shift," in International Conference on Machine Learning (PMLR, 2015), pp. 448–456.
A. Krizhevsky, I. Sutskever, and G. E. Hinton, "ImageNet classification with deep convolutional neural networks," in Advances in Neural Information Processing Systems (2012), pp. 1097–1105.
A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen, Z. Lin, N. Gimelshein, L. Antiga, A. Desmaison, A. Köpf, E. Yang, Z. DeVito, M. Raison, A. Tejani, S. Chilamkurthy, B. Steiner, L. Fang, J. Bai, and S. Chintala, "PyTorch: an imperative style, high-performance deep learning library," in Advances in Neural Information Processing Systems (2019), pp. 8026–8037.
D. Kingma and J. Ba, "Adam: a method for stochastic optimization," in 3rd International Conference on Learning Representations, San Diego, California (2014).
F. Lovergine, S. Stramaglia, G. Nico, and N. Veneziani, "Fast weighted least squares for solving the phase unwrapping problem," in IEEE International Geoscience and Remote Sensing Symposium. IGARSS'99 (Cat. No. 99CH36293) (IEEE, 1999), Vol. 2, pp. 1348–1350.
Fig. 1. Flow diagram of the proposed approach for two-dimensional phase unwrapping. The architectural details of the proposed TriNet are shown in Fig. 2.
Fig. 2. TriNet for two-dimensional phase unwrapping. Right wing of the architecture predicts the fringe order in terms of segmentation labels. Left wing of the architectures denoises the noisy input wrapped phase. The actual 3D profile is reconstructed by adding a denoised phase pattern to $2\pi \times$ the predicted fringe order. Note that lightweight arrows represent dense connections, and strong vertical arrows represent downsample ($\times 2$) operation.
Fig. 3. Example of synthetic sample $f(x,y)$ generated for this work. The governing equation of $f(x,y)$ is given in Eq. (2), and that of wrapped phase is in Eq. (3).
Fig. 4. Convergence of all the loss curves employed for training the architecture. (a) Loss curve for denoising branch, (b) loss curve for segmentation branch, and (c) total loss curve.
Fig. 5. Noisy wrapped phase along with corresponding 3D profiles (ground truths) of two random test samples. The colorbar shows the ground truth deformation profile of the objects in radiance as related by Eq. (1).
Fig. 6. Complete ablation study for both decoder wings of the architecture and the UNet$++$ architecture directly predicting the depth profile in radians through regression. The output of the left side decoder in TriNet is passed through QGP and CLSPU for phase unwrapping, and the right side decoder imitates the UNet$++$ architecture, which is considered for phase unwrapping in lines similar to that in PhaseNet 2.0. (a) TriNet denoise + QGP MSE: 0.0042. (b) TriNet denoiser + CLSPU MSE: 0.0039. (c) TriNet fringe order predictor (UNet$++$) MSE: 2.613. (d) UNet$++$ as regression framework MSE: 9.842. (e) TriNet MSE: 0.0018.
Fig. 7. Simple 2D Gaussian is considered for analyzing the performance of conventional and deep learning methods. (a) Wrapped phase at 0 dB. (b) Ground truth. (c) QGP. (d) CLSPU. (e) WKF. (f) PhaseNet 2.0. (g) TriNet.
Fig. 8. Performance of the deep learning methods on random test samples 1 and 2 (of Fig. 5) at ${-}10\;{\rm dB}$ SNR. First and third rows show predicted 3D profiles by PhaseNet 2.0, UNet$++$ without and with post-processing and by the proposed TriNet. Second and fourth rows show corresponding RMSE maps. Note the differences in the colorbars representing error ranges/depth profiles in radians.
Fig. 9. Variation of RMSE in phase reconstruction for the proposed approach versus other deep learning approaches at various SNRs of the interference fringes.
Fig. 10. Performance of the proposed TriNet and other conventional and deep learning methods on real deformation measurement (shown in radians). (a) Interference pattern. (b) Noisy wrapped phase. (c) QGP. (d) CLSPU. (e) WKF. (f) PhaseNet 2.0. (g) UNet$++$ (as an ablation study). (h) TriNet.
What is the units digit of the product of all the odd positive integers between 10 and 110?
Any odd multiple of 5 will end in a units digit of 5 (even multiples will end in a units digit of 0). Since all the integers we are multiplying are odd and some of them have a factor of 5, the product will be an odd multiple of 5 with a units digit of $\boxed{5}$. | Math Dataset |
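A quick brute-force check (not part of the original solution) confirms this:

```python
# Verify the units digit of the product of all odd integers strictly between 10 and 110.
product = 1
for n in range(11, 110, 2):   # 11, 13, ..., 109
    product *= n
print(product % 10)           # prints 5
```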
Active Calculus
Matthew Boelkins
Section 1.2 The notion of limit
Motivating Questions
What is the mathematical notion of limit and what role do limits play in the study of functions?
What is the meaning of the notation \(\lim_{x \to a} f(x) = L\text{?}\)
How do we go about determining the value of the limit of a function at a point?
How do we manipulate average velocity to compute instantaneous velocity?
In Section 1.1 we used a function, \(s(t)\text{,}\) to model the location of a moving object at a given time. Functions can model other interesting phenomena, such as the rate at which an automobile consumes gasoline at a given velocity, or the reaction of a patient to a given dosage of a drug. We can use calculus to study how a function value changes in response to changes in the input variable.
Think about the falling ball whose position function is given by \(s(t) = 64 - 16t^2\text{.}\) Its average velocity on the interval \([1,x]\) is given by
\begin{equation*} AV_{[1,x]} = \frac{s(x) - s(1)}{x-1} = \frac{(64-16x^2) - (64-16)}{x-1} = \frac{16 - 16x^2}{x-1}\text{.} \end{equation*}
Note that the average velocity is a function of \(x\text{.}\) That is, the function \(g(x) = \frac{16 - 16x^2}{x-1}\) tells us the average velocity of the ball on the interval from \(t = 1\) to \(t = x\text{.}\) To find the instantaneous velocity of the ball when \(t = 1\text{,}\) we need to know what happens to \(g(x)\) as \(x\) gets closer and closer to \(1\text{.}\) But also notice that \(g(1)\) is not defined, because it leads to the quotient \(0/0\text{.}\)
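One concrete way to explore this trend is to evaluate \(g(x)\) at inputs closer and closer to \(1\text{.}\) The short Python snippet below is simply one possible choice of computing technology for generating such values.

```python
# Average velocity of the falling ball on the interval [1, x]: g(x) = (16 - 16x^2)/(x - 1).
def g(x):
    return (16 - 16 * x**2) / (x - 1)

for x in [0.9, 0.99, 0.999, 1.001, 1.01, 1.1]:
    print(f"g({x}) = {g(x):.4f}")
# The outputs approach -32, suggesting an instantaneous velocity of -32 at t = 1.
```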
This is where the notion of a limit comes in. By using a limit, we can investigate the behavior of \(g(x)\) as \(x\) gets arbitrarily close, but not equal, to \(1\text{.}\) We first use the graph of a function to explore points where interesting behavior occurs.
Preview Activity 1.2.1.
Suppose that \(g\) is the function given by the graph below. Use the graph in Figure 1.2.1 to answer each of the following questions.
Determine the values \(g(-2)\text{,}\) \(g(-1)\text{,}\) \(g(0)\text{,}\) \(g(1)\text{,}\) and \(g(2)\text{,}\) if defined. If the function value is not defined, explain what feature of the graph tells you this.
For each of the values \(a = -1\text{,}\) \(a = 0\text{,}\) and \(a = 2\text{,}\) complete the following sentence: "As \(x\) gets closer and closer (but not equal) to \(a\text{,}\) \(g(x)\) gets as close as we want to ."
What happens as \(x\) gets closer and closer (but not equal) to \(a = 1\text{?}\) Does the function \(g(x)\) get as close as we would like to a single value?
Figure 1.2.1. Graph of \(y = g(x)\) for Preview Activity 1.2.1.
Subsection 1.2.1 The Notion of Limit
Limits give us a way to identify a trend in the values of a function as its input variable approaches a particular value of interest. We need a precise understanding of what it means to say "a function \(f\) has limit \(L\) as \(x\) approaches \(a\text{.}\)" To begin, think about a recent example.
In Preview Activity 1.2.1, we saw that as \(x\) gets closer and closer (but not equal) to 0, \(g(x)\) gets as close as we want to the value 4. At first, this may feel counterintuitive, because the value of \(g(0)\) is \(1\text{,}\) not \(4\text{.}\) But limits describe the behavior of a function arbitrarily close to a fixed input, and the value of the function at the fixed input does not matter. More formally, we say the following.
Definition 1.2.2.
Given a function \(f\text{,}\) a fixed input \(x = a\text{,}\) and a real number \(L\text{,}\) we say that \(f\) has limit \(L\) as \(x\) approaches \(a\), and write
\begin{equation*} \lim_{x \to a} f(x) = L \end{equation*}
provided that we can make \(f(x)\) as close to \(L\) as we like by taking \(x\) sufficiently close (but not equal) to \(a\text{.}\) If we cannot make \(f(x)\) as close to a single value as we would like as \(x\) approaches \(a\text{,}\) then we say that \(f\) does not have a limit as \(x\) approaches \(a\text{.}\)
Example 1.2.3.
For the function \(g\) pictured in Figure 1.2.1, we make the following observations:
\begin{equation*} \lim_{x \to -1} g(x) = 3, \ \lim_{x \to 0} g(x) = 4, \ \text{and} \ \lim_{x \to 2} g(x) = 1\text{.} \end{equation*}
When working from a graph, it suffices to ask if the function approaches a single value from each side of the fixed input. The function value at the fixed input is irrelevant. This reasoning explains the values of the three limits stated above.
However, \(g\) does not have a limit as \(x \to 1\text{.}\) There is a jump in the graph at \(x = 1\text{.}\) If we approach \(x = 1\) from the left, the function values tend to get close to 3, but if we approach \(x = 1\) from the right, the function values get close to 2. There is no single number that all of these function values approach. This is why the limit of \(g\) does not exist at \(x = 1\text{.}\)
For any function \(f\text{,}\) there are typically three ways to answer the question "does \(f\) have a limit at \(x = a\text{,}\) and if so, what is the limit?" The first is to reason graphically as we have just done with the example from Preview Activity 1.2.1. If we have a formula for \(f(x)\text{,}\) there are two additional possibilities:
Evaluate the function at a sequence of inputs that approach \(a\) on either side (typically using some sort of computing technology), and ask if the sequence of outputs seems to approach a single value.
Use the algebraic form of the function to understand the trend in its output values as the input values approach \(a\text{.}\)
The first approach produces only an approximation of the value of the limit, while the latter can often be used to determine the limit exactly.
Example 1.2.4. Limits of Two Functions.
For each of the following functions, we'd like to know whether or not the function has a limit at the stated \(a\)-values. Use both numerical and algebraic approaches to investigate and, if possible, estimate or determine the value of the limit. Compare the results with a careful graph of the function on an interval containing the points of interest.
\(f(x) = \frac{4-x^2}{x+2}\text{;}\) \(a = -1\text{,}\) \(a = -2\)
\(g(x) = \sin\left(\frac{\pi}{x}\right)\text{;}\) \(a = 3\text{,}\) \(a = 0\)
a. We first construct a graph of \(f\) along with tables of values near \(a = -1\) and \(a = -2\text{.}\)
Table 1.2.5. Table of \(f\) values near \(x=-1\text{.}\)
\(x\) \(f(x)\)
\(-0.9\) \(2.9\)
\(-0.99\) \(2.99\)
\(-0.999\) \(2.999\)
\(-0.9999\) \(2.9999\)
Table 1.2.6. Table of \(f\) values near \(x=-2\text{.}\)
\(x\) \(f(x)\)
\(-1.9\) \(3.9\)
\(-1.99\) \(3.99\)
\(-1.999\) \(3.999\)
\(-1.9999\) \(3.9999\)
Figure 1.2.7. Plot of \(f(x)\) on \([-4,2]\text{.}\)
From Table 1.2.5, it appears that we can make \(f\) as close as we want to 3 by taking \(x\) sufficiently close to \(-1\text{,}\) which suggests that \(\lim_{x \to -1} f(x) = 3\text{.}\) This is also consistent with the graph of \(f\text{.}\) To see this a bit more rigorously and from an algebraic point of view, consider the formula for \(f\text{:}\) \(f(x) = \frac{4-x^2}{x+2}\text{.}\) As \(x \to -1\text{,}\) \((4-x^2) \to (4 - (-1)^2) = 3\text{,}\) and \((x+2) \to (-1 + 2) = 1\text{,}\) so as \(x \to -1\text{,}\) the numerator of \(f\) tends to 3 and the denominator tends to 1, hence \(\lim_{x \to -1} f(x) = \frac{3}{1} = 3\text{.}\)
The situation is more complicated when \(x \to -2\text{,}\) because \(f(-2)\) is not defined. If we try to use a similar algebraic argument regarding the numerator and denominator, we observe that as \(x \to -2\text{,}\) \((4-x^2) \to (4 - (-2)^2) = 0\text{,}\) and \((x+2) \to (-2 + 2) = 0\text{,}\) so as \(x \to -2\text{,}\) the numerator and denominator of \(f\) both tend to 0. We call \(0/0\) an indeterminate form. This tells us that there is somehow more work to do. From Table 1.2.6 and Figure 1.2.7, it appears that \(f\) should have a limit of \(4\) at \(x = -2\text{.}\)
To see algebraically why this is the case, observe that
\begin{align*} \lim_{x \to -2} f(x) = \amp \lim_{x \to -2} \frac{4-x^2}{x+2}\\ = \amp \lim_{x \to -2} \frac{(2-x)(2+x)}{x+2}\text{.} \end{align*}
It is important to observe that, since we are taking the limit as \(x \to -2\text{,}\) we are considering \(x\) values that are close, but not equal, to \(-2\text{.}\) Because we never actually allow \(x\) to equal \(-2\text{,}\) the quotient \(\frac{2+x}{x+2}\) has value 1 for every possible value of \(x\text{.}\) Thus, we can simplify the most recent expression above, and find that
\begin{equation*} \lim_{x \to -2} f(x) = \lim_{x \to -2} 2-x\text{.} \end{equation*}
This limit is now easy to determine, and its value clearly is \(4\text{.}\) Thus, from several points of view we've seen that \(\lim_{x \to -2} f(x) = 4\text{.}\)
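If a computer algebra system is available, the same simplification and limit can be checked symbolically. A brief sketch using Python's sympy library:

```python
import sympy as sp

x = sp.symbols('x')
f = (4 - x**2) / (x + 2)

print(sp.cancel(f))        # 2 - x, the simplified form (valid for x != -2)
print(sp.limit(f, x, -2))  # 4
```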
b. Next we turn to the function \(g\text{,}\) and construct two tables and a graph.
Table 1.2.8. Table of \(g\) values near \(x=3\text{.}\)
\(x\) \(g(x)\)
\(2.9\) \(0.88351\)
\(2.99\) \(0.86777\)
\(2.999\) \(0.86620\)
\(2.9999\) \(0.86604\)
Table 1.2.9. Table of \(g\) values near \(x=0\text{.}\)
\(x\) \(g(x)\)
\(-0.1\) \(0\)
\(-0.01\) \(0\)
\(-0.001\) \(0\)
\(-0.0001\) \(0\)
\(0.1\) \(0\)
\(0.01\) \(0\)
\(0.001\) \(0\)
\(0.0001\) \(0\)
Figure 1.2.10. Plot of \(g(x)\) on \([-4,4]\text{.}\)
First, as \(x \to 3\text{,}\) it appears from the table values that the function is approaching a number between \(0.86601\) and \(0.86604\text{.}\) From the graph it appears that \(g(x) \to g(3)\) as \(x \to 3\text{.}\) The exact value of \(g(3) = \sin(\frac{\pi}{3})\) is \(\frac{\sqrt{3}}{2}\text{,}\) which is approximately 0.8660254038. This is convincing evidence that
\begin{equation*} \lim_{x \to 3} g(x) = \frac{\sqrt{3}}{2}\text{.} \end{equation*}
As \(x \to 0\text{,}\) we observe that \(\frac{\pi}{x}\) does not behave in an elementary way. When \(x\) is positive and approaching zero, we are dividing by smaller and smaller positive values, and \(\frac{\pi}{x}\) increases without bound. When \(x\) is negative and approaching zero, \(\frac{\pi}{x}\) decreases without bound. In this sense, as we get close to \(x = 0\text{,}\) the inputs to the sine function are growing rapidly, and this leads to increasingly rapid oscillations in the graph of \(g\) between \(1\) and \(-1\text{.}\) If we plot the function \(g(x) = \sin\left(\frac{\pi}{x}\right)\) with a graphing utility and then zoom in on \(x = 0\text{,}\) we see that the function never settles down to a single value near the origin, which suggests that \(g\) does not have a limit at \(x = 0\text{.}\)
How do we reconcile the graph with the righthand table above, which seems to suggest that the limit of \(g\) as \(x\) approaches \(0\) may in fact be \(0\text{?}\) The data misleads us because of the special nature of the sequence of input values \(\{0.1, 0.01, 0.001, \ldots\}\text{.}\) When we evaluate \(g(10^{-k})\text{,}\) we get \(g(10^{-k}) = \sin\left(\frac{\pi}{10^{-k}}\right) = \sin(10^k \pi) = 0\) for each positive integer value of \(k\text{.}\) But if we take a different sequence of values approaching zero, say \(\{0.3, 0.03, 0.003, \ldots\}\text{,}\) then we find that
\begin{equation*} g(3 \cdot 10^{-k}) = \sin\left(\frac{\pi}{3 \cdot 10^{-k}}\right) = \sin\left(\frac{10^k \pi}{3}\right) = -\frac{\sqrt{3}}{2} \approx -0.866025\text{.} \end{equation*}
That sequence of function values suggests that the value of the limit is \(-\frac{\sqrt{3}}{2}\text{.}\) Clearly the function cannot have two different values for the limit, so \(g\) has no limit as \(x \to 0\text{.}\)
An important lesson to take from Example 1.2.4 is that tables can be misleading when determining the value of a limit. While a table of values is useful for investigating the possible value of a limit, we should also use other tools to confirm the value.
Activity 1.2.2.
Estimate the value of each of the following limits by constructing appropriate tables of values. Then determine the exact value of the limit by using algebra to simplify the function. Finally, plot each function on an appropriate interval to check your result visually.
\(\displaystyle \lim_{x \to 1} \frac{x^2 - 1}{x-1}\)
\(\displaystyle \lim_{x \to 0} \frac{(2+x)^3 - 8}{x}\)
\(\displaystyle \lim_{x \to 0} \frac{\sqrt{x+1} - 1}{x}\)
Recall that our primary motivation for considering limits of functions comes from our interest in studying the rate of change of a function. To that end, we close this section by revisiting our previous work with average and instantaneous velocity and highlighting the role that limits play.
Subsection 1.2.2 Instantaneous Velocity
Suppose that we have a moving object whose position at time \(t\) is given by a function \(s\text{.}\) We know that the average velocity of the object on the time interval \([a,b]\) is \(AV_{[a,b]} = \frac{s(b)-s(a)}{b-a}\text{.}\) We define the instantaneous velocity at \(a\) to be the limit of average velocity as \(b\) approaches \(a\text{.}\) Note particularly that as \(b \to a\text{,}\) the length of the time interval gets shorter and shorter (while always including \(a\)). We will write \(IV_{t=a}\) for the instantaneous velocity at \(t = a\text{,}\) and thus
\begin{equation*} IV_{t=a} = \lim_{b \to a} AV_{[a,b]} = \lim_{b \to a} \frac{s(b)-s(a)}{b-a}\text{.} \end{equation*}
Equivalently, if we think of the changing value \(b\) as being of the form \(b = a + h\text{,}\) where \(h\) is some small number, then we may instead write
\begin{equation*} IV_{t=a} = \lim_{h \to 0} AV_{[a,a+h]} = \lim_{h \to 0} \frac{s(a+h)-s(a)}{h}\text{.} \end{equation*}
Again, the most important idea here is that to compute instantaneous velocity, we take a limit of average velocities as the time interval shrinks.
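As a quick numerical illustration of this idea, the sketch below (Python; the position function \(s(t) = t^3\) and the time \(a = 2\) are chosen only for illustration) computes average velocities over shrinking intervals \([a, a+h]\text{:}\)

```python
# Average velocities AV_[a, a+h] = (s(a+h) - s(a)) / h for shrinking h.
def s(t):
    return t ** 3  # sample position function, for illustration only

a = 2
for k in range(1, 7):
    h = 10 ** (-k)
    av = (s(a + h) - s(a)) / h
    print(f"h = {h:<10} AV = {av:.6f}")
```

The average velocities approach \(12\text{,}\) suggesting that the instantaneous velocity of this particular object at \(t = 2\) is \(12\text{.}\)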
Consider a moving object whose position function is given by \(s(t) = t^2\text{,}\) where \(s\) is measured in meters and \(t\) is measured in minutes.
Determine the most simplified expression for the average velocity of the object on the interval \([3, 3+h]\text{,}\) where \(h \gt 0\text{.}\)
Determine the average velocity of the object on the interval \([3,3.2]\text{.}\) Include units on your answer.
Determine the instantaneous velocity of the object when \(t = 3\text{.}\) Include units on your answer.
The closing activity of this section asks you to make some connections among average velocity, instantaneous velocity, and slopes of certain lines.
For the moving object whose position \(s\) at time \(t\) is given by the graph in Figure 1.2.11, answer each of the following questions. Assume that \(s\) is measured in feet and \(t\) is measured in seconds.
Figure 1.2.11. Plot of the position function \(y = s(t)\) in Activity 1.2.4.
Use the graph to estimate the average velocity of the object on each of the following intervals: \([0.5,1]\text{,}\) \([1.5,2.5]\text{,}\) \([0,5]\text{.}\) Draw each line whose slope represents the average velocity you seek.
How could you use average velocities or slopes of lines to estimate the instantaneous velocity of the object at a fixed time?
Use the graph to estimate the instantaneous velocity of the object when \(t = 2\text{.}\) Should this instantaneous velocity at \(t = 2\) be greater or less than the average velocity on \([1.5,2.5]\) that you computed in (a)? Why?
Subsection 1.2.3 Summary
Limits enable us to examine trends in function behavior near a specific point. In particular, taking a limit at a given point asks if the function values nearby tend to approach a particular fixed value.
We read \(\lim_{x \to a} f(x) = L\text{,}\) as "the limit of \(f\) as \(x\) approaches \(a\) is \(L\text{,}\)" which means that we can make the value of \(f(x)\) as close to \(L\) as we want by taking \(x\) sufficiently close (but not equal) to \(a\text{.}\)
To find \(\lim_{x \to a} f(x)\) for a given value of \(a\) and a known function \(f\text{,}\) we can estimate this value from the graph of \(f\text{,}\) or we can make a table of function values for \(x\)-values that are closer and closer to \(a\text{.}\) If we want the exact value of the limit, we can work with the function algebraically to understand how different parts of the formula for \(f\) change as \(x \to a\text{.}\)
We find the instantaneous velocity of a moving object at a fixed time by taking the limit of average velocities of the object over shorter and shorter time intervals containing the time of interest.
Exercises 1.2.4 Exercises
1. Limits on a piecewise graph.
Use the figure below, which gives a graph of the function \(f(x)\text{,}\) to give values for the indicated limits. If a limit does not exist, enter none.
(a) \(\lim\limits_{x \rightarrow -1} f(x)\) =
(b) \(\lim\limits_{x \rightarrow 0} f(x)\) =
(c) \(\lim\limits_{x \rightarrow 1} f(x)\) =
(d) \(\lim\limits_{x \rightarrow 4} f(x)\) =
2. Estimating a limit numerically.
Use a graph to estimate the limit
\begin{equation*} \lim_{\theta \rightarrow 0} \frac{\sin(6\theta)}{\theta}. \end{equation*}
Note: \(\theta\) is measured in radians. All angles will be in radians in this class unless otherwise specified.
\(\lim\limits_{\theta \rightarrow 0} \frac{\sin(6\theta)}{\theta} =\)
3. Limits for a piecewise formula.
For the function
\begin{equation*} f(x) = \begin{cases} x^2 - 4, \amp 0\le x \lt 4\\ 4, \amp x = 4\\ 3 x + 0, \amp 4 \lt x \end{cases} \end{equation*}
use algebra to find each of the following limits:
\(\lim\limits_{x\to 4^{+}}\, f(x) =\)
\(\lim\limits_{x\to 4^{-}}\, f(x) =\)
\(\lim\limits_{x\to 4}\, f(x) =\)
(For each, enter DNE if the limit does not exist.)
Sketch a graph of \(f(x)\) to confirm your answers.
4. Evaluating a limit algebraically.
Evaluate the limit
\begin{equation*} \lim_{ x \rightarrow -7 } \frac{x^2 - 49}{x + 7} \end{equation*}
If the limit does not exist enter DNE.
Limit =
Consider the function whose formula is \(f(x) = \frac{16-x^4}{x^2-4}\text{.}\)
What is the domain of \(f\text{?}\)
Use a sequence of values of \(x\) near \(a = 2\) to estimate the value of \(\lim_{x \to 2} f(x)\text{,}\) if you think the limit exists. If you think the limit doesn't exist, explain why.
Use algebra to simplify the expression \(\frac{16-x^4}{x^2-4}\) and hence work to evaluate \(\lim_{x \to 2} f(x)\) exactly, if it exists, or to explain how your work shows the limit fails to exist. Discuss how your findings compare to your results in (b).
True or false: \(f(2) = -8\text{.}\) Why?
True or false: \(\frac{16-x^4}{x^2-4} = -4-x^2\text{.}\) Why? How is this equality connected to your work above with the function \(f\text{?}\)
Based on all of your work above, construct an accurate, labeled graph of \(y = f(x)\) on the interval \([1,3]\text{,}\) and write a sentence that explains what you now know about \(\lim_{x \to 2} \frac{16-x^4}{x^2-4}\text{.}\)
Let \(g(x) = -\frac{|x+3|}{x+3}\text{.}\)
What is the domain of \(g\text{?}\)
Use a sequence of values near \(a = -3\) to estimate the value of \(\lim_{x \to -3} g(x)\text{,}\) if you think the limit exists. If you think the limit doesn't exist, explain why.
Use algebra to simplify the expression \(\frac{|x+3|}{x+3}\) and hence work to evaluate \(\lim_{x \to -3} g(x)\) exactly, if it exists, or to explain how your work shows the limit fails to exist. Discuss how your findings compare to your results in (b). (Hint: \(|a| = a\) whenever \(a \ge 0\text{,}\) but \(|a| = -a\) whenever \(a \lt 0\text{.}\))
True or false: \(g(-3) = -1\text{.}\) Why?
True or false: \(-\frac{|x+3|}{x+3} = -1\text{.}\) Why? How is this equality connected to your work above with the function \(g\text{?}\)
Based on all of your work above, construct an accurate, labeled graph of \(y = g(x)\) on the interval \([-4,-2]\text{,}\) and write a sentence that explains what you now know about \(\lim_{x \to -3} g(x)\text{.}\)
For each of the following prompts, sketch a graph on the provided axes of a function that has the stated properties.
Figure 1.2.12. Axes for plotting \(y = f(x)\) in (a) and \(y = g(x)\) in (b).
\(y = f(x)\) such that
\(f(-2) = 2\) and \(\lim_{x \to -2} f(x) = 1\)
\(f(1)\) is not defined and \(\lim_{x \to 1} f(x) = 0\)
\(f(2) = 1\) and \(\lim_{x \to 2} f(x)\) does not exist.
\(y = g(x)\) such that
\(g(-2) = 3\text{,}\) \(g(-1) = -1\text{,}\) \(g(1) = -2\text{,}\) and \(g(2) = 3\)
At \(x = -2, -1, 1\) and \(2\text{,}\) \(g\) has a limit, and its limit equals the value of the function at that point.
\(g(0)\) is not defined and \(\lim_{x \to 0} g(x)\) does not exist.
A bungee jumper dives from a tower at time \(t=0\text{.}\) Her height \(s\) in feet at time \(t\) in seconds is given by \(s(t) = 100\cos(0.75t) \cdot e^{-0.2t}+100\text{.}\)
Write an expression for the average velocity of the bungee jumper on the interval \([1,1+h]\text{.}\)
Use computing technology to estimate the value of the limit as \(h \to 0\) of the quantity you found in (a).
What is the meaning of the value of the limit in (b)? What are its units?
What follows here is not what mathematicians consider the formal definition of a limit. To be completely precise, it is necessary to quantify both what it means to say "as close to \(L\) as we like" and "sufficiently close to \(a\text{.}\)" That can be accomplished through what is traditionally called the epsilon-delta definition of limits. The definition presented here is sufficient for the purposes of this text. | CommonCrawl |
Why is push_back in C++ vectors constant amortized?
I am learning C++ and noticed that the running time for the push_back function for vectors is constant "amortized." The documentation further notes that "If a reallocation happens, the reallocation is itself up to linear in the entire size."
Shouldn't this mean the push_back function is $O(n)$, where $n$ is the length of the vector? After all, we are interested in worst case analysis, right?
I guess, crucially, I don't understand how the adjective "amortized" changes the running time.
algorithms time-complexity amortized-analysis
David Faux
With a RAM machine, allocating $n$ bytes of memory is not an $O(n)$ operation -- it is considered pretty much constant time.
– usul
The word "amortised" clearly indicates that we are not asking for the worst case or average case of one pushback operation but the amortised case of performing many pushback operations.
– gnasher729
The important word here is "amortized". Amortized analysis is an analysis technique that examines a sequence of $n$ operations. If the whole sequence runs in $T(n)$ time, then each operation in the sequence runs in $T(n)/n$ time on average. The idea is that while a few operations in the sequence might be costly, they can't happen often enough to weigh down the program. It's important to note that this is different from average case analysis over some input distribution or randomized analysis. Amortized analysis establishes a worst-case bound for the performance of an algorithm irrespective of the inputs. It's most commonly used to analyse data structures, which have a persistent state throughout the program.
One of the most common examples given is the analysis of a stack with a multipop operation that pops $k$ elements. A naive analysis of multipop would say that in the worst case multipop must take $O(n)$ time since it might have to pop off all the elements of the stack. However, if you look at a sequence of operations, you'll notice that the number of pops can not exceed the number of pushes. Thus over any sequence of $n$ operations the number of pops can't exceed $O(n)$, and so multipop runs in $O(1)$ amortized time even though occasionally a single call might take more time.
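A sketch of that stack in C++ (the names here are illustrative, not from any particular library):

```cpp
#include <cstddef>
#include <vector>

// Stack with a multipop operation: any sequence of n operations performs at
// most n pushes, hence at most n pops in total, so multipop is O(1) amortized.
struct Stack {
    std::vector<int> data;

    void push(int x) { data.push_back(x); }

    // Pop up to k elements (fewer if the stack empties first).
    void multipop(std::size_t k) {
        while (k > 0 && !data.empty()) {
            data.pop_back();
            --k;
        }
    }
};
```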
Now how does this relate to C++ vectors? Vectors are implemented with arrays so to increase the size of a vector you must reallocate memory and copy the whole array over. Obviously we wouldn't want to do this very often. So if you perform a push_back operation and the vector needs to allocate more space, it will increase the size by a factor $m$. Now this takes more memory, which you may not use in full, but the next few push_back operations all run in constant time.
Now if we do the amortized analysis of the push_back operation (which I found here) we'll find that it runs in constant amortized time. Suppose you have $n$ items and your multiplication factor is $m$. Then the number of reallocations is roughly $\log_m(n)$. The $i$th reallocation will cost proportional to $m^i$, about the size of the current array. Thus the total time for $n$ push_back operations is $\sum_{i=1}^{\log_m(n)}m^i \approx \frac{nm}{m-1}$, since it's a geometric series. Divide this by $n$ operations and we get that each operation takes $\frac{m}{m-1}$ time, a constant. Lastly you have to be careful about choosing your factor $m$. If it's too close to $1$ then this constant gets too large for practical applications, but if $m$ is too large, say 2, then you start wasting a lot of memory. The ideal growth rate varies by application, but I think some implementations use $1.5$.
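One way to see this growth strategy in practice is to watch capacity() as elements are appended; a small sketch in standard C++ (the exact growth factor is implementation-defined):

```cpp
#include <cstddef>
#include <iostream>
#include <vector>

int main() {
    std::vector<int> v;
    std::size_t last_capacity = v.capacity();
    for (int i = 0; i < 1000; ++i) {
        v.push_back(i);
        if (v.capacity() != last_capacity) {
            // A reallocation just happened: all existing elements were moved.
            std::cout << "size " << v.size()
                      << " -> capacity " << v.capacity() << '\n';
            last_capacity = v.capacity();
        }
    }
    // Only O(log n) reallocations occur, so the total copying work across n
    // push_back calls is O(n), i.e. O(1) amortized per call.
}
```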
Marc Khoury
Although @Marc has given (what I think is) an excellent analysis, some people might prefer to consider things from a slightly different angle.
One is to consider a slightly different way of doing a reallocation. Instead of copying all the elements from the old storage to the new storage immediately, consider copying only one element at a time -- i.e., each time you do a push_back, it adds the new element to the new space, and copies exactly one existing element from the old space to the new space. Assuming a growth factor of 2, it's pretty obvious that when the new space is full, we'd have finished copying all the elements from the old space to the new space, and each push_back would have taken exactly constant time. At that point, we'd discard the old space, allocate a new block of memory that was twice as large again, and repeat the process.
Pretty clearly, we can continue this indefinitely (or as long as there's memory available, anyway) and every push_back would involve adding one new element and copying one old element.
A typical implementation still has exactly the same number of copies -- but instead of doing the copies one at a time, it copies all the existing elements at once. On one hand, you're right: that does mean that if you look at individual invocations of push_back, some of them will be substantially slower than others. If we look at a long term average, however, the amount of copying done per invocation of push_back remains constant, regardless of the size of the vector.
Although it's irrelevant to the computational complexity, I think it's worth pointing out why it's advantageous to do things as they do, instead of copying one element per push_back, so the time per push_back remains constant. There are at least three reasons to consider.
The first is simply memory availability. The old memory can be freed for other uses only after the copying is finished. If you only copied one item at a time, the old block of memory would remain allocated much longer. In fact, you'd have one old block and one new block allocated essentially all the time. If you decided on a growth factor smaller than two (which you usually want) you'd need even more memory allocated all the time.
Second, if you only copied one old element at a time, indexing into the array would be a little more tricky -- each indexing operation would need to figure out whether the element at the given index was currently in the old block of memory or the new one. That's not terribly complex by any means, but for an elementary operation like indexing into an array, almost any slow-down could be significant.
Third, by copying all at once, you take much better advantage of caching. Copying all at once, you can expect both the source and destination to be in the cache in most cases, so the cost of a cache miss is amortized over the number of elements that will fit in a cache line. If you copy one element at a time, you might easily have a cache miss for every element you copy. That only changes the constant factor, not the complexity, but it can still be fairly significant -- for a typical machine, you could easily expect a factor of 10 to 20.
It's probably also worth considering the other direction for a moment: if you were designing a system with real-time requirements, it might well make sense to copy only one element at a time instead of all at once. Although overall speed might (or might not) be lower, you'd still have a hard upper bound on the time taken for a single execution of push_back -- presuming you had a real-time allocator (though of course, many real-time systems simply prohibit dynamic allocation of memory at all, at least in portions with real-time requirements).
Jerry Coffin
+1 This is a wonderful Feynman-style explanation.
– Kuba hasn't forgotten Monica
BellInequalityMaxQubits: Approximates the optimal value of a Bell inequality in qubit (i.e., 2-dimensional quantum) settings.
NonlocalGameValue: Computes the maximum value of a nonlocal game in a classical, quantum, or no-signalling setting.
BellInequalityMax: Bug fix when computing the classical value of a Bell inequality using measurements that have values other than $0, 1, 2, \ldots, d-1$.
KrausOperators: If the zero map is provided as input, this function now returns a single zero matrix Kraus operator, rather than an empty cell containing no Kraus operators.
XORGameValue: Bug fix when computing the value of some XOR games with complex entries.
# 1. Fundamentals of Network Analysis
A **node** is a fundamental unit of a network. It represents an entity or an individual in the system. For example, in a social network, nodes can represent people, while in a computer network, nodes can represent computers or devices.
An **edge** represents a connection or a relationship between two nodes. It can be directed or undirected, depending on whether the relationship has a specific direction. For example, in a social network, an edge can represent a friendship between two people, while in a transportation network, an edge can represent a road connecting two cities.
A **network** is a collection of nodes and edges. It can be represented as a graph, where nodes are represented by vertices and edges are represented by lines connecting the vertices. Networks can be used to model a wide range of systems, such as social networks, biological networks, and transportation networks.
Network analysis involves studying the properties and behaviors of networks. This includes analyzing the connectivity patterns, identifying important nodes or clusters, and understanding how information or influence spreads through the network.
In the following sections, we will explore different types of networks, network measures and metrics, and various models of network diffusion. We will also discuss real-world examples and applications of network analysis. Let's dive in!
# 1.1. Basic Concepts and Terminology
**Degree** is one of the most fundamental measures in network analysis. It refers to the number of edges connected to a node. In a social network, the degree of a person represents the number of friends they have. In a transportation network, the degree of a city represents the number of roads connecting to it.
**Path** refers to a sequence of nodes connected by edges. A path can be used to describe the route between two nodes in a network. The length of a path is the number of edges it contains.
**Connectedness** is a property of a network that describes how easily nodes can reach each other through paths. A network is considered connected if there is a path between any pair of nodes. If a network is not connected, it can be divided into multiple components, where each component is a connected subnetwork.
**Centrality** measures the importance or influence of a node in a network. There are different types of centrality measures, such as degree centrality, betweenness centrality, and closeness centrality. Degree centrality measures how well connected a node is to other nodes. Betweenness centrality measures how often a node lies on the shortest path between other nodes. Closeness centrality measures how close a node is to all other nodes in terms of shortest path length.
**Clustering coefficient** measures the extent to which nodes in a network tend to cluster together. It quantifies the likelihood that two nodes that are connected to a common neighbor are also connected to each other. A high clustering coefficient indicates a high level of local connectivity in the network.
# 1.2. Types of Networks
**Directed networks**, also known as **digraphs**, are networks where edges have a specific direction. In a directed network, an edge from node A to node B represents a relationship or influence from A to B, but not necessarily from B to A. Directed networks are used to model systems where the direction of relationships or flows is important, such as information flow in social networks or traffic flow in transportation networks.
**Undirected networks** are networks where edges do not have a specific direction. In an undirected network, an edge between node A and node B represents a symmetric relationship or interaction between A and B. Undirected networks are used to model systems where the direction of relationships is not important, such as friendship networks or co-authorship networks.
**Weighted networks** are networks where edges have weights or values associated with them. These weights can represent the strength, intensity, or frequency of a relationship between nodes. Weighted networks are used to model systems where the strength of relationships is important, such as communication networks or collaboration networks.
**Sparse networks** are networks where only a small fraction of all possible edges are present. Sparse networks are common in real-world systems, where connections between nodes are often limited or constrained. Sparse networks can be more challenging to analyze compared to dense networks, where most or all possible edges are present.
**Dense networks** are networks where a large fraction of all possible edges are present. Dense networks are common in systems where nodes are highly connected or interact with each other frequently. Dense networks can exhibit different properties and behaviors compared to sparse networks, and may require different analysis techniques.
# 1.3. Network Measures and Metrics
**Degree centrality** measures the number of connections a node has in a network. It is calculated by counting the number of edges connected to a node. Nodes with high degree centrality are often considered important or influential in a network.
**Betweenness centrality** measures the extent to which a node lies on the shortest paths between other nodes in a network. It quantifies the control or influence a node has over the flow of information or resources in a network. Nodes with high betweenness centrality often act as bridges or connectors between different parts of a network.
**Closeness centrality** measures how close a node is to all other nodes in a network in terms of shortest path length. It quantifies the efficiency of information or resource flow in a network. Nodes with high closeness centrality are often able to quickly disseminate information or resources to other nodes in a network.
**Eigenvector centrality** measures the influence of a node in a network based on the influence of its neighbors. It assigns higher centrality scores to nodes that are connected to other highly central nodes. Eigenvector centrality is often used to identify nodes with indirect influence or control over a network.
**Clustering coefficient** measures the extent to which nodes in a network tend to cluster together. It quantifies the likelihood that two nodes that are connected to a common neighbor are also connected to each other. A high clustering coefficient indicates a high level of local connectivity in the network.
**Assortativity** measures the tendency of nodes to connect to other nodes with similar characteristics. It quantifies the degree of homophily or heterophily in a network. Assortativity can be measured based on various node attributes, such as degree, age, or gender.
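Most of these measures are implemented in standard graph libraries. The sketch below uses Python's networkx package on a small made-up graph, purely to illustrate the calls:

```python
import networkx as nx

# Toy undirected graph, invented for illustration.
G = nx.Graph([("A", "B"), ("A", "C"), ("B", "C"), ("C", "D"), ("D", "E")])

print(nx.degree_centrality(G))                 # degree centrality per node
print(nx.betweenness_centrality(G))            # betweenness centrality
print(nx.closeness_centrality(G))              # closeness centrality
print(nx.eigenvector_centrality(G))            # eigenvector centrality
print(nx.pagerank(G))                          # PageRank scores
print(nx.average_clustering(G))                # average clustering coefficient
print(nx.degree_assortativity_coefficient(G))  # degree assortativity
```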
# 2. Diffusion Models
**Linear Threshold Model** is a popular model of network diffusion. It assumes that nodes have a threshold for adopting a behavior or spreading information. When the number of neighbors who have already adopted the behavior or information exceeds the threshold, a node adopts the behavior or information. The Linear Threshold Model is often used to study the spread of innovations or behaviors in social networks.
**Independent Cascade Model** is another widely used model of network diffusion. It assumes that nodes can independently influence their neighbors to adopt a behavior or spread information. Each edge has a probability of transmitting the behavior or information to its neighbor. The Independent Cascade Model is often used to study the spread of diseases or rumors in networks.
**Bass Diffusion Model** is a classic model of network diffusion. It assumes that the adoption of a behavior or spread of information is influenced by two factors: innovation and imitation. Innovation represents the initial adoption of the behavior or information by a few nodes, while imitation represents the spread of the behavior or information through social influence. The Bass Diffusion Model is often used to study the adoption of new products or technologies.
# 2.1. Linear Threshold Model
The Linear Threshold Model is a popular model of network diffusion. It assumes that nodes have a threshold for adopting a behavior or spreading information. When the number of neighbors who have already adopted the behavior or information exceeds the threshold, a node adopts the behavior or information.
The Linear Threshold Model can be represented as a directed or undirected network. Each node has a threshold value between 0 and 1, which represents the proportion of neighbors that need to adopt the behavior or information for the node to adopt.
The diffusion process starts with a set of seed nodes that have already adopted the behavior or information. At each time step, nodes evaluate the behavior or information of their neighbors and update their own state accordingly. If the proportion of neighbors who have adopted exceeds the threshold, a node adopts the behavior or information.
The diffusion process continues until no more nodes can adopt the behavior or information. The final state of the network represents the diffusion outcome, where some nodes have adopted and others have not.
The Linear Threshold Model is often used to study the spread of innovations, behaviors, or information in social networks. It can help researchers understand how different factors, such as network structure, node attributes, or initial conditions, influence the spread of behaviors or information.
Consider a social network where nodes represent individuals and edges represent friendships. Each node has a threshold value between 0 and 1, which represents the proportion of friends that need to adopt a behavior for the node to adopt. The diffusion process starts with a set of seed nodes that have already adopted the behavior. At each time step, nodes evaluate the behavior of their friends and update their own state accordingly. If the proportion of friends who have adopted exceeds the threshold, a node adopts the behavior.
## Exercise
Consider a social network with the following nodes and edges:
Nodes: A, B, C, D, E
Edges: (A, B), (A, C), (B, C), (C, D), (D, E)
Each node has a threshold value between 0 and 1:
Thresholds: A = 0.5, B = 0.6, C = 0.4, D = 0.7, E = 0.3
Assume that nodes A and B have already adopted a behavior. Use the Linear Threshold Model to simulate the diffusion process and determine the final state of the network.
### Solution
At the first time step, node C evaluates the behavior of its neighbors A, B, and D. Two of its three neighbors have adopted, and the proportion 2/3 ≈ 0.67 exceeds C's threshold of 0.4, so node C adopts the behavior.

At the next time step, node D evaluates the behavior of its neighbors C and E. Only C has adopted, and the proportion 1/2 = 0.5 does not exceed D's threshold of 0.7, so node D does not adopt. Node E's only neighbor, D, has not adopted, so node E does not adopt either.

No further adoptions are possible, so the diffusion process stops.

The final state of the network is:

Nodes: A, B, C, D, E
Adopted: Yes, Yes, Yes, No, No
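A small simulation sketch (plain Python, using the network and thresholds from this exercise) reproduces this outcome:

```python
# Linear Threshold Model on the exercise network (treated as undirected).
neighbors = {
    "A": {"B", "C"},
    "B": {"A", "C"},
    "C": {"A", "B", "D"},
    "D": {"C", "E"},
    "E": {"D"},
}
threshold = {"A": 0.5, "B": 0.6, "C": 0.4, "D": 0.7, "E": 0.3}
adopted = {"A", "B"}  # seed nodes

changed = True
while changed:
    changed = False
    for node, nbrs in neighbors.items():
        if node in adopted:
            continue
        if len(nbrs & adopted) / len(nbrs) > threshold[node]:
            adopted.add(node)
            changed = True

print(sorted(adopted))  # ['A', 'B', 'C']
```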
# 2.2. Independent Cascade Model
The Independent Cascade Model is another popular model of network diffusion. It assumes that each edge in the network has a probability of transmitting a behavior or information from one node to another. When a node adopts a behavior or information, it can influence its neighbors with a certain probability.
The Independent Cascade Model can be represented as a directed or undirected network. Each edge has a probability value between 0 and 1, which represents the likelihood of transmitting the behavior or information from the source node to the target node.
The diffusion process starts with a set of seed nodes that have already adopted the behavior or information. At each time step, nodes that have adopted the behavior or information can influence their neighbors with a certain probability. If a node is influenced by multiple neighbors, the probabilities are combined using a multiplication rule.
The diffusion process continues until no more nodes can be influenced. The final state of the network represents the diffusion outcome, where some nodes have adopted and others have not.
The Independent Cascade Model is often used to study the spread of viral marketing, information cascades, or rumors in social networks. It can help researchers understand how different factors, such as network structure, edge probabilities, or initial conditions, influence the spread of behaviors or information.
Consider a social network where nodes represent individuals and edges represent influence relationships. Each edge has a probability value between 0 and 1, which represents the likelihood of influencing the target node. The diffusion process starts with a set of seed nodes that have already adopted a behavior. At each time step, nodes that have adopted the behavior can influence their neighbors with a certain probability. If a node is influenced by multiple neighbors, the probabilities are combined using a multiplication rule.
## Exercise
Consider a social network with the following nodes and edges:
Nodes: A, B, C, D, E
Edges: (A, B), (A, C), (B, C), (C, D), (D, E)
Each edge has a probability value between 0 and 1:
Probabilities: (A, B) = 0.5, (A, C) = 0.3, (B, C) = 0.7, (C, D) = 0.4, (D, E) = 0.6
Assume that nodes A and B have already adopted a behavior. Use the Independent Cascade Model to simulate the diffusion process and determine the final state of the network.
### Solution
Because each influence attempt succeeds only with its stated probability, the outcome of the process is random. The trace below follows one possible realization in which every attempt happens to succeed.

At time step 1, the seed nodes A and B each attempt to influence their inactive neighbor C: A succeeds with probability 0.3 and B with probability 0.7. Suppose at least one attempt succeeds, so node C adopts. (The edge between A and B has no effect, since both nodes have already adopted.)

At time step 2, node C attempts to influence node D (probability 0.4); suppose it succeeds, so node D adopts.

At time step 3, node D attempts to influence node E (probability 0.6); suppose it succeeds, so node E adopts.

In this realization, the final state of the network is:

Nodes: A, B, C, D, E
Adopted: Yes, Yes, Yes, Yes, Yes

In other realizations some attempts fail, so the probability that each node ends up adopting is usually estimated by running the simulation many times.
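Because the model is probabilistic, the chance that each node eventually adopts is usually estimated by repeated simulation. A minimal sketch in plain Python (edge directions follow the order in which the exercise lists them):

```python
import random

# Independent Cascade Model: directed influence edges with transmission probabilities.
edges = {
    "A": [("B", 0.5), ("C", 0.3)],
    "B": [("C", 0.7)],
    "C": [("D", 0.4)],
    "D": [("E", 0.6)],
    "E": [],
}

def simulate(seeds):
    active = set(seeds)
    frontier = list(seeds)
    while frontier:
        nxt = []
        for node in frontier:
            for target, prob in edges[node]:
                if target not in active and random.random() < prob:
                    active.add(target)
                    nxt.append(target)
        frontier = nxt
    return active

runs = 10_000
counts = {node: 0 for node in edges}
for _ in range(runs):
    for node in simulate({"A", "B"}):
        counts[node] += 1

for node, c in sorted(counts.items()):
    print(node, c / runs)  # estimated probability that each node adopts
```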
# 2.3. Bass Diffusion Model
The Bass Diffusion Model is a widely used model in marketing and innovation research to study the adoption of new products or technologies in a population. It was developed by Frank Bass in 1969 and has been applied to various industries and contexts.
The Bass Diffusion Model assumes that the adoption of a new product or technology follows an S-shaped curve over time. The curve represents the cumulative number of adopters as a function of time, starting from an initial set of adopters.
The model is based on two parameters: the coefficient of innovation (p) and the coefficient of imitation (q). The coefficient of innovation represents the rate at which new adopters are added to the population, while the coefficient of imitation represents the rate at which potential adopters are influenced by existing adopters.
The Bass Diffusion Model can be represented mathematically, in discrete time, as:

$$
N(t) = N(t-1) + p\left(m - N(t-1)\right) + q\,\frac{N(t-1)}{m}\left(m - N(t-1)\right)
$$

where:

- N(t) is the cumulative number of adopters at time t
- m is the total potential market size
- p is the coefficient of innovation
- q is the coefficient of imitation

The first term adds adoptions driven by innovation (external influence acting on the remaining potential adopters, m - N(t-1)), while the second term adds adoptions driven by imitation (internal influence from those who have already adopted).
Let's consider the adoption of a new smartphone model in a population of 100,000 potential customers. The coefficient of innovation (p) is 0.05, and the coefficient of imitation (q) is 0.2. We want to predict the cumulative number of adopters at different time points.
Using the Bass Diffusion Model, we can calculate the cumulative number of adopters at time t as:

$$
N(t) = N(t-1) + 0.05\left(100{,}000 - N(t-1)\right) + 0.2\,\frac{N(t-1)}{100{,}000}\left(100{,}000 - N(t-1)\right)
$$
## Exercise
Using the Bass Diffusion Model, calculate the cumulative number of adopters at time t for the smartphone adoption scenario described above. Assume that the initial number of adopters (N(0)) is 1,000.
### Solution
Starting from N(0) = 1,000:

At time t = 1:

$$
N(1) = 1{,}000 + 0.05(99{,}000) + 0.2\cdot\frac{1{,}000}{100{,}000}(99{,}000) = 1{,}000 + 4{,}950 + 198 = 6{,}148
$$

At time t = 2:

$$
N(2) = 6{,}148 + 0.05(93{,}852) + 0.2\cdot\frac{6{,}148}{100{,}000}(93{,}852) \approx 6{,}148 + 4{,}693 + 1{,}154 \approx 11{,}995
$$

At time t = 3:

$$
N(3) \approx 11{,}995 + 0.05(88{,}005) + 0.2\cdot\frac{11{,}995}{100{,}000}(88{,}005) \approx 11{,}995 + 4{,}400 + 2{,}111 \approx 18{,}506
$$

And so on: adoption accelerates at first because of imitation and then slows as the remaining market m - N(t) shrinks, producing the characteristic S-shaped curve.
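The recursion is easy to iterate in a few lines of code; a minimal sketch in Python, using the parameters from this exercise:

```python
# Discrete-time Bass diffusion recursion.
m, p, q = 100_000, 0.05, 0.2  # market size, innovation, imitation coefficients
N = 1_000                     # cumulative adopters at t = 0

for t in range(1, 11):
    N = N + p * (m - N) + q * (N / m) * (m - N)
    print(f"t = {t:2d}  N(t) = {N:,.0f}")
# N(1) = 6,148; N(2) ≈ 11,995; N(3) ≈ 18,506; N(t) approaches m over time.
```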
# 2.4. Comparison and Applications of Models
There are several diffusion models that have been developed to study the spread of information, innovations, and behaviors in networks. Each model has its own assumptions and parameters, and is suited for different types of diffusion processes.
The Linear Threshold Model, Independent Cascade Model, and Bass Diffusion Model are three commonly used diffusion models. Each model captures different aspects of the diffusion process and can be applied to different contexts.
The Linear Threshold Model is based on the idea that individuals have a threshold for adopting a new behavior or innovation. When the number of neighbors who have already adopted the behavior exceeds the individual's threshold, they will adopt the behavior as well. This model is useful for studying the diffusion of behaviors that require social influence or peer pressure.
The Independent Cascade Model is based on the idea that the spread of information or behaviors happens through a series of independent events. Each node in the network has a certain probability of adopting the behavior or passing on the information to its neighbors. This model is useful for studying the spread of information in social networks.
The Bass Diffusion Model is based on the idea that the adoption of a new product or technology follows an S-shaped curve over time. It takes into account both the rate of innovation and the rate of imitation in the population. This model is useful for studying the adoption of new products or technologies in a market.
Each diffusion model has its own strengths and limitations, and the choice of model depends on the specific research question and context. Researchers often compare and evaluate different models to determine which one best fits their data and provides the most accurate predictions.
Let's consider a study that aims to understand the spread of a new health behavior in a social network. The researchers collected data on the adoption of the behavior among a group of individuals and the network connections between them. They applied the Linear Threshold Model, Independent Cascade Model, and Bass Diffusion Model to the data and compared the results.
The Linear Threshold Model provided insights into the role of social influence and peer pressure in the adoption of the health behavior. It identified key individuals in the network who had a high influence on others' adoption decisions.
The Independent Cascade Model provided insights into the dynamics of information spread in the network. It revealed the pathways through which the behavior spread and the timing of adoption events.
The Bass Diffusion Model provided insights into the overall adoption curve and the factors influencing the rate of adoption. It estimated the potential market size and the impact of different marketing strategies on adoption.
## Exercise
Think of a research question or real-world scenario where the spread of information, innovation, or behavior in a network is of interest. Choose one of the diffusion models discussed in this section (Linear Threshold Model, Independent Cascade Model, or Bass Diffusion Model) that you think would be most appropriate for studying this scenario. Explain your choice and briefly outline how you would apply the model to the scenario.
### Solution
One possible scenario is the spread of fake news on social media. In this case, the Independent Cascade Model would be most appropriate for studying the spread of information in a network. The model captures the idea that the spread of information happens through a series of independent events, where each node has a certain probability of sharing the fake news with its neighbors. To apply the model, we would need data on the network connections between users and the timing of information sharing events. We could then simulate the spread of the fake news using the Independent Cascade Model and analyze the dynamics of the spread, identify influential users, and evaluate strategies to mitigate the spread of fake news.
# 3. Network Influence and Contagion
Influence and contagion are key concepts in network analysis. Influence refers to the ability of one node in a network to affect the behavior or opinions of other nodes. Contagion refers to the spread of behaviors or opinions through a network.
Influence can be exerted through various mechanisms, such as direct communication, social norms, or social pressure. Understanding influence in a network can help us identify influential individuals or groups, predict the spread of behaviors or opinions, and design interventions or strategies to promote desired behaviors or opinions.
Contagion can occur through different processes, such as information diffusion, social contagion, or epidemic spreading. Understanding contagion in a network can help us analyze the dynamics of the spread, identify key factors or nodes that facilitate or hinder the spread, and develop models or strategies to control or prevent the spread.
Influence and contagion are often intertwined in network analysis. Influential nodes can play a crucial role in initiating or accelerating the spread of behaviors or opinions. The spread of behaviors or opinions, in turn, can influence the behavior or opinions of other nodes in the network.
For example, let's consider a study that aims to understand the spread of a new technology in a social network. The researchers collected data on the adoption of the technology among a group of individuals and the network connections between them. They analyzed the network to identify influential individuals who were more likely to adopt the technology early on. They then simulated the spread of the technology using a contagion model, taking into account the influence of the identified influential individuals. The results showed that the spread of the technology was accelerated by the influence of these individuals, and that their adoption behavior influenced the adoption behavior of others in the network.
## Exercise
Think of a real-world scenario where influence and contagion in a network are of interest. Describe the scenario and explain why understanding influence and contagion in the network would be valuable.
### Solution
One possible scenario is the spread of healthy eating habits in a community. Understanding influence and contagion in the network would be valuable for designing interventions or strategies to promote healthy eating habits. By identifying influential individuals who are already practicing healthy eating habits, we can target them with interventions or campaigns to encourage them to spread their habits to others in the network. By understanding the dynamics of the spread and the factors that facilitate or hinder it, we can develop effective strategies to promote healthy eating habits throughout the community.
# 3.1. Influence and Contagion in Networks
Influence and contagion are fundamental concepts in network analysis. They play a crucial role in understanding how behaviors, opinions, or information spread through a network.
Influence refers to the ability of one node in a network to affect the behavior or opinions of other nodes. It can be exerted through various mechanisms, such as direct communication, social norms, or social pressure. Influential nodes have a higher likelihood of influencing others and can play a key role in initiating or accelerating the spread of behaviors or opinions.
Contagion, on the other hand, refers to the spread of behaviors, opinions, or information through a network. It can occur through different processes, such as information diffusion, social contagion, or epidemic spreading. Contagion can be influenced by factors such as the structure of the network, the characteristics of the nodes, or the nature of the behavior or information being spread.
Understanding influence and contagion in a network can provide valuable insights into the dynamics of the spread and help us identify key factors or nodes that facilitate or hinder the spread. It can also help us predict the spread of behaviors or opinions, design interventions or strategies to promote desired behaviors or opinions, and control or prevent the spread of unwanted behaviors or opinions.
For example, consider a social media platform where users can share and interact with posts. Understanding influence and contagion in this network can help the platform identify influential users who have a higher likelihood of influencing others' engagement with posts. By promoting the posts of these influential users, the platform can increase the reach and impact of the content. Additionally, analyzing the contagion patterns can help the platform understand how information or trends spread through the network and design algorithms or features to enhance the spread of desirable content.
## Exercise
Think of a real-world scenario where influence and contagion in a network are of interest. Describe the scenario and explain why understanding influence and contagion in the network would be valuable.
### Solution
One possible scenario is the spread of a new product or innovation in a social network. Understanding influence and contagion in the network would be valuable for marketing and adoption strategies. By identifying influential individuals or groups who are more likely to adopt and promote the product, companies can target them with marketing campaigns or incentives to accelerate the spread. Analyzing the contagion patterns can help companies understand the dynamics of the spread and identify key factors or nodes that facilitate or hinder the adoption process. This knowledge can inform the design of effective marketing strategies and interventions to maximize the product's adoption and success.
# 3.2. Identifying Influential Nodes
Identifying influential nodes in a network is an important task in network analysis. Influential nodes have a higher ability to affect the behavior or opinions of other nodes and can play a key role in the spread of behaviors or information.
There are several methods and metrics that can be used to identify influential nodes in a network. Some common approaches include:
1. Degree centrality: This metric measures the number of connections a node has in the network. Nodes with a high degree centrality are often considered influential because they have a larger reach and can potentially influence more nodes.
2. Betweenness centrality: This metric measures the extent to which a node lies on the shortest paths between other nodes in the network. Nodes with a high betweenness centrality act as bridges or connectors between different parts of the network and can control the flow of information or behaviors.
3. Eigenvector centrality: This metric takes into account both the number of connections a node has and the centrality of its neighbors. Nodes with a high eigenvector centrality are connected to other influential nodes and can have a significant impact on the network.
4. PageRank: This algorithm, originally developed by Google, assigns a score to each node based on the importance of the nodes that link to it. Nodes with a high PageRank score are considered influential because they are connected to other influential nodes.
For example, let's consider a social network where individuals can influence each other's opinions on political issues. By analyzing the network and using the degree centrality metric, we can identify individuals who have a large number of connections and are likely to have a wide reach. These individuals can be targeted for political campaigns or initiatives to promote specific opinions or behaviors.
## Exercise
Consider the following network:
```
A -- B -- C
\ | /
\ | /
D
```
Using the degree centrality metric, identify the most influential node in the network.
### Solution
Nodes B and D are tied for the highest degree centrality: each is connected to three other nodes (B to A, C, and D; D to A, B, and C), while nodes A and C are each connected to only two other nodes. By degree centrality, B and D are therefore the most influential nodes in the network.
# 3.3. Measuring Contagion and Spreading Processes
Measuring contagion and spreading processes in a network is essential for understanding how behaviors or information spread and influence other nodes. There are several metrics and measures that can be used to quantify the extent and impact of contagion in a network.
One common metric is the contagion rate, which measures the proportion of nodes that have adopted a behavior or received information. The contagion rate can be calculated at different time points or stages of the spreading process to understand how the behavior or information spreads over time.
Another metric is the average path length, which measures the average number of steps or connections it takes for a behavior or information to spread from one node to another. A shorter average path length indicates a faster and more efficient spreading process.
The reproduction number, also known as the basic reproduction ratio, measures the average number of new infections or adoptions caused by a single infected or adopting node. A reproduction number greater than 1 indicates that the behavior or information is spreading, while a reproduction number less than 1 indicates that the spreading process is dying out.
For example, let's consider a network where a new product is being introduced. By measuring the contagion rate, average path length, and reproduction number, we can assess the effectiveness of the product's marketing campaign and understand how quickly and widely the product is being adopted by consumers.
## Exercise
Consider the following spreading process in a network:
```
A -- B -- C -- D
```
At time 0, node A adopts a behavior. At time 1, node A influences node B, who then influences node C, and so on. Calculate the contagion rate at time 2 and the average path length of the spreading process.
### Solution
At time 2, nodes A, B, and C have adopted the behavior, while node D has not. The contagion rate at time 2 is 75% (3 out of 4 nodes).
The behavior reaches nodes B, C, and D after 1, 2, and 3 steps respectively, so the average path length of the spreading process from the seed node A is (1 + 2 + 3)/3 = 2.
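These quantities are straightforward to compute with a graph library; a brief sketch using Python's networkx on the same toy path network:

```python
import networkx as nx

# Path network A - B - C - D from the exercise.
G = nx.path_graph(["A", "B", "C", "D"])

# Steps needed for something starting at A to reach each other node.
lengths = nx.shortest_path_length(G, source="A")
print(lengths)  # {'A': 0, 'B': 1, 'C': 2, 'D': 3}

others = [d for node, d in lengths.items() if node != "A"]
print(sum(others) / len(others))  # 2.0, the average path length from the seed
```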
# 3.4. Real-world Examples of Influence and Contagion
Influence and contagion can be observed in various real-world scenarios, ranging from the spread of diseases to the adoption of new technologies. Understanding these examples can provide insights into how network-based diffusion analysis can be applied in different domains.
One example is the spread of a viral disease in a population. By studying the network of interactions between individuals, researchers can identify influential nodes that are more likely to transmit the disease to others. This information can help in designing targeted interventions and prevention strategies.
Another example is the adoption of new technologies or innovations. In a social network, individuals are influenced by their peers' choices and behaviors. By analyzing the network structure and identifying influential nodes, researchers can predict the spread of the innovation and develop strategies to maximize its adoption.
Social media platforms also provide rich data for studying influence and contagion. By analyzing the network of connections between users and their interactions, researchers can understand how information and trends spread through the platform. This knowledge can be used for targeted advertising, content recommendation, and identifying influential users.
For example, consider the adoption of electric vehicles (EVs). By studying the network of interactions between individuals and their opinions on EVs, researchers can identify influential nodes who are more likely to influence others' attitudes towards EVs. This information can be used to design marketing campaigns and policies to promote EV adoption.
## Exercise
Think of a real-world scenario where influence and contagion play a significant role. Describe the scenario and explain how network-based diffusion analysis can provide insights and solutions.
### Solution
One example is the spread of misinformation on social media platforms. False information can quickly spread through networks, leading to negative consequences such as public panic or misinformation-based decisions. Network-based diffusion analysis can help identify influential nodes that are more likely to spread false information and develop strategies to counteract its spread, such as targeted fact-checking or promoting reliable sources of information.
# 4. Information Spread in Networks
Information spread refers to the process of how information, ideas, or opinions are transmitted through a network of individuals or entities. Understanding how information spreads in networks is crucial in various domains, such as marketing, public health, and social media analysis.
There are different types of information spread that can occur in networks. These include:
1. **Word-of-mouth spread**: This occurs when individuals share information with their immediate connections, such as friends, family, or colleagues. It is a common form of information spread in social networks.
2. **Influence-based spread**: This occurs when influential individuals or opinion leaders in a network actively promote or endorse certain information, leading to its wider dissemination. These influential nodes have a higher impact on the spread of information compared to others.
3. **Viral spread**: This occurs when information rapidly spreads across a network, often driven by the novelty, emotional appeal, or controversial nature of the content. Viral spread can lead to the rapid dissemination of information to a large audience.
4. **Selective spread**: This occurs when individuals selectively share information with specific subsets of their connections based on their interests, beliefs, or preferences. Selective spread can result in the formation of echo chambers or filter bubbles within a network.
Predicting information cascades, which refer to the sequential spread of information through a network, is an important aspect of information spread analysis. Researchers use various models and techniques to predict the likelihood and extent of information cascades, taking into account factors such as network structure, node characteristics, and content attributes.
For example, consider a social media platform where users can share posts and engage with each other's content. By analyzing the network of connections between users and the patterns of information spread, researchers can predict the likelihood of a post going viral and reaching a large audience. This information can be valuable for content creators, marketers, and platform administrators.
## Exercise
Think of a real-world scenario where predicting information cascades can be beneficial. Describe the scenario and explain how network-based diffusion analysis can help in predicting the spread of information.
### Solution
One example is predicting the spread of public health information during disease outbreaks. By analyzing the network of interactions between individuals and their engagement with health-related content, researchers can predict the likelihood of information cascades and identify influential nodes who can help in disseminating accurate information to a wider audience. This can aid in designing effective communication strategies and combating misinformation during public health emergencies.
# 4.1. Types of Information Spread
There are different types of information spread that can occur in networks. Understanding these types can help us analyze and predict how information spreads in various contexts.
1. **Word-of-mouth spread**: This type of information spread occurs when individuals share information with their immediate connections, such as friends, family, or colleagues. It is a common form of information transmission in social networks. For example, if you recommend a book to a friend and they recommend it to their friends, the information about the book is spreading through word-of-mouth.
2. **Influence-based spread**: In this type of information spread, influential individuals or opinion leaders actively promote or endorse certain information, leading to its wider dissemination. These influential nodes have a higher impact on the spread of information compared to others. For example, a celebrity endorsing a product on social media can significantly influence the spread of information about that product.
3. **Viral spread**: Viral spread refers to the rapid and widespread dissemination of information across a network. It is often driven by the novelty, emotional appeal, or controversial nature of the content. Viral spread can lead to the information reaching a large audience in a short period. For example, a funny video or a shocking news story can go viral on social media platforms.
4. **Selective spread**: Selective spread occurs when individuals selectively share information with specific subsets of their connections based on their interests, beliefs, or preferences. This type of spread can result in the formation of echo chambers or filter bubbles within a network, where individuals are exposed to a limited range of information that aligns with their existing views. For example, individuals may share political news articles only with like-minded friends, reinforcing their own beliefs.
Understanding these different types of information spread can help us analyze and predict the dynamics of information dissemination in networks. By studying the patterns and mechanisms of information spread, we can develop strategies to optimize the spread of desired information and mitigate the spread of misinformation or harmful content.
# 4.2. Predicting Information Cascades
Predicting information cascades in networks is a challenging task, but it can provide valuable insights into the dynamics of information spread. An information cascade occurs when a piece of information spreads through a network, with individuals adopting the information based on the behavior of their neighbors.
To predict information cascades, we can use various methods and techniques. One common approach is to model the spread of information as a diffusion process on the network. Diffusion models, such as the Independent Cascade Model and the Linear Threshold Model, can be used to simulate the spread of information and predict the likelihood of adoption by different nodes in the network.
In the Independent Cascade Model, each edge in the network has a probability of transmitting the information to its neighboring node. This probability can be based on factors such as the strength of the relationship between nodes or the similarity of their attributes. By simulating multiple iterations of the diffusion process, we can estimate the probability of adoption for each node and identify the most influential nodes in the cascade.
The Linear Threshold Model, on the other hand, assigns a threshold value to each node in the network. When the cumulative influence from its neighbors exceeds the threshold, the node adopts the information. By iteratively updating the states of the nodes based on their neighbors' influence, we can simulate the cascade and predict the adoption probabilities.
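To make the first of these models concrete, here is a minimal sketch of the Independent Cascade Model written with NetworkX. The random graph, seed nodes, and uniform transmission probability are illustrative assumptions rather than values from any particular study.

```python
import random
import networkx as nx

def independent_cascade(graph, seeds, p=0.1, rng=random):
    """Simulate one run of the Independent Cascade Model.

    Each newly activated node gets a single chance to activate each
    of its inactive neighbors, succeeding with probability p.
    """
    active = set(seeds)        # nodes that have adopted the information
    frontier = set(seeds)      # nodes activated in the previous step
    while frontier:
        next_frontier = set()
        for node in frontier:
            for neighbor in graph.neighbors(node):
                if neighbor not in active and rng.random() < p:
                    next_frontier.add(neighbor)
        active |= next_frontier
        frontier = next_frontier
    return active

# Illustrative run on a small random graph with two seed nodes
random.seed(42)
G = nx.erdos_renyi_graph(n=100, p=0.05, seed=42)
adopters = independent_cascade(G, seeds=[0, 1], p=0.1)
print(f"Cascade size: {len(adopters)} of {G.number_of_nodes()} nodes")
```

Running the simulation many times and averaging the cascade size gives an estimate of each node's adoption probability.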
In addition to diffusion models, machine learning algorithms can also be used to predict information cascades. By training a model on historical cascade data, we can learn patterns and features that are indicative of successful cascades. These features can include network structure, node attributes, and temporal dynamics. The trained model can then be used to predict the likelihood of a new cascade based on its characteristics.
Overall, predicting information cascades requires a combination of network analysis, diffusion modeling, and machine learning techniques. By understanding the underlying mechanisms of information spread and leveraging predictive models, we can gain insights into the dynamics of cascades and make informed decisions in various domains, such as marketing, public health, and social media analysis.
## Exercise
Consider a social network where individuals share news articles with their friends. You have access to the network structure and historical data on article sharing. Your task is to predict the likelihood of a new article going viral in the network.
1. What diffusion model would you use to simulate the spread of the article?
2. What features or factors would you consider when training a machine learning model to predict viral articles?
### Solution
1. To simulate the spread of the article, you can use the Independent Cascade Model or the Linear Threshold Model. These models capture the dynamics of information diffusion and can estimate the probability of adoption by different nodes in the network.
2. When training a machine learning model to predict viral articles, you can consider various features and factors, such as:
- Network structure: The connectivity and centrality of nodes in the network can influence the spread of articles.
- Node attributes: The characteristics of individuals, such as their interests, demographics, or previous sharing behavior, can affect their likelihood of sharing viral articles.
- Content features: The content of the articles, such as the topic, sentiment, or novelty, can impact their virality.
- Temporal dynamics: The time of article sharing, such as the day of the week or the time of day, can play a role in predicting virality.
By incorporating these features into the machine learning model, you can learn patterns and relationships that can help predict the likelihood of a new article going viral in the network.
# 4.3. Factors Affecting Information Spread
The spread of information in networks is influenced by various factors. Understanding these factors can help us analyze and predict the dynamics of information cascades. Here are some key factors that affect information spread:
1. Network structure: The structure of the network, including the connectivity, density, and centrality of nodes, plays a crucial role in information spread. Nodes with higher degrees or centrality are more likely to receive and transmit information, leading to faster and wider cascades.
2. Node attributes: The characteristics of individual nodes, such as their influence, credibility, and susceptibility to peer pressure, can impact the spread of information. Nodes with higher influence or credibility are more likely to initiate and propagate cascades.
3. Content characteristics: The content of the information itself, including its relevance, novelty, emotional appeal, and controversy, can affect its spread. Information that is perceived as more relevant, novel, or emotionally engaging is more likely to be shared and spread rapidly.
4. Social influence: The behavior and opinions of peers and social ties can influence an individual's decision to adopt or reject information. Social influence mechanisms, such as social reinforcement, social proof, and social norms, can shape the spread of information in networks.
5. Temporal dynamics: The timing and sequence of information adoption can impact its spread. Factors such as the timing of exposure, the order of adoption by influential nodes, and the decay of information over time can influence the dynamics of cascades.
6. External events and context: The external environment, including current events, cultural norms, and social context, can affect the spread of information. External events that align with the content or trigger emotions can amplify cascades, while cultural or social factors can shape the adoption or rejection of information.
By considering these factors and analyzing their interplay, we can gain insights into the mechanisms of information spread and develop strategies to influence or control cascades in various domains, such as marketing, public health, and social media analysis.
## Exercise
Think of a recent example of information spread in a network, such as a viral social media post or a news article that gained widespread attention. Analyze the factors that might have influenced the spread of this information based on the factors discussed in this section.
### Solution
For example, let's consider a viral social media post that gained widespread attention. The post was a video clip showing a heartwarming act of kindness, where a stranger helped an elderly person in need. The factors that might have influenced the spread of this information are:
1. Network structure: The post was shared by influential users with a large number of followers, who acted as "super-spreaders" in the network. Their high connectivity and centrality allowed the information to reach a wide audience quickly.
2. Node attributes: The credibility and trustworthiness of the users who shared the post played a role in its spread. Users with a reputation for sharing reliable and heartwarming content were more likely to be trusted and followed by others.
3. Content characteristics: The emotional appeal and positive nature of the video clip made it highly shareable. The heartwarming act of kindness resonated with people's emotions and motivated them to share it with their friends and followers.
4. Social influence: The behavior of peers and social ties influenced individuals' decision to share the post. Seeing others sharing the video clip created a sense of social proof and reinforced the perception that it was worth sharing.
5. Temporal dynamics: The timing of the post's release coincided with a time when people were seeking positive and uplifting content. The post provided a welcome distraction and an opportunity for people to spread positivity during a challenging period.
6. External events and context: The post aligned with the cultural norms and values of kindness and empathy. It tapped into the collective desire for heartwarming stories and served as a reminder of the power of small acts of kindness.
By analyzing these factors, we can understand why the post gained widespread attention and identify strategies to create and promote content that is more likely to spread in networks.
# 4.4. Social Media and Information Diffusion
Social media platforms have revolutionized the way information spreads in networks. With billions of users and a constant stream of content, social media platforms provide an unprecedented opportunity for information diffusion. Here are some key characteristics of social media and their impact on information spread:
1. Virality: Social media platforms enable information to spread rapidly and widely through viral mechanisms. A single post or tweet can reach millions of users within hours, creating a cascade of shares and retweets.
2. Network effects: Social media platforms are built on network effects, where the value of the platform increases as more users join and engage with content. This creates a feedback loop where popular content attracts more users, leading to even greater reach and exposure.
3. Algorithmic curation: Social media platforms use algorithms to curate and personalize content for each user. These algorithms prioritize content based on factors such as relevance, engagement, and user preferences. This can amplify the spread of information by promoting popular or trending content to a wider audience.
4. Echo chambers and filter bubbles: Social media platforms can create echo chambers and filter bubbles, where users are primarily exposed to content that aligns with their existing beliefs and preferences. This can lead to the selective spread of information and the reinforcement of existing biases.
5. User-generated content: Social media platforms rely on user-generated content, allowing anyone to create and share information. This democratization of content creation gives individuals the power to shape narratives and influence public opinion.
6. Real-time feedback: Social media platforms provide real-time feedback through likes, comments, and shares. This feedback loop can influence the spread of information by signaling popularity, credibility, and social validation.
7. Amplification of emotions: Social media platforms are highly emotive environments, where content that elicits strong emotions is more likely to be shared. Emotional content, such as inspiring stories, shocking news, or humorous videos, has a higher chance of going viral.
Understanding the dynamics of information spread on social media is essential for marketers, researchers, and policymakers. By leveraging the unique characteristics of social media platforms, we can design strategies to maximize the reach and impact of information campaigns, identify influential users, and mitigate the spread of misinformation and harmful content.
## Exercise
Think of a recent example of information spread on social media, such as a viral hashtag or a controversial news article. Analyze how the characteristics of social media discussed in this section might have influenced the spread of this information.
### Solution
For example, let's consider a recent viral hashtag campaign on social media. The hashtag was #BlackLivesMatter, which gained widespread attention and sparked conversations about racial injustice and police brutality. The characteristics of social media that influenced the spread of this hashtag are:
1. Virality: The hashtag spread rapidly and widely across social media platforms, with millions of users sharing posts and using the hashtag in their own content. The viral nature of the hashtag allowed it to reach a global audience and generate widespread awareness.
2. Network effects: The popularity and engagement with the hashtag attracted more users to join the conversation and share their own experiences and perspectives. The network effects of social media platforms amplified the reach and impact of the hashtag.
3. Algorithmic curation: Social media algorithms played a role in promoting and amplifying the visibility of the hashtag. The algorithms identified the high engagement and relevance of the hashtag and prioritized it in users' feeds, leading to increased exposure and participation.
4. Echo chambers and filter bubbles: The hashtag campaign broke through echo chambers and filter bubbles by reaching users with diverse backgrounds and perspectives. It created a space for dialogue and mobilized individuals who were previously unaware or disconnected from the issue.
5. User-generated content: The hashtag campaign empowered individuals to share their personal stories, opinions, and calls to action. User-generated content played a critical role in humanizing the issue and fostering empathy and solidarity among users.
6. Real-time feedback: The hashtag campaign received real-time feedback through likes, comments, and shares. This feedback provided social validation and encouraged more users to engage with the hashtag, contributing to its spread and longevity.
7. Amplification of emotions: The hashtag campaign evoked strong emotions, such as anger, empathy, and solidarity. The emotional resonance of the campaign motivated users to share the hashtag and participate in discussions, driving its viral spread.
By analyzing these characteristics, we can understand why the hashtag campaign was successful in raising awareness and mobilizing individuals. This analysis can inform future social media campaigns and strategies for social change.
# 5. Network Dynamics and Evolution
Networks are not static entities. They evolve and change over time, influenced by various factors such as growth, interactions between nodes, and external events. Understanding network dynamics and evolution is crucial for analyzing and predicting the behavior of complex systems.
5.1. Network Growth and Change
Networks can grow in different ways. One common model of network growth is preferential attachment, where new nodes are more likely to connect to existing nodes with a high degree. This leads to the formation of hubs, nodes with a large number of connections. Another model is random attachment, where new nodes connect to existing nodes randomly, resulting in a more uniform degree distribution.
Networks can also change through the addition or removal of nodes and edges. Nodes can join or leave the network, and edges can be formed or severed. These changes can be driven by various factors, such as the introduction of new technologies, the emergence of new relationships, or the dissolution of existing connections.
5.2. Network Centrality and Its Evolution
Centrality measures the importance or influence of nodes in a network. There are various centrality measures, including degree centrality, betweenness centrality, and eigenvector centrality. These measures can help identify key nodes in a network and understand their role in information diffusion, influence, and connectivity.
The centrality of nodes can change over time as the network evolves. New nodes may become central due to their connections to other important nodes, while previously central nodes may lose their influence. Tracking the evolution of centrality can provide insights into the dynamics of a network and the changing roles of its nodes.
5.3. Network Cascades and Phase Transitions
Network cascades occur when a change or event spreads through a network, influencing the behavior or state of nodes. Examples of network cascades include the spread of diseases, the diffusion of innovations, and the propagation of rumors.
Network cascades can exhibit phase transitions, where a small change in the initial conditions or parameters of the cascade leads to a dramatic change in its behavior. Understanding phase transitions can help predict the likelihood and extent of cascades in a network, as well as identify strategies to control or mitigate their effects.
5.4. Case Studies of Network Dynamics
To illustrate the concepts of network dynamics and evolution, let's look at some case studies:
1. Social networks: Social networks such as Facebook and Twitter have undergone significant changes in their structure and dynamics. For example, the introduction of new features like the news feed or the implementation of algorithms for content curation have influenced the way information spreads and the patterns of user interactions.
2. Transportation networks: Transportation networks, such as road or airline networks, evolve as new routes are added or removed. Changes in transportation networks can have significant implications for accessibility, traffic flow, and the spread of diseases.
3. Collaboration networks: Collaboration networks among scientists or researchers can change as new collaborations are formed or existing collaborations dissolve. Studying the evolution of collaboration networks can provide insights into the dynamics of knowledge creation and dissemination.
By analyzing these case studies, we can gain a deeper understanding of how networks change over time and the factors that drive their evolution. This knowledge can inform the design of interventions, policies, and strategies to optimize network performance and resilience.
# 5.1. Network Growth and Change
Networks can grow in different ways. One common model of network growth is preferential attachment, where new nodes are more likely to connect to existing nodes with a high degree. This leads to the formation of hubs, nodes with a large number of connections. Another model is random attachment, where new nodes connect to existing nodes randomly, resulting in a more uniform degree distribution.
Networks can also change through the addition or removal of nodes and edges. Nodes can join or leave the network, and edges can be formed or severed. These changes can be driven by various factors, such as the introduction of new technologies, the emergence of new relationships, or the dissolution of existing connections.
For example, consider a social network where individuals can form connections with their friends. In a preferential attachment model, new individuals are more likely to connect with individuals who already have many friends. This leads to the formation of popular individuals who act as hubs in the network. On the other hand, in a random attachment model, new individuals connect with existing individuals randomly, resulting in a more even distribution of connections.
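The two growth models can be compared directly in code. The sketch below uses standard NetworkX generators as stand-ins for preferential and random attachment; the network size and attachment parameters are arbitrary choices for illustration.

```python
import networkx as nx

# Preferential attachment: new nodes favor high-degree nodes, so hubs emerge
ba = nx.barabasi_albert_graph(n=1000, m=3, seed=1)

# Random attachment: edges are placed uniformly at random, degrees stay more uniform
er = nx.gnm_random_graph(n=1000, m=ba.number_of_edges(), seed=1)

for name, g in [("preferential", ba), ("random", er)]:
    degrees = [d for _, d in g.degree()]
    print(f"{name}: max degree = {max(degrees)}, "
          f"mean degree = {sum(degrees) / len(degrees):.1f}")
```

With the same number of nodes and edges, the preferential-attachment graph typically shows a much larger maximum degree, reflecting the formation of hubs.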
## Exercise
Think about a real-world network that you are familiar with (e.g., a social network, a transportation network, a collaboration network). How do you think this network grows and changes over time? What factors contribute to its growth and change?
### Solution
The growth and change of a social network can be influenced by various factors, such as the introduction of new social media platforms, the formation and dissolution of friendships, and the migration of individuals. These factors can lead to the addition or removal of nodes and edges in the network, resulting in changes in its structure and dynamics.
# 5.2. Network Centrality and Its Evolution
Centrality is a measure of the importance or influence of a node in a network. There are several different centrality measures that capture different aspects of node importance. One common centrality measure is degree centrality, which is simply the number of connections a node has. Nodes with a high degree centrality are often considered to be more important or influential in the network.
Centrality can also evolve over time as the network changes. Nodes that were once central may become less central, and vice versa. This can happen due to the addition or removal of nodes and edges, as well as changes in the distribution of connections. Understanding how centrality evolves can provide insights into the dynamics of the network and the changing roles of nodes.
For example, consider a collaboration network among researchers. Initially, a few well-established researchers may have a high degree centrality due to their extensive collaborations. However, as new researchers enter the field and form collaborations, the centrality of these established researchers may decrease. On the other hand, new researchers who quickly form collaborations with many others may see their centrality increase over time.
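One simple way to track such changes is to recompute a centrality measure on successive snapshots of the network. The tiny hand-made snapshots below are purely illustrative.

```python
import networkx as nx

# Illustrative snapshots of a growing collaboration network
snapshots = [
    nx.Graph([(0, 1), (0, 2)]),                           # year 1: node 0 is central
    nx.Graph([(0, 1), (0, 2), (3, 1), (3, 2)]),           # year 2: node 3 enters
    nx.Graph([(0, 1), (3, 1), (3, 2), (3, 4), (3, 5)]),   # year 3: node 3 dominates
]

for year, g in enumerate(snapshots, start=1):
    centrality = nx.degree_centrality(g)
    top = max(centrality, key=centrality.get)
    print(f"year {year}: most central node = {top}, centrality = {centrality[top]:.2f}")
```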
## Exercise
Think about a network that you are familiar with. How do you think the centrality of nodes in this network evolves over time? Can you identify any specific nodes that have experienced changes in centrality? Why do you think these changes have occurred?
### Solution
In a transportation network, the centrality of nodes may evolve as new routes are added or existing routes are modified. Nodes that were once central may become less central if alternative routes become available. On the other hand, nodes that were previously less central may become more central if they become key hubs for connecting different routes. These changes in centrality can occur due to changes in travel patterns, population growth, or infrastructure development.
# 5.3. Network Cascades and Phase Transitions
Network cascades occur when a change or behavior spreads through a network, influencing the behavior of connected nodes. Cascades can range from small, localized changes to large-scale, global shifts in the network. Understanding cascades is important for predicting and managing the spread of information, diseases, and other phenomena in networks.
Phase transitions are critical points in a network where a small change in one variable can lead to a large change in the behavior of the network as a whole. These transitions can occur when the density or connectivity of the network reaches a certain threshold. At this threshold, the network undergoes a qualitative change in its behavior, such as a sudden increase in cascades or a shift in the dominant behavior.
For example, consider a social network where individuals can adopt a new behavior. Initially, there may be only a few individuals who have adopted the behavior, and the cascade is limited. However, as more individuals adopt the behavior, the cascade can reach a critical point where it spreads rapidly through the network, leading to a large-scale adoption of the behavior.
Phase transitions can also occur in other types of networks, such as biological networks or transportation networks. In a biological network, a small change in the connectivity of certain genes or proteins can lead to a large-scale change in the behavior of the entire system. In a transportation network, a small increase in traffic or congestion can lead to a sudden breakdown of the network, causing widespread delays and disruptions.
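A phase transition of this kind can be illustrated with a simple Linear Threshold simulation. When the adoption threshold is low enough that a single active neighbor suffices, cascades typically sweep the whole network; a slightly higher threshold usually confines them to the neighborhood of the seeds. The regular random graph, threshold values, and seed count below are illustrative assumptions, and exact cascade sizes will vary with the random draws.

```python
import random
import networkx as nx

def linear_threshold(graph, seeds, thresholds):
    """Linear Threshold Model: a node adopts once the fraction of its
    neighbors that have adopted reaches its own threshold."""
    active = set(seeds)
    changed = True
    while changed:
        changed = False
        for node in graph.nodes():
            if node in active:
                continue
            nbrs = list(graph.neighbors(node))
            if nbrs and sum(n in active for n in nbrs) / len(nbrs) >= thresholds[node]:
                active.add(node)
                changed = True
    return active

random.seed(3)
G = nx.random_regular_graph(d=6, n=500, seed=3)   # every node has degree 6
seeds = random.sample(list(G.nodes()), 10)

for theta in (0.10, 0.15, 0.20, 0.25):
    thresholds = {node: theta for node in G}
    size = len(linear_threshold(G, seeds, thresholds))
    print(f"threshold {theta:.2f} -> cascade of {size} nodes")
```

Because every node has degree 6, a threshold at or below 1/6 lets a single active neighbor trigger adoption, while a threshold just above it requires two active neighbors, which is usually enough to stop the cascade from going global.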
## Exercise
Think about a network that you are familiar with. Can you identify any phase transitions that occur in this network? What factors contribute to these phase transitions?
### Solution
In a social media network, a phase transition may occur when a certain number of individuals start sharing a particular post or hashtag. Initially, the post may have limited visibility and reach. However, once it reaches a critical number of shares or likes, it can rapidly spread through the network, reaching a large audience. Factors that contribute to this phase transition include the size of the network, the engagement of influential users, and the relevance of the content to the network's interests.
# 5.4. Case Studies of Network Dynamics
1. Case Study: Spread of a Viral Disease
In this case study, we will examine the spread of a viral disease through a social network. We will analyze how the structure of the network, the connectivity of individuals, and the behavior of infected individuals influence the spread of the disease. By studying the dynamics of the network, we can gain insights into how to control and prevent the spread of the disease.
2. Case Study: Financial Contagion
In this case study, we will investigate the phenomenon of financial contagion, where a financial crisis in one country or institution spreads to others. We will analyze the interconnectedness of financial networks, the transmission channels of contagion, and the factors that contribute to the amplification or mitigation of contagion. By understanding the dynamics of financial networks, we can develop strategies to prevent and manage financial crises.
3. Case Study: Social Influence in Online Communities
In this case study, we will explore the dynamics of social influence in online communities. We will analyze how individuals influence each other's behavior and opinions through social media platforms. We will examine the role of influential users, the formation of echo chambers, and the spread of misinformation. By studying the dynamics of online communities, we can develop strategies to promote positive behavior and mitigate the negative effects of social influence.
4. Case Study: Transportation Network Optimization
In this case study, we will examine the dynamics of transportation networks and the optimization of traffic flow. We will analyze how changes in the network, such as the addition of new roads or the implementation of traffic management strategies, impact the efficiency and reliability of transportation systems. By understanding the dynamics of transportation networks, we can improve the design and operation of transportation systems to reduce congestion and improve travel times.
These case studies provide real-world examples of network dynamics and the practical applications of network analysis. By studying these examples, you will gain a deeper understanding of how networks evolve and how their dynamics impact various phenomena.
# 6. Diffusion in Multiplex Networks
Diffusion in multiplex networks refers to the spread of information, behaviors, or phenomena through multiple layers of connections. The presence of multiple layers in a network can affect the dynamics of diffusion and lead to different patterns of spread compared to single-layer networks.
To understand diffusion in multiplex networks, we need to consider how information or behaviors can propagate through each layer of the network and how these layers interact with each other. The interactions between layers can either facilitate or hinder the spread of diffusion, depending on the structure and dynamics of the multiplex network.
In the following sections, we will explore different aspects of diffusion in multiplex networks, including the understanding of multiplex networks, diffusion processes in multiplex networks, influence and contagion in multiplex networks, and applications of multiplex networks in diffusion analysis. By studying these topics, you will gain a comprehensive understanding of how diffusion operates in complex, interconnected networks.
# 6.1. Understanding Multiplex Networks
Multiplex networks are networks that consist of multiple layers or types of connections between nodes. Each layer represents a different type of relationship, interaction, or mode of communication between nodes. These layers can be represented by different types of edges, such as social ties, communication channels, or transportation routes.
Multiplex networks can capture the complexity and heterogeneity of real-world systems, where nodes interact with each other through multiple channels or modes. For example, in a social network, individuals may have different types of relationships with each other, such as friendship, family ties, or professional connections. These different types of relationships can be represented as different layers in a multiplex network.
Understanding multiplex networks involves studying the structure and dynamics of each layer, as well as the interactions between layers. The structure of each layer can be characterized by various network measures, such as degree distribution, clustering coefficient, or centrality measures. The dynamics of each layer can be analyzed using diffusion models or other dynamical processes.
The interactions between layers can be characterized by the presence of inter-layer edges, which connect nodes across different layers. These inter-layer edges can represent relationships or interactions that span multiple layers, such as individuals who have connections in both social and professional networks.
By studying multiplex networks, we can gain insights into the complex patterns of interactions and dynamics that occur in real-world systems. This understanding can help us analyze and predict the spread of information, behaviors, or phenomena in multiplex networks.
# 6.2. Diffusion in Multiplex Networks
Diffusion in multiplex networks refers to the spread of information, behaviors, or phenomena through multiple layers of connections. The presence of multiple layers in a network can affect the dynamics of diffusion and lead to different patterns of spread compared to single-layer networks.
In multiplex networks, diffusion can occur independently within each layer or propagate across layers through inter-layer connections. The dynamics of diffusion in each layer can be influenced by the structure and characteristics of that layer, such as the connectivity of nodes, the strength of ties, or the presence of influential nodes.
The interactions between layers can either facilitate or hinder diffusion. For example, if a behavior or piece of information spreads rapidly in one layer, it can influence the spread in other layers through inter-layer connections. On the other hand, if there are barriers or constraints between layers, diffusion may be limited or restricted.
To study diffusion in multiplex networks, we can adapt existing diffusion models, such as the independent cascade model or the linear threshold model, to account for the presence of multiple layers. These models can capture the dynamics of diffusion within each layer and the interactions between layers.
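As one sketch of such an adaptation, the code below runs an independent-cascade-style process over two layers that share the same node set, with a separate transmission probability per layer. The layer graphs and probabilities are assumptions chosen only to illustrate the idea.

```python
import random
import networkx as nx

def multiplex_cascade(layers, seeds, probs, rng=random):
    """Independent-cascade-style spread over layers that share one node set.

    layers: dict mapping layer name -> nx.Graph on the same nodes
    probs:  dict mapping layer name -> per-edge transmission probability
    """
    active, frontier = set(seeds), set(seeds)
    while frontier:
        next_frontier = set()
        for node in frontier:
            for name, layer in layers.items():
                for nbr in layer.neighbors(node):
                    if nbr not in active and rng.random() < probs[name]:
                        next_frontier.add(nbr)
        active |= next_frontier
        frontier = next_frontier
    return active

random.seed(7)
social = nx.erdos_renyi_graph(50, 0.08, seed=1)   # e.g., a friendship layer
work = nx.erdos_renyi_graph(50, 0.04, seed=2)     # e.g., a professional layer
adopters = multiplex_cascade({"social": social, "work": work},
                             seeds=[0],
                             probs={"social": 0.15, "work": 0.05})
print(f"{len(adopters)} of 50 nodes adopted")
```

Because an active node can transmit through either layer, the combined process can reach nodes that would remain untouched in a single-layer simulation.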
By studying diffusion in multiplex networks, we can gain insights into how information or behaviors spread through complex, interconnected systems. This understanding can help us design strategies to promote or control diffusion in multiplex networks.
# 6.3. Influence and Contagion in Multiplex Networks
Influence and contagion are important phenomena in the spread of information or behaviors in multiplex networks. Influence refers to the ability of a node to affect the behavior or opinions of other nodes, while contagion refers to the spread of a behavior or phenomenon through social or network connections.
In multiplex networks, influence and contagion can occur within each layer and propagate across layers. The structure and dynamics of each layer can influence the extent and speed of influence and contagion. For example, nodes with high centrality or connectivity in one layer may have a greater influence on other nodes in that layer and in other layers.
The interactions between layers can also affect influence and contagion. If there are strong inter-layer connections, influence or contagion in one layer can spread to other layers, amplifying or modifying the spread. On the other hand, if there are weak or limited inter-layer connections, influence or contagion may be confined to specific layers.
To study influence and contagion in multiplex networks, we can analyze network measures and metrics that capture the influence or contagion potential of nodes or layers. These measures can include centrality measures, such as degree centrality or betweenness centrality, or diffusion measures, such as the number of infected nodes or the speed of spread.
By understanding the dynamics of influence and contagion in multiplex networks, we can develop strategies to promote positive behaviors, control the spread of negative behaviors, or identify influential nodes or layers for targeted interventions.
# 6.4. Applications of Multiplex Networks in Diffusion Analysis
Multiplex networks have various applications in the analysis of diffusion processes. The presence of multiple layers in a network can provide additional information and insights into the dynamics of diffusion and the factors that influence its spread.
One application of multiplex networks in diffusion analysis is the study of information or behavior cascades. Cascades refer to the spread of information or behaviors through a network, where the adoption of a behavior by one node influences the adoption by neighboring nodes. Multiplex networks can capture the different channels or modes through which cascades occur, allowing us to analyze the interplay between different layers in the spread of cascades.
Another application is the identification of influential nodes or layers in diffusion processes. Influential nodes are nodes that have a significant impact on the spread of information or behaviors, while influential layers are layers that play a crucial role in the propagation of diffusion. By analyzing the structure and dynamics of multiplex networks, we can identify these influential nodes or layers and develop strategies to leverage or control their influence.
Multiplex networks can also be used to study the resilience or vulnerability of diffusion processes. The presence of multiple layers can provide redundancy or alternative pathways for diffusion, making the process more robust to disruptions or failures. On the other hand, the interactions between layers can also create dependencies or vulnerabilities, where the failure of one layer can disrupt the entire diffusion process. By analyzing the structure and dynamics of multiplex networks, we can assess the resilience or vulnerability of diffusion processes and develop strategies to mitigate risks.
Overall, multiplex networks provide a powerful framework for analyzing and understanding diffusion processes in complex, interconnected systems. By studying these networks, we can gain insights into the dynamics of diffusion, identify influential nodes or layers, and develop strategies to promote or control the spread of information or behaviors.
# 7. Diffusion in Dynamic Networks
Dynamic networks refer to networks that change over time. In many real-world scenarios, networks are not static but evolve and adapt as nodes and edges are added, removed, or modified. Understanding diffusion processes in dynamic networks is crucial for analyzing and predicting the spread of information, behaviors, or diseases in various contexts.
In this section, we will explore the dynamics of networks and how they impact diffusion processes. We will discuss different types of dynamic networks, such as time-varying networks, where the structure of the network changes over time, and temporal networks, where edges have associated time stamps. We will also examine the challenges and opportunities in studying diffusion in dynamic networks.
7.1. Time-varying Networks
Time-varying networks are networks where the connections between nodes change over time. This can occur due to various factors, such as the formation or dissolution of relationships, the activation or deactivation of nodes, or the changing intensity of interactions. Time-varying networks can capture the temporal aspects of social interactions, communication patterns, or collaboration networks.
Studying diffusion in time-varying networks requires analyzing how the changing network structure influences the spread of information or behaviors. The timing and sequence of edge activations or deactivations can affect the speed, direction, and extent of diffusion. Analyzing time-varying networks often involves tracking the evolution of the network over time, identifying patterns or trends, and understanding the underlying mechanisms driving the changes.
7.2. Diffusion Processes in Dynamic Networks
Diffusion processes in dynamic networks refer to the spread of information, behaviors, or diseases over time. These processes are influenced by the changing network structure and the interactions between nodes. Understanding the dynamics of diffusion is crucial for predicting and controlling the spread of contagion, identifying influential nodes or edges, and designing effective intervention strategies.
There are various models and algorithms for studying diffusion processes in dynamic networks. These models often incorporate time-dependent parameters, such as activation probabilities or transmission rates, to capture the temporal aspects of diffusion. Algorithms for simulating or analyzing diffusion in dynamic networks can leverage techniques from graph theory, network science, and computational modeling.
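A minimal example of such a time-dependent model is a susceptible-infected (SI) process run over a sequence of network snapshots, where only the edges present at each step can transmit. The snapshots and transmission rate below are illustrative assumptions.

```python
import random
import networkx as nx

def si_on_snapshots(snapshots, seeds, beta=0.3, rng=random):
    """SI spread over a time-ordered list of graph snapshots.

    At each step, every infected node can transmit along each of its
    current edges with probability beta.
    """
    infected = set(seeds)
    history = [len(infected)]
    for g in snapshots:
        new = set()
        for u in infected:
            if u not in g:
                continue
            for v in g.neighbors(u):
                if v not in infected and rng.random() < beta:
                    new.add(v)
        infected |= new
        history.append(len(infected))
    return infected, history

random.seed(0)
# Illustrative time-varying contact network: a fresh random snapshot per step
snapshots = [nx.erdos_renyi_graph(60, 0.05, seed=t) for t in range(10)]
infected, history = si_on_snapshots(snapshots, seeds=[0], beta=0.3)
print("infected count per step:", history)
```

The same skeleton can be extended with recovery (SIR) or with edge timestamps instead of discrete snapshots.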
7.3. Influence and Contagion in Dynamic Networks
Influence and contagion refer to the ability of nodes or edges to affect the behavior or state of their neighbors in a network. In dynamic networks, influence and contagion can vary over time as the network structure changes. Understanding how influence and contagion evolve in dynamic networks is crucial for identifying influential nodes or edges, predicting the spread of information or behaviors, and designing effective intervention strategies.
Analyzing influence and contagion in dynamic networks often involves tracking the changes in the network structure and quantifying the impact of nodes or edges on the diffusion process. Various measures and metrics, such as centrality measures or spreading efficiency, can be used to assess the influence or contagion potential of nodes or edges in dynamic networks.
7.4. Real-world Applications of Dynamic Networks in Diffusion Analysis
Dynamic networks have numerous applications in the analysis of diffusion processes in various domains. For example, in epidemiology, understanding the spread of diseases in dynamic networks can help in predicting and controlling outbreaks, designing vaccination strategies, or assessing the effectiveness of interventions. In social networks, studying the diffusion of information or behaviors in dynamic networks can provide insights into the dynamics of social influence, the formation of social norms, or the spread of innovations.
Other domains where dynamic networks play a crucial role in diffusion analysis include transportation networks, communication networks, online social networks, and biological networks. By analyzing and modeling the dynamics of these networks, researchers and practitioners can gain valuable insights into the spread of information, behaviors, or diseases, and develop strategies to promote or control diffusion processes.
# 8. Network-based Diffusion Analysis in Practice
Network-based diffusion analysis involves applying the concepts, models, and techniques of diffusion analysis to real-world networks. This section focuses on the practical aspects of conducting network-based diffusion analysis, including data collection and preparation, network visualization and analysis tools, modeling and simulation techniques, and case studies of network-based diffusion analysis.
8.1. Data Collection and Preparation
Data collection is a critical step in network-based diffusion analysis. It involves gathering data on the network structure, node attributes, and diffusion processes of interest. The choice of data collection methods depends on the specific context and research questions. Common data sources for network-based diffusion analysis include social media platforms, communication logs, surveys, and administrative records.
Data preparation involves cleaning, transforming, and organizing the collected data for analysis. This may include removing duplicates or outliers, standardizing data formats, and creating network representations. Data preparation also involves defining the variables or attributes of interest, such as node characteristics, edge weights, or diffusion outcomes.
8.2. Network Visualization and Analysis Tools
Network visualization and analysis tools are essential for exploring and interpreting network-based diffusion analysis results. These tools enable researchers to visualize the network structure, diffusion processes, and other relevant attributes. They also provide functionalities for analyzing network properties, identifying influential nodes or edges, and simulating diffusion scenarios.
There are various network visualization and analysis tools available, ranging from general-purpose software to specialized packages for specific domains. Some popular tools include Gephi, Cytoscape, NetworkX, and igraph. These tools often provide user-friendly interfaces, customizable visualizations, and a wide range of network analysis algorithms.
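As a small example of the programmatic route, the sketch below uses NetworkX and Matplotlib to draw a built-in toy social network with node sizes scaled by degree centrality; the layout, colors, and scaling factor are arbitrary choices.

```python
import matplotlib.pyplot as plt
import networkx as nx

G = nx.karate_club_graph()                      # built-in toy social network
centrality = nx.degree_centrality(G)

pos = nx.spring_layout(G, seed=42)              # force-directed layout
nx.draw_networkx_edges(G, pos, alpha=0.3)
nx.draw_networkx_nodes(G, pos,
                       node_size=[2000 * centrality[n] for n in G],
                       node_color="steelblue")
nx.draw_networkx_labels(G, pos, font_size=8)
plt.axis("off")
plt.show()
```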
8.3. Modeling and Simulation Techniques
Modeling and simulation techniques are fundamental to network-based diffusion analysis. They allow researchers to simulate and analyze the spread of information, behaviors, or diseases in networks. Modeling involves constructing mathematical or computational models that capture the key dynamics and mechanisms of diffusion. Simulation involves running these models to generate synthetic or hypothetical diffusion scenarios.
There are various modeling and simulation techniques available for network-based diffusion analysis. These include agent-based models, epidemic models, influence maximization models, and game-theoretic models. Each technique has its strengths and limitations, and the choice depends on the specific research questions and context.
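As a minimal illustration of one such technique, the sketch below pairs Monte Carlo simulation of the Independent Cascade Model with a greedy seed-selection loop, a much-simplified version of influence maximization. The graph, transmission probability, number of runs, and seed budget are illustrative assumptions, and the plain greedy loop is far slower than the optimized algorithms used in practice.

```python
import random
import networkx as nx

def cascade_size(graph, seeds, p, runs=20, rng=random):
    """Average Independent Cascade size over several Monte Carlo runs."""
    total = 0
    for _ in range(runs):
        active, frontier = set(seeds), set(seeds)
        while frontier:
            nxt = {v for u in frontier for v in graph.neighbors(u)
                   if v not in active and rng.random() < p}
            active |= nxt
            frontier = nxt
        total += len(active)
    return total / runs

def greedy_seed_selection(graph, budget, p=0.1):
    """Greedily add the node that most increases the expected cascade size."""
    seeds = []
    for _ in range(budget):
        best = max((n for n in graph if n not in seeds),
                   key=lambda n: cascade_size(graph, seeds + [n], p))
        seeds.append(best)
    return seeds

random.seed(1)
G = nx.barabasi_albert_graph(200, 2, seed=1)
print("chosen seeds:", greedy_seed_selection(G, budget=3))
```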
8.4. Case Studies of Network-based Diffusion Analysis
Case studies provide real-world examples of network-based diffusion analysis in action. They demonstrate how the concepts, models, and techniques discussed in this textbook can be applied to analyze and understand diffusion processes in different domains. Case studies often involve analyzing real datasets, conducting simulations, and interpreting the results in the context of the research questions.
Some examples of case studies in network-based diffusion analysis include studying the spread of information on social media platforms, analyzing the diffusion of innovations in organizational networks, or predicting the spread of diseases in contact networks. These case studies highlight the practical implications and insights gained from network-based diffusion analysis.
# 9. Ethical Considerations in Network Analysis
Ethical considerations play a crucial role in network analysis, including network-based diffusion analysis. Network analysis often involves collecting, analyzing, and interpreting data that involve individuals, communities, or organizations. It is essential to conduct network analysis ethically and responsibly to protect the privacy, confidentiality, and well-being of the individuals or groups involved.
9.1. Privacy and Confidentiality
Privacy and confidentiality are critical ethical considerations in network analysis. Network data may contain sensitive or personal information about individuals, such as social connections, communication patterns, or health records. It is essential to handle and store network data securely, anonymize or de-identify personal information, and obtain informed consent from participants when necessary.
Researchers and practitioners should also consider the potential risks and harms associated with network analysis. This includes the possibility of re-identification, unintended disclosures, or misuse of network data. Ethical guidelines and regulations, such as data protection laws or institutional review board requirements, provide guidance on ensuring privacy and confidentiality in network analysis.
9.2. Bias and Fairness in Data Collection and Analysis
Bias and fairness are important considerations in network data collection and analysis. Network data may be subject to various biases, such as selection bias, response bias, or sampling bias. These biases can affect the validity, reliability, and generalizability of network analysis results. It is crucial to minimize biases in data collection, sampling, and analysis to ensure fairness and representativeness.
Researchers and practitioners should also be aware of potential ethical issues related to bias and fairness in network analysis. This includes the potential for discrimination, stigmatization, or unfair treatment based on network attributes or characteristics. Ethical guidelines and best practices in data collection and analysis can help address these issues and promote fairness in network analysis.
9.3. Ethical Implications of Network-based Diffusion Analysis
Network-based diffusion analysis can have ethical implications, particularly when studying the spread of information, behaviors, or diseases in networks. Diffusion processes can influence individuals' beliefs, attitudes, or behaviors, and may have broader societal or public health implications. It is essential to consider the potential consequences and impacts of diffusion analysis on individuals, communities, or society as a whole.
Ethical considerations in network-based diffusion analysis include ensuring informed consent, protecting privacy and confidentiality, minimizing harm or risks, and promoting fairness and equity. Researchers and practitioners should also consider the potential unintended consequences or negative impacts of diffusion analysis, such as amplifying misinformation, reinforcing biases, or exacerbating health disparities.
9.4. Guidelines and Best Practices
Guidelines and best practices provide practical guidance on conducting network analysis ethically and responsibly. These guidelines often cover various aspects of network analysis, including data collection, data management, analysis methods, and result interpretation. They help researchers and practitioners navigate the ethical challenges and make informed decisions throughout the network analysis process.
Some examples of guidelines and best practices in network analysis include the Association of Internet Researchers' ethical guidelines, the Network Science Society's ethical guidelines, or institutional review board requirements. These guidelines emphasize the importance of informed consent, privacy protection, data anonymization, transparency, and accountability in network analysis.
# 10. Challenges and Future Directions
10.1. Limitations of Network-based Diffusion Analysis
Network-based diffusion analysis has its limitations and assumptions. Diffusion processes are complex and influenced by various factors, including individual attributes, social dynamics, and environmental factors. Network-based diffusion analysis often simplifies or abstracts these complexities to make modeling and analysis feasible. It is essential to acknowledge and understand the limitations of network-based diffusion analysis to interpret the results accurately.
Some limitations of network-based diffusion analysis include the assumptions of homogeneity, linearity, or independence in diffusion processes. These assumptions may not hold in real-world scenarios, where diffusion is influenced by diverse factors and interactions. Network-based diffusion analysis should be complemented with other methods, such as qualitative research or experimental studies, to provide a comprehensive understanding of diffusion processes.
10.2. Emerging Trends and Technologies
Emerging trends and technologies offer new opportunities for network-based diffusion analysis. Advances in data collection, data analysis, and computational modeling enable researchers and practitioners to study diffusion processes at a larger scale, higher resolution, or finer granularity. For example, the availability of large-scale digital traces, such as social media data or mobile phone records, provides rich sources of information for studying diffusion in real-time.
Other emerging trends and technologies in network-based diffusion analysis include machine learning, natural language processing, or network embedding techniques. These techniques can enhance the analysis and prediction of diffusion processes by leveraging the power of data-driven approaches. However, it is crucial to ensure the ethical and responsible use of these technologies in network analysis.
10.3. Future Applications and Impact
Network-based diffusion analysis has the potential to impact various domains and disciplines. The insights gained from studying diffusion processes in networks can inform decision-making, policy development, and intervention strategies. For example, in public health, network-based diffusion analysis can help in designing targeted interventions, predicting disease outbreaks, or evaluating the effectiveness of public health campaigns.
Future applications of network-based diffusion analysis may include areas such as social influence, opinion dynamics, organizational change, innovation diffusion, or online misinformation. By applying network-based diffusion analysis to these domains, researchers and practitioners can address pressing societal challenges, promote positive behaviors, and mitigate negative impacts.
10.4. Collaboration and Interdisciplinary Approaches
Collaboration and interdisciplinary approaches are essential for advancing network-based diffusion analysis. Diffusion processes are complex and multifaceted, requiring expertise from various disciplines, such as network science, sociology, computer science, epidemiology, or psychology. Collaboration between researchers, practitioners, policymakers, and stakeholders can foster innovation, knowledge exchange, and the translation of research findings into practice.
Interdisciplinary approaches in network-based diffusion analysis involve integrating theories, methods, and perspectives from different disciplines. This can include combining qualitative and quantitative methods, leveraging diverse data sources, or developing hybrid models that capture both network structure and individual attributes. Interdisciplinary collaboration can enrich the analysis, interpretation, and application of network-based diffusion analysis.
# 11. Conclusion and Next Steps
In this textbook, we have explored the fundamentals of network-based diffusion analysis. We have covered various topics, including network analysis, diffusion models, influence and contagion, information spread, network dynamics, diffusion in multiplex networks, diffusion in dynamic networks, practical aspects of diffusion analysis, ethical considerations, challenges, and future directions.
By studying network-based diffusion analysis, you have gained insights into the dynamics of diffusion processes, the factors that influence their spread, and the methods for analyzing and predicting diffusion in networks. You have learned about different models, techniques, and tools for studying diffusion, and how to apply them in practice.
11.1. Recap of Key Concepts
Let's recap some of the key concepts covered in this textbook:
- Network analysis: The study of the structure, properties, and dynamics of networks.
- Diffusion models: Mathematical or computational models that capture the spread of information, behaviors, or diseases in networks.
- Influence and contagion: The ability of nodes or edges to affect the behavior or state of their neighbors in a network.
- Information spread: The spread of information or behaviors through a network, often characterized by cascades or viral processes.
- Network dynamics: The evolution and change of networks over time, including network growth, centrality evolution, or cascades.
- Multiplex networks: Networks with multiple layers or types of connections, capturing different modes or channels of interaction.
- Dynamic networks: Networks that change over time, often due to the formation or dissolution of relationships or the activation or deactivation of nodes.
- Ethical considerations: The ethical implications and responsibilities in conducting network analysis, including privacy, fairness, and avoiding harm.
- Challenges and future directions: The limitations, emerging trends, and opportunities in network-based diffusion analysis.
11.2. Further Resources for Network-based Diffusion Analysis
To continue your exploration of network-based diffusion analysis, here are some recommended resources:
- Books:
  - "Networks, Crowds, and Markets: Reasoning About a Highly Connected World" by David Easley and Jon Kleinberg
  - "Diffusion of Innovations" by Everett M. Rogers
  - "Social Network Analysis: Methods and Applications" by Stanley Wasserman and Katherine Faust
- Research papers and articles:
  - "The Spread of Behavior in an Online Social Network Experiment" by James H. Fowler and Nicholas A. Christakis
  - "The Dynamics of Protest Recruitment through an Online Network" by Sandra González-Bailón, Javier Borge-Holthoefer, Alejandro Rivero, and Yamir Moreno
  - "Contagion Processes in Complex Networks" by Romualdo Pastor-Satorras and Alessandro Vespignani
- Online courses and tutorials:
  - Coursera: "Social Network Analysis" by Lada Adamic
  - edX: "Network Science" by Albert-László Barabási
  - YouTube: "Introduction to Network Science" by Fil Menczer
11.3. Applying Network-based Diffusion Analysis in Different Fields
Network-based diffusion analysis has applications in various fields and domains. Here are some examples:
- Public health: Analyzing the spread of diseases, designing vaccination strategies, or evaluating the effectiveness of public health interventions.
- Marketing and advertising: Studying the diffusion of products or ideas, identifying influential individuals or communities, or designing viral marketing campaigns.
- Social media analysis: Understanding the spread of information or behaviors on social media platforms, detecting online communities or echo chambers, or combating misinformation.
- Organizational behavior: Analyzing the diffusion of innovations or changes in organizations, identifying key opinion leaders or change agents, or designing interventions for organizational change.
- Policy and decision-making: Informing policy development, predicting the impact of interventions, or evaluating the effectiveness of social programs.
11.4. Contributing to the Advancement of Network-based Diffusion Analysis
Network-based diffusion analysis is a rapidly evolving field with many open research questions and opportunities for contribution. Here are some ways you can contribute to the advancement of network-based diffusion analysis:
- Conduct empirical studies or experiments to validate and refine diffusion models.
- Develop new models or algorithms for analyzing diffusion in specific contexts or domains.
- Explore the interplay between network structure, individual attributes, and diffusion dynamics.
- Apply network-based diffusion analysis to real-world problems and evaluate its impact.
- Collaborate with researchers from different disciplines to tackle complex diffusion challenges.
- Share your findings, insights, and methodologies through research papers, presentations, or open-source software.
By actively engaging in research, collaboration, and knowledge dissemination, you can contribute to the advancement of network-based diffusion analysis and make a meaningful impact in understanding and shaping diffusion processes in networks.
# 8. Network-based Diffusion Analysis in Practice
8.1. Data Collection and Preparation
Before conducting a diffusion analysis, it is important to collect and prepare the data. This involves identifying the relevant network data and gathering information on the diffusion process. Here are some key considerations:
- Identify the network: Determine the network of interest, whether it is a social network, communication network, or any other type of network relevant to the diffusion process.
- Gather network data: Collect data on the nodes (individuals or entities) and the edges (relationships or interactions) that form the network. This can be done through surveys, interviews, or data scraping from online platforms.
- Define the diffusion process: Specify the behavior or information that is spreading through the network. This could be the adoption of a new technology, the spread of a rumor, or the diffusion of a social norm.
- Collect diffusion data: Gather data on the diffusion process, such as the timing of adoptions or the sequence of events. This can be obtained through surveys, observations, or data mining techniques.
Once the data is collected, it needs to be prepared for analysis. This may involve cleaning the data, transforming it into a suitable format, and aggregating or summarizing the information. It is important to ensure the data is accurate, complete, and representative of the diffusion process.
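To make this step concrete, here is a minimal sketch (in Python, using pandas and NetworkX) of loading network and diffusion data into a single graph object. The file names and column names (`edges.csv`, `adoptions.csv`, `source`, `target`, `node`, `adoption_time`) are illustrative assumptions, not a prescribed format.

```python
import pandas as pd
import networkx as nx

# Hypothetical input files:
#   edges.csv     -> columns: source, target
#   adoptions.csv -> columns: node, adoption_time
edges = pd.read_csv("edges.csv")
adoptions = pd.read_csv("adoptions.csv")

# Basic cleaning: drop incomplete records and duplicate edges
edges = edges.dropna().drop_duplicates(subset=["source", "target"])
adoptions = adoptions.dropna().drop_duplicates(subset=["node"])

# Build the network and attach adoption times as node attributes
G = nx.from_pandas_edgelist(edges, source="source", target="target")
nx.set_node_attributes(
    G,
    adoptions.set_index("node")["adoption_time"].to_dict(),
    name="adoption_time",
)

print(G.number_of_nodes(), "nodes,", G.number_of_edges(), "edges")
```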
8.2. Network Visualization and Analysis Tools
To analyze and visualize the network data, various tools and software can be used. These tools provide functionalities for exploring the network structure, identifying key nodes or communities, and visualizing the diffusion process. Here are some commonly used tools:
- Gephi: An open-source software for visualizing and exploring networks. It provides various layout algorithms, statistical measures, and interactive visualization options.
- NetworkX: A Python library for the creation, manipulation, and analysis of the structure, dynamics, and functions of complex networks. It includes algorithms for network analysis, community detection, and diffusion modeling.
- Cytoscape: A platform for visualizing and analyzing complex networks. It offers a wide range of network analysis and visualization features, including layout algorithms, clustering methods, and network integration.
- R packages: R, a statistical programming language, has several packages for network analysis and visualization, such as igraph, statnet, and visNetwork.
These tools can help researchers and analysts gain insights into the network structure, identify influential nodes or communities, and visualize the diffusion process in an intuitive and interactive manner.
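As a small illustration of the kind of exploration these tools support, the following NetworkX sketch computes a few basic structural summaries and draws a built-in example network; the graph and the numbers it produces are purely illustrative.

```python
import matplotlib.pyplot as plt
import networkx as nx

# Illustrative built-in example network
G = nx.karate_club_graph()

# Basic structural summaries
print("Nodes:", G.number_of_nodes())
print("Edges:", G.number_of_edges())
print("Density:", round(nx.density(G), 3))
print("Connected:", nx.is_connected(G))

# Simple visualization: node size proportional to degree
pos = nx.spring_layout(G, seed=42)
sizes = [100 * G.degree(n) for n in G]
nx.draw(G, pos, node_size=sizes, with_labels=True, font_size=8)
plt.savefig("network.png")
```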
8.3. Modeling and Simulation Techniques
Modeling and simulation techniques are essential for studying diffusion processes in networks. They allow researchers to understand the dynamics of the diffusion, predict its spread, and evaluate the impact of different factors. Here are some commonly used techniques:
- Agent-based models: These models simulate the behavior of individual agents (nodes) and their interactions in a network. They capture the decision-making process, the influence of neighbors, and the spread of information or behaviors.
- Stochastic models: These models incorporate randomness and uncertainty into the diffusion process. They use probability distributions to model the likelihood of adoption or transmission, taking into account factors such as node attributes, network structure, and external influences.
- Epidemic models: These models are commonly used to study the spread of diseases or infections in networks. They simulate the transmission dynamics, the susceptibility of individuals, and the effectiveness of interventions.
- Game-theoretic models: These models analyze the strategic interactions between nodes in a network. They capture the incentives, motivations, and decision-making processes that influence the diffusion process.
By using modeling and simulation techniques, researchers can gain insights into the mechanisms underlying diffusion, test different scenarios and interventions, and make predictions about the future spread of information or behaviors.
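A minimal sketch of one such technique is shown below: a stochastic, SI-style spread simulated on a random NetworkX graph. The network, transmission probability, and seed node are arbitrary illustrative choices, not values from any particular study.

```python
import random
import networkx as nx

def simulate_si(G, seeds, p_transmit=0.1, steps=20, seed=0):
    """Simple susceptible-infected spread: at each step, every infected node
    infects each susceptible neighbor independently with probability p_transmit."""
    rng = random.Random(seed)
    infected = set(seeds)
    history = [len(infected)]
    for _ in range(steps):
        new_infections = set()
        for node in infected:
            for neighbor in G.neighbors(node):
                if neighbor not in infected and rng.random() < p_transmit:
                    new_infections.add(neighbor)
        infected |= new_infections
        history.append(len(infected))
    return history

# Illustrative random network and a single seed node
G = nx.erdos_renyi_graph(n=200, p=0.03, seed=1)
history = simulate_si(G, seeds=[0], p_transmit=0.1, steps=20)
print("Infected count over time:", history)
```

Running such a simulation many times with different parameters is one simple way to explore how transmission probability or seed placement changes the speed and reach of diffusion.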
8.4. Case Studies of Network-based Diffusion Analysis
To illustrate the application of network-based diffusion analysis, let's explore some case studies:
- Case study 1: Social media influence
- Network: Twitter network of users
- Diffusion process: Spread of a hashtag campaign
- Analysis: Identify influential users, measure the reach and impact of the campaign, and evaluate the effectiveness of different strategies.
- Case study 2: Product adoption
- Network: Customer network of a company
- Diffusion process: Adoption of a new product
- Analysis: Identify key opinion leaders, predict the spread of adoption, and evaluate the impact of marketing strategies.
- Case study 3: Public health intervention
- Network: Contact network of individuals
- Diffusion process: Spread of a vaccination campaign
- Analysis: Identify influential individuals, model the spread of vaccination, and evaluate the effectiveness of different intervention strategies.
These case studies demonstrate how network-based diffusion analysis can be applied in different domains to gain insights, inform decision-making, and design effective interventions.
Now that you have learned about the practical aspects of network-based diffusion analysis, you are ready to apply these techniques in your own research or professional work. Good luck!
# 8.1. Data Collection and Preparation
Before conducting a diffusion analysis, it is important to collect and prepare the data. This involves identifying the relevant network data and gathering information on the diffusion process. Here are some key considerations:
- Identify the network: Determine the network of interest, whether it is a social network, communication network, or any other type of network relevant to the diffusion process.
- Gather network data: Collect data on the nodes (individuals or entities) and the edges (relationships or interactions) that form the network. This can be done through surveys, interviews, or data scraping from online platforms.
- Define the diffusion process: Specify the behavior or information that is spreading through the network. This could be the adoption of a new technology, the spread of a rumor, or the diffusion of a social norm.
- Collect diffusion data: Gather data on the diffusion process, such as the timing of adoptions or the sequence of events. This can be obtained through surveys, observations, or data mining techniques.
Once the data is collected, it needs to be prepared for analysis. This may involve cleaning the data, transforming it into a suitable format, and aggregating or summarizing the information. It is important to ensure the data is accurate, complete, and representative of the diffusion process.
For example, let's say we are interested in studying the diffusion of a new mobile app in a social network. We would first identify the social network of users who are connected to each other through friendships or interactions. We would then gather data on the users and their connections, such as their names, demographic information, and the strength of their relationships.
Next, we would define the diffusion process as the adoption of the new mobile app. We would collect data on the timing of app installations or the sequence of events related to the app's spread, such as user recommendations or promotional campaigns.
Finally, we would clean and transform the data into a format suitable for analysis. This may involve removing duplicate or incomplete records, standardizing variables, and aggregating data at different levels of analysis, such as by user, by time period, or by network cluster.
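As an illustration of this preparation step, the following pandas sketch cleans a hypothetical table of app installations and aggregates it into a weekly adoption curve; the file name and column names are invented for the example.

```python
import pandas as pd

# Hypothetical raw diffusion data: one row per app installation
installs = pd.read_csv("installs.csv", parse_dates=["install_date"])

# Cleaning: drop incomplete records and duplicate users
installs = installs.dropna(subset=["user_id", "install_date"])
installs = installs.drop_duplicates(subset=["user_id"], keep="first")

# Aggregation: number of new adopters per week (the adoption curve)
weekly_adoptions = (
    installs.set_index("install_date")
            .resample("W")["user_id"]
            .nunique()
)
print(weekly_adoptions.head())
```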
## Exercise
Think about a diffusion process that you are interested in studying. Identify the relevant network and the data that would need to be collected for analysis. Describe the diffusion process and the type of data that would be relevant.
### Solution
For example, if you are interested in studying the diffusion of a new health behavior in a school network, you would need to collect data on the students and their social connections. The diffusion process could be the adoption of the health behavior, such as eating a healthy lunch or participating in physical activity. The relevant data would include the students' names, demographic information, friendship connections, and information on their health behaviors, such as surveys or observations.
# 8.2. Network Visualization and Analysis Tools
Once the data is collected and prepared, it can be visualized and analyzed using various tools and software. Network visualization tools allow you to represent the network data graphically, making it easier to understand and explore the structure of the network. Network analysis tools provide algorithms and metrics for analyzing the network, such as identifying influential nodes or measuring network centrality.
There are several popular network visualization and analysis tools available, each with its own features and capabilities. Some of these tools include:
- Gephi: Gephi is a widely used open-source network visualization and analysis software. It provides a user-friendly interface for creating visualizations and offers a range of analysis algorithms, such as community detection and centrality measures.
- Cytoscape: Cytoscape is another popular open-source platform for visualizing and analyzing networks. It supports a wide range of network formats and offers a variety of plugins for advanced analysis and visualization.
- NetworkX: NetworkX is a Python library for the creation, manipulation, and study of the structure, dynamics, and functions of complex networks. It provides a wide range of algorithms and functions for network analysis, such as clustering and path finding.
- igraph: igraph is a popular open-source library for network analysis and visualization. It supports various programming languages, including Python, R, and C/C++, and provides a range of algorithms for community detection, centrality measures, and network generation.
These tools can be used to visualize and analyze the network data collected in the previous step. They allow you to explore the network structure, identify important nodes or clusters, and gain insights into the diffusion process.
For example, let's say we have collected data on the social network of a group of students in a school. We can use a network visualization tool like Gephi or Cytoscape to create a visual representation of the network, with nodes representing students and edges representing their social connections. This visualization can help us understand the overall structure of the network, such as the presence of clusters or influential nodes.
Once the network is visualized, we can use network analysis tools like NetworkX or igraph to analyze the network. We can calculate centrality measures, such as degree centrality or betweenness centrality, to identify the most important nodes in the network. We can also perform community detection algorithms to identify groups or clusters of students with similar social connections.
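A short sketch of this kind of analysis with NetworkX is shown below, using a built-in example graph as a stand-in for the student network; the specific functions used here are one reasonable choice among several.

```python
import networkx as nx

# Illustrative stand-in for a student friendship network
G = nx.les_miserables_graph()

# Centrality measures mentioned above
degree = nx.degree_centrality(G)
betweenness = nx.betweenness_centrality(G)
print("Highest degree centrality:", max(degree, key=degree.get))
print("Highest betweenness centrality:", max(betweenness, key=betweenness.get))

# Community detection via greedy modularity maximization
communities = list(nx.algorithms.community.greedy_modularity_communities(G))
print("Number of communities found:", len(communities))
for i, community in enumerate(communities[:3]):
    print(f"Community {i}: {sorted(community)[:5]} ...")
```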
## Exercise
Research and identify a network visualization or analysis tool that you would like to use for your own network-based diffusion analysis. Describe the features and capabilities of the tool and explain why you think it would be useful for your analysis.
### Solution
For example, if you are interested in analyzing a large-scale social network, you might consider using Gephi. Gephi is a powerful network visualization and analysis tool that supports the exploration of large networks and provides a wide range of analysis algorithms. It has an intuitive user interface and allows for interactive exploration of the network data. Gephi's layout algorithms can help reveal the underlying structure of the network, and its community detection algorithms can identify groups or clusters of nodes. Overall, Gephi would be a useful tool for visualizing and analyzing complex social networks.
# 8.3. Modeling and Simulation Techniques
Modeling and simulation techniques are valuable tools for studying network-based diffusion analysis. These techniques allow us to simulate the spread of information or influence through a network and observe how it evolves over time. By creating models that capture the key dynamics of the diffusion process, we can gain insights into the factors that drive diffusion and make predictions about its future behavior.
There are several modeling and simulation techniques that can be used in network-based diffusion analysis:
- Agent-based modeling: Agent-based modeling involves representing individual agents in a network and specifying their behaviors and interactions. Each agent follows a set of rules or algorithms, and their interactions with other agents drive the diffusion process. This technique allows for the exploration of complex dynamics and the emergence of patterns in the diffusion process.
- Cellular automata: Cellular automata are discrete models that divide a space into cells and define rules for the state of each cell based on the states of its neighbors. In the context of network-based diffusion analysis, cellular automata can be used to model the spread of information or influence through a network by updating the state of each node based on the states of its neighboring nodes.
- Network-based models: Network-based models explicitly represent the structure of the network and its connections. These models can capture the influence of network topology on the diffusion process and allow for the analysis of network properties, such as centrality or community structure, on diffusion dynamics.
- Stochastic models: Stochastic models introduce randomness into the diffusion process. They capture the inherent uncertainty and variability in the diffusion process and allow for the exploration of different possible outcomes. Stochastic models can be used to estimate the probability of different diffusion scenarios and assess the robustness of the diffusion process to random events.
These modeling and simulation techniques can be used to study various aspects of network-based diffusion analysis, such as the spread of information, the adoption of innovations, or the contagion of behaviors. They provide a powerful framework for understanding and predicting the dynamics of diffusion in complex networks.
For example, let's say we want to study the spread of a new technology through a social network. We can use agent-based modeling to represent individual users in the network and simulate their adoption of the technology. Each user can have different characteristics, such as their susceptibility to influence or their level of awareness about the technology. By specifying rules for how users interact and influence each other, we can simulate the diffusion process and observe how the technology spreads through the network over time.
In another example, we can use cellular automata to model the spread of a rumor through a network of individuals. Each individual can be in one of two states - they either know the rumor or they don't. The state of each individual is updated based on the states of their neighbors, following a set of predefined rules. By simulating the diffusion process using cellular automata, we can observe how the rumor spreads through the network and identify key factors that influence its spread.
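The sketch below gives one possible minimal implementation of such an adoption process, using a simple fractional-threshold rule for each agent; the network, thresholds, and seed set are illustrative assumptions rather than a calibrated model.

```python
import random
import networkx as nx

def threshold_adoption(G, seeds, thresholds, max_rounds=30):
    """Each agent adopts once the fraction of adopting neighbors
    reaches its personal threshold (a simple linear threshold model)."""
    adopted = set(seeds)
    for _ in range(max_rounds):
        newly = set()
        for node in G:
            if node in adopted:
                continue
            neighbors = list(G.neighbors(node))
            if not neighbors:
                continue
            frac = sum(n in adopted for n in neighbors) / len(neighbors)
            if frac >= thresholds[node]:
                newly.add(node)
        if not newly:
            break
        adopted |= newly
    return adopted

rng = random.Random(42)
G = nx.watts_strogatz_graph(n=100, k=6, p=0.1, seed=42)   # illustrative network
thresholds = {node: rng.uniform(0.1, 0.5) for node in G}  # illustrative thresholds
final_adopters = threshold_adoption(G, seeds=[0, 1, 2], thresholds=thresholds)
print(f"{len(final_adopters)} of {G.number_of_nodes()} agents adopted")
```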
## Exercise
Choose one of the modeling and simulation techniques described above (agent-based modeling, cellular automata, network-based models, or stochastic models) and explain how it can be used to study a specific diffusion process of your choice. Describe the key components of the model and how it captures the dynamics of the diffusion process.
### Solution
For example, if you are interested in studying the adoption of a new product in a social network, you can use agent-based modeling. In this model, each individual in the network is represented as an agent with certain characteristics, such as their awareness of the product or their likelihood of adopting it. The agents interact with each other and influence each other's adoption decisions based on their characteristics and the characteristics of their neighbors. By simulating the diffusion process using agent-based modeling, you can observe how the adoption of the product spreads through the network and identify the key factors that drive its diffusion, such as the influence of influential individuals or the presence of social ties between individuals.
# 8.4. Case Studies of Network-based Diffusion Analysis
1. Case Study: Social Media and Information Diffusion
- Description: This case study focuses on the spread of information through social media platforms, such as Twitter or Facebook. By analyzing the network structure and user interactions on these platforms, we can gain insights into the mechanisms that drive the diffusion of information and identify influential users or communities.
- Key concepts: Network centrality, community detection, information cascades.
- Methodology: Collect data from social media platforms, construct the network, analyze network properties, simulate information diffusion processes, and evaluate the effectiveness of different strategies for maximizing the spread of information.
2. Case Study: Adoption of Renewable Energy Technologies
- Description: This case study examines the diffusion of renewable energy technologies, such as solar panels or wind turbines, in a community or region. By analyzing the social network of individuals or organizations involved in the adoption process, we can identify key influencers or opinion leaders and understand the factors that facilitate or hinder the adoption of these technologies.
- Key concepts: Network influence, social contagion, innovation diffusion theory.
- Methodology: Collect data on the social network of individuals or organizations involved in the adoption process, analyze network properties, identify influential nodes, simulate the adoption process using agent-based modeling, and evaluate the impact of different interventions or policies on the diffusion of renewable energy technologies.
3. Case Study: Spread of Infectious Diseases
- Description: This case study focuses on the spread of infectious diseases, such as COVID-19 or influenza, through a population. By analyzing the contact network between individuals and the transmission dynamics of the disease, we can assess the effectiveness of different control strategies, such as vaccination or social distancing measures.
- Key concepts: Network epidemiology, contact networks, epidemic models.
- Methodology: Collect data on the contact network between individuals, analyze network properties, simulate the spread of the disease using epidemic models, evaluate the impact of different control strategies on the disease spread, and make predictions about future outbreaks.
These case studies demonstrate the versatility and applicability of network-based diffusion analysis in various domains. By understanding the underlying network structure and dynamics, we can gain valuable insights into the diffusion processes and inform decision-making in fields such as public health, marketing, or social policy.
# 9. Ethical Considerations in Network Analysis
9.1. Privacy and Confidentiality
- Description: Privacy and confidentiality are essential considerations when working with network data. Researchers must ensure that the identities and personal information of individuals are protected and that data is stored and transmitted securely.
- Key concepts: Anonymization, informed consent, data protection.
- Guidelines: Obtain informed consent from participants, anonymize data to protect identities, implement data protection measures, and adhere to relevant privacy regulations.
9.2. Bias and Fairness in Data Collection and Analysis
- Description: Bias can arise in network analysis due to various factors, such as sampling methods, data collection techniques, or algorithmic biases. It is important to be aware of these biases and strive for fairness and inclusivity in data collection and analysis.
- Key concepts: Sampling bias, algorithmic bias, fairness.
- Guidelines: Use representative sampling methods, validate and test algorithms for biases, consider the potential impact of biases on the interpretation of results, and ensure fairness in data collection and analysis.
9.3. Ethical Implications of Network-based Diffusion Analysis
- Description: Network-based diffusion analysis can have ethical implications, particularly when studying the spread of information, behaviors, or influence. It is important to consider the potential consequences of research findings and ensure that they are used responsibly.
- Key concepts: Ethical implications, responsible use of findings.
- Guidelines: Consider the potential impact of research findings on individuals or communities, communicate findings responsibly, and engage in dialogue with stakeholders to address ethical concerns.
9.4. Guidelines and Best Practices
- Description: Various organizations and professional associations have developed guidelines and best practices for conducting ethical research in network analysis. These guidelines provide researchers with a framework for ensuring ethical conduct throughout the research process.
- Key concepts: Ethical guidelines, best practices.
- Guidelines: Familiarize yourself with relevant ethical guidelines, such as those provided by professional associations or institutional review boards, seek ethical review and approval for research projects, and adhere to the principles of transparency, integrity, and respect for individuals' rights.
By considering these ethical considerations and following best practices, researchers can conduct network analysis in a responsible and ethical manner, ensuring the protection of individuals' rights and privacy while generating valuable insights.
# 9.1. Privacy and Confidentiality
Privacy and confidentiality are essential considerations when working with network data. As researchers, we must ensure that the identities and personal information of individuals are protected and that data is stored and transmitted securely.
Anonymization is an important technique to protect the identities of individuals in network data. By removing or encrypting personally identifiable information, such as names or social security numbers, we can ensure that individuals cannot be identified from the data.
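As one simple illustration (not a complete anonymization strategy), the sketch below replaces node identifiers with salted hashes before analysis or sharing. Note that network structure alone can still pose re-identification risks, so a step like this does not remove the need for broader safeguards.

```python
import hashlib
import networkx as nx

def pseudonymize(G, salt="replace-with-a-secret-salt"):
    """Return a copy of G with node names replaced by salted hashes,
    preserving structure while removing direct identifiers."""
    mapping = {
        node: hashlib.sha256((salt + str(node)).encode()).hexdigest()[:10]
        for node in G
    }
    return nx.relabel_nodes(G, mapping, copy=True)

G = nx.Graph([("Alice", "Bob"), ("Bob", "Carol")])   # illustrative data
G_anon = pseudonymize(G)
print(list(G_anon.edges()))
```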
Obtaining informed consent from participants is another crucial step in protecting privacy. Participants should be fully informed about the purpose of the research, how their data will be used, and any potential risks or benefits. They should have the opportunity to voluntarily consent to participate and have the option to withdraw their consent at any time.
Data protection is also a key aspect of privacy and confidentiality. Researchers should implement appropriate measures to secure data, such as encryption, access controls, and secure storage. Data should be transmitted and stored in a manner that minimizes the risk of unauthorized access or disclosure.
It is important to adhere to relevant privacy regulations, such as the General Data Protection Regulation (GDPR) in the European Union or the Health Insurance Portability and Accountability Act (HIPAA) in the United States. These regulations provide guidelines and requirements for the collection, use, and protection of personal data.
By prioritizing privacy and confidentiality in network analysis, we can ensure that individuals' rights are respected and that data is handled responsibly. This not only protects individuals' privacy but also promotes trust and ethical conduct in research.
# 9.2. Bias and Fairness in Data Collection and Analysis
Bias and fairness are important considerations in data collection and analysis. Biases can arise at various stages of the research process, from the design of the study to the interpretation of the results. It is crucial to be aware of these biases and take steps to minimize their impact.
One common source of bias is sampling bias, where the sample used in the study is not representative of the population of interest. This can lead to inaccurate or misleading conclusions. To mitigate sampling bias, researchers should strive to obtain a diverse and representative sample that reflects the characteristics of the population.
Another type of bias is selection bias, which occurs when certain individuals or groups are systematically excluded or underrepresented in the data. This can skew the results and lead to unfair or discriminatory outcomes. Researchers should be mindful of potential sources of selection bias and take steps to minimize its impact, such as using random sampling techniques and ensuring equal opportunities for participation.
Bias can also arise from the measurement or operationalization of variables. For example, if a network analysis relies on self-reported data, there may be biases in how individuals perceive and report their relationships. Researchers should carefully consider the validity and reliability of their measures and take steps to minimize measurement bias.
Fairness is closely related to bias and refers to the equitable treatment of individuals or groups. In network analysis, fairness can be compromised if certain individuals or groups are disproportionately affected by the research or if the results perpetuate existing inequalities. Researchers should strive to promote fairness by considering the potential impact of their research on different stakeholders and taking steps to mitigate any negative consequences.
By being aware of biases and striving for fairness, researchers can ensure that their network analysis is objective, reliable, and ethically sound. This not only enhances the quality of the research but also promotes social justice and equity.
# 9.3. Ethical Implications of Network-based Diffusion Analysis
Network-based diffusion analysis raises several ethical considerations that researchers should be mindful of. These considerations involve the privacy and confidentiality of individuals, the potential for harm or unintended consequences, and the responsible use of data.
One ethical consideration is the privacy and confidentiality of individuals whose data is used in the analysis. Network analysis often involves collecting and analyzing personal information, such as social connections and communication patterns. Researchers should ensure that individuals' privacy is protected by obtaining informed consent, anonymizing data when possible, and securely storing and handling sensitive information.
Another ethical consideration is the potential for harm or unintended consequences. Network analysis can reveal sensitive or personal information that individuals may not want to be disclosed. Researchers should carefully consider the potential risks and benefits of their analysis and take steps to minimize harm, such as using appropriate data protection measures and ensuring that the results are used responsibly.
Additionally, researchers should consider the responsible use of data in network-based diffusion analysis. This includes using data for the intended purpose, being transparent about the data sources and analysis methods, and avoiding biases or discriminatory practices. Researchers should also consider the broader societal implications of their analysis and strive to promote fairness, social justice, and the well-being of individuals and communities.
Ethical guidelines and best practices can provide guidance for researchers in navigating these ethical considerations. These guidelines often emphasize the principles of respect for autonomy, beneficence, justice, and informed consent. Researchers should familiarize themselves with these guidelines and ensure that their network-based diffusion analysis adheres to ethical standards.
By considering these ethical implications, researchers can conduct network-based diffusion analysis in a responsible and ethical manner, ensuring that the benefits of the analysis outweigh any potential risks or harms.
# 9.4. Guidelines and Best Practices
To conduct network-based diffusion analysis in an ethical and responsible manner, researchers should follow guidelines and best practices that promote transparency, fairness, and the well-being of individuals and communities. These guidelines can help researchers navigate the ethical considerations involved in network analysis and ensure that their analysis is conducted in an ethical manner. Here are some key guidelines and best practices to consider:
1. Obtain informed consent: Before collecting and analyzing data, researchers should obtain informed consent from individuals whose data will be used. This involves providing clear and understandable information about the purpose of the analysis, how the data will be used, and any potential risks or benefits. Individuals should have the right to opt out or withdraw their consent at any time.
2. Protect privacy and confidentiality: Researchers should take steps to protect the privacy and confidentiality of individuals whose data is used in the analysis. This includes anonymizing data whenever possible, securely storing and handling sensitive information, and using appropriate data protection measures.
3. Use data for the intended purpose: Researchers should use data collected for network-based diffusion analysis only for the intended purpose and avoid using it for other purposes without obtaining additional consent. Data should be used in a way that respects the rights and interests of individuals and communities.
4. Be transparent about data sources and analysis methods: Researchers should be transparent about the sources of their data and the methods used in the analysis. This includes providing clear documentation of data collection procedures, analysis techniques, and any assumptions or limitations of the analysis.
5. Avoid biases and discriminatory practices: Researchers should strive to avoid biases and discriminatory practices in their network-based diffusion analysis. This includes being aware of potential biases in the data, using appropriate statistical methods to account for biases, and ensuring that the analysis does not perpetuate or reinforce existing inequalities or discrimination.
6. Consider broader societal implications: Researchers should consider the broader societal implications of their network-based diffusion analysis. This includes considering the potential impact of the analysis on individuals and communities, promoting fairness and social justice, and addressing any potential harms or unintended consequences.
By following these guidelines and best practices, researchers can conduct network-based diffusion analysis in an ethical and responsible manner, ensuring that the analysis benefits individuals and communities while minimizing any potential risks or harms.
# 10. Challenges and Future Directions
1. Data availability and quality: One of the main challenges in network-based diffusion analysis is the availability and quality of data. Obtaining large-scale, high-quality data on social networks and diffusion processes can be difficult. Future research should focus on developing new methods for data collection and improving data quality to ensure accurate and reliable analysis.
2. Modeling complex diffusion processes: Diffusion processes in real-world networks are often complex and influenced by various factors such as social influence, network structure, and individual characteristics. Future research should aim to develop more sophisticated models that can capture the complexity of diffusion processes and provide more accurate predictions and insights.
3. Scalability and efficiency: As the size of networks and the amount of available data continue to grow, scalability and efficiency become important considerations in network-based diffusion analysis. Future research should focus on developing scalable algorithms and techniques that can handle large-scale networks and big data efficiently.
4. Incorporating temporal dynamics: Many real-world diffusion processes unfold over time, and the dynamics of the network can have a significant impact on the spread of information or behaviors. Future research should aim to develop models and algorithms that can capture the temporal dynamics of diffusion processes and provide insights into how they evolve over time.
5. Ethical considerations: Network-based diffusion analysis raises important ethical considerations, such as privacy, fairness, and the potential for unintended consequences. Future research should focus on developing ethical guidelines and best practices for conducting network-based diffusion analysis and ensuring that the analysis is conducted in a responsible and ethical manner.
6. Interdisciplinary collaboration: Network-based diffusion analysis is a multidisciplinary field that can benefit from collaboration between researchers from different disciplines, such as computer science, sociology, and psychology. Future research should encourage interdisciplinary collaboration to foster innovation and advance our understanding of diffusion processes in networks.
Overall, network-based diffusion analysis is a dynamic and exciting field with many challenges and opportunities for future research. By addressing these challenges and exploring new directions, researchers can continue to advance our understanding of how information and behaviors spread in networks and develop practical applications in various domains.
# 10.1. Limitations of Network-based Diffusion Analysis
While network-based diffusion analysis has proven to be a powerful tool for understanding the spread of information and behaviors in networks, it also has its limitations. It's important to be aware of these limitations when conducting analysis and interpreting the results.
1. Simplified assumptions: Many diffusion models make simplified assumptions about the behavior of individuals and the structure of the network. For example, they may assume that individuals are completely rational and make decisions based solely on the information available to them. In reality, individuals may have complex motivations and may be influenced by factors that are not captured by the model. Additionally, the structure of real-world networks can be much more complex than the simple models used in analysis.
2. Data limitations: Network-based diffusion analysis relies on data on the structure of the network and the spread of information or behaviors. Obtaining accurate and complete data can be challenging, and there may be limitations in the data that can affect the analysis. For example, data may be missing or incomplete, or it may be biased in some way. It's important to carefully consider the quality and limitations of the data when conducting analysis.
3. Generalizability: The findings from network-based diffusion analysis may not always be generalizable to other contexts or populations. The dynamics of diffusion processes can vary across different networks and populations, and the results of one analysis may not necessarily apply to another context. It's important to consider the specific characteristics of the network and population under study when interpreting the results.
4. Causality: Network-based diffusion analysis can provide insights into the spread of information or behaviors in networks, but it does not necessarily establish causality. Correlation between network structure and diffusion patterns does not always imply causation. Other factors, such as individual characteristics or external events, may also play a role in the spread of information or behaviors.
5. Ethical considerations: Network-based diffusion analysis raises important ethical considerations, such as privacy and fairness. It's important to consider the potential impact of the analysis on individuals and communities and to ensure that the analysis is conducted in an ethical and responsible manner.
Despite these limitations, network-based diffusion analysis remains a valuable tool for understanding and predicting the spread of information and behaviors in networks. By carefully considering the limitations and conducting rigorous analysis, researchers can gain valuable insights into the dynamics of diffusion processes and their implications in various domains.
# 10.2. Emerging Trends and Technologies
1. Big data and machine learning: The availability of large-scale data and advances in machine learning techniques have the potential to revolutionize network-based diffusion analysis. Big data allows researchers to analyze massive amounts of data from various sources, such as social media platforms, and uncover patterns and insights that were previously inaccessible. Machine learning techniques can help automate the analysis process and identify complex patterns in the data.
2. Social media and online platforms: Social media platforms and online communities have become important sources of data for network-based diffusion analysis. These platforms provide rich data on social interactions, information sharing, and the spread of behaviors. Future research should focus on developing methods and techniques to analyze and model diffusion processes in these online environments.
3. Network visualization and analysis tools: The development of advanced network visualization and analysis tools has made it easier for researchers to explore and analyze complex networks. These tools allow researchers to visualize network structures, identify key nodes and communities, and analyze the dynamics of diffusion processes. Future research should focus on developing more user-friendly and interactive tools to facilitate network-based diffusion analysis.
4. Interdisciplinary approaches: Network-based diffusion analysis is a multidisciplinary field that can benefit from collaboration between researchers from different disciplines. By combining insights from computer science, sociology, psychology, and other fields, researchers can gain a more comprehensive understanding of diffusion processes in networks. Future research should encourage interdisciplinary collaboration and the exchange of ideas and methods.
5. Ethical considerations and responsible research practices: As network-based diffusion analysis becomes more widespread, it's important to address the ethical considerations and potential risks associated with the analysis. Researchers should adhere to responsible research practices, ensure the privacy and confidentiality of individuals, and consider the potential impact of the analysis on individuals and communities. Future research should focus on developing guidelines and best practices for conducting ethical and responsible network-based diffusion analysis.
Overall, the future of network-based diffusion analysis is promising, with emerging trends and technologies offering new opportunities for research and applications. By embracing these trends and technologies and addressing the associated challenges, researchers can continue to advance our understanding of how information and behaviors spread in networks and develop practical applications in various domains.
# 10.3. Future Applications and Impact
1. Public health: Network-based diffusion analysis can be applied to understand the spread of diseases and develop strategies for disease prevention and control. By analyzing the social networks of individuals and the spread of information and behaviors related to health, researchers can identify key nodes and communities that play a critical role in the spread of diseases. This information can be used to develop targeted interventions and strategies to reduce the spread of diseases.
2. Marketing and advertising: Network-based diffusion analysis can be used to understand the spread of information and behaviors related to products and services. By analyzing social networks and the spread of information and recommendations, marketers can identify influential individuals and communities that can help promote their products and services. This information can be used to develop targeted marketing campaigns and strategies to maximize the reach and impact of marketing efforts.
3. Social movements and activism: Network-based diffusion analysis can be applied to understand the spread of social movements and activism. By analyzing the social networks of activists and the spread of information and behaviors related to social change, researchers can identify key nodes and communities that are critical for mobilizing support and driving change. This information can be used to develop strategies and interventions to support social movements and activism.
4. Policy and decision making: Network-based diffusion analysis can provide valuable insights for policy makers and decision makers. By analyzing the spread of information and behaviors related to policy issues, researchers can identify key stakeholders and communities that are critical for driving change and influencing public opinion. This information can be used to develop evidence-based policies and strategies that are more likely to be effective and have a positive impact.
5. Human behavior and social dynamics: Network-based diffusion analysis can contribute to our understanding of human behavior and social dynamics. By analyzing the spread of information and behaviors in networks, researchers can uncover patterns and insights into how individuals make decisions, how behaviors spread, and how social norms are formed. This knowledge can help inform theories and models of human behavior and contribute to the development of interventions and strategies to promote positive social change.
Overall, network-based diffusion analysis has the potential to revolutionize various domains and applications by providing valuable insights into the spread of information and behaviors in networks. By leveraging the power of network analysis and combining it with other disciplines and technologies, researchers can continue to advance our understanding of diffusion processes and develop practical applications that have a positive impact on society.
# 10.4. Collaboration and Interdisciplinary Approaches
1. Collaboration between computer scientists and social scientists: Network-based diffusion analysis requires expertise in both computer science and social science. Computer scientists can contribute their knowledge of network analysis, data mining, and machine learning techniques, while social scientists can provide insights into human behavior, social dynamics, and the context in which diffusion processes occur. Collaboration between these two disciplines can lead to more comprehensive and accurate analysis.
2. Collaboration between academia and industry: Collaboration between academia and industry can help bridge the gap between theoretical research and practical applications. Industry partners can provide access to large-scale data sets and real-world problems, while academic researchers can contribute their expertise in network analysis and diffusion modeling. This collaboration can lead to the development of innovative solutions and tools that have practical applications in various domains.
3. Collaboration between different domains and fields: Network-based diffusion analysis can benefit from collaboration between researchers from different domains and fields. For example, collaborations between researchers in public health, marketing, sociology, and computer science can lead to new insights and approaches for understanding and predicting the spread of information and behaviors. By combining expertise from different fields, researchers can gain a more comprehensive understanding of diffusion processes and develop innovative solutions.
4. Collaboration between researchers and practitioners: Collaboration between researchers and practitioners is essential for ensuring that network-based diffusion analysis is relevant and applicable to real-world problems. Practitioners can provide valuable insights into the challenges and limitations of current approaches, while researchers can contribute their expertise in network analysis and modeling. This collaboration can lead to the development of practical tools and methods that can be used by practitioners to address real-world problems.
Overall, collaboration and interdisciplinary approaches are key to advancing the field of network-based diffusion analysis. By bringing together researchers from different disciplines and domains, we can leverage the strengths of each field and develop innovative solutions that have a real-world impact. Collaboration also fosters the exchange of ideas and knowledge, leading to new insights and approaches for understanding and predicting the spread of information and behaviors in networks.
# 11. Conclusion and Next Steps
In this textbook, we have covered the fundamentals of network analysis and explored various topics related to network-based diffusion analysis. We have learned about basic concepts and terminology, different types of networks, network measures and metrics, diffusion models, network influence and contagion, information spread in networks, network dynamics and evolution, diffusion in multiplex networks, diffusion in dynamic networks, and network-based diffusion analysis in practice.
We have also discussed ethical considerations in network analysis, challenges and future directions, and the importance of collaboration and interdisciplinary approaches in network-based diffusion analysis.
By studying this textbook, you have gained a deep understanding of network-based diffusion analysis and its applications in various domains. You have learned how to analyze and model diffusion processes in networks, identify influential nodes, predict information cascades, and analyze network dynamics and evolution.
Next, you can further explore the field of network-based diffusion analysis by conducting your own research, applying the concepts and techniques learned in this textbook to real-world problems, and collaborating with researchers and practitioners from different disciplines and domains.
Remember, network-based diffusion analysis is a rapidly evolving field, and there are always new challenges and opportunities to explore. By staying curious, keeping up with the latest research, and applying your knowledge and skills, you can contribute to the advancement of network-based diffusion analysis and make a meaningful impact in your chosen field.
Good luck on your journey in network-based diffusion analysis, and remember to always approach your work with rigor, engagement, and a practical mindset.
# 11.1. Recap of Key Concepts
Throughout this textbook, we have covered a wide range of key concepts in network-based diffusion analysis. Let's recap some of the most important ones:
- Network analysis: Network analysis is the study of relationships and interactions between entities, represented as nodes and edges in a network. It involves analyzing the structure, dynamics, and properties of networks.
- Diffusion models: Diffusion models are mathematical models that describe the spread of information, behavior, or influence through a network. They capture how individuals adopt or reject a new idea or behavior based on their connections to others in the network.
- Influence and contagion: Influence refers to the ability of one node to affect the behavior or opinions of other nodes in the network. Contagion refers to the spread of a behavior or idea through a network, often resulting from influence.
- Information spread: Information spread refers to the diffusion of information through a network. It can be studied to understand how information propagates, how it can be predicted, and the factors that affect its spread.
- Network dynamics and evolution: Network dynamics refers to the changes that occur in a network over time. Networks can grow, shrink, or reconfigure due to various factors. Understanding network dynamics is crucial for studying diffusion processes.
- Multiplex networks: Multiplex networks are networks that consist of multiple layers or types of connections between nodes. Diffusion in multiplex networks involves considering the interactions and dependencies between different layers.
- Dynamic networks: Dynamic networks are networks that change over time. Diffusion in dynamic networks involves studying how the spread of information or behavior evolves as the network structure changes.
- Network-based diffusion analysis in practice: Network-based diffusion analysis involves collecting and preparing data, visualizing and analyzing networks, and using modeling and simulation techniques to study diffusion processes. It also involves considering ethical considerations and following best practices in data collection and analysis.
These concepts provide a solid foundation for understanding and conducting network-based diffusion analysis. By applying these concepts and techniques, you can gain insights into how information, behavior, and influence spread through networks and make informed decisions in various domains.
# 11.2. Further Resources for Network-based Diffusion Analysis
If you're interested in diving deeper into the field of network-based diffusion analysis, there are several resources available that can provide additional insights and knowledge. Here are some recommended resources:
- Books:
- "Networks, Crowds, and Markets: Reasoning About a Highly Connected World" by David Easley and Jon Kleinberg
- "Diffusion of Innovations" by Everett M. Rogers
- "Social Network Analysis: Methods and Applications" by Stanley Wasserman and Katherine Faust
- "Analyzing Social Networks" by Stephen P. Borgatti, Martin G. Everett, and Jeffrey C. Johnson
- Research papers and articles:
- "The Spread of Behavior in an Online Social Network Experiment" by James H. Fowler and Nicholas A. Christakis
- "The Structure and Function of Complex Networks" by Mark E. J. Newman
- "Information Cascades in Social Media" by Sune Lehmann, Sharad Goel, and Daniele Quercia
- "Influence Maximization in Social Networks" by Wei Chen, Yajun Wang, and Siyu Yang
- Online courses and tutorials:
- Coursera: "Social and Economic Networks: Models and Analysis" by Matthew O. Jackson
- edX: "Network Science" by Albert-László Barabási
- DataCamp: "Introduction to Network Analysis in Python" by Eric Ma
These resources can provide a deeper understanding of network-based diffusion analysis, its theoretical foundations, and practical applications. They can also help you stay updated with the latest research and advancements in the field.
# 11.3. Applying Network-based Diffusion Analysis in Different Fields
Network-based diffusion analysis has applications in various fields and domains. Here are some examples of how it can be applied:
- Marketing and advertising: Network-based diffusion analysis can help marketers understand how information and influence spread through social networks, allowing them to design more effective advertising campaigns and target influential individuals.
- Public health: Network-based diffusion analysis can be used to study the spread of diseases, identify key individuals or groups that contribute to the spread, and develop strategies for disease prevention and control.
- Social movements and activism: Network-based diffusion analysis can provide insights into how social movements and activism spread through networks, helping activists understand how to mobilize support and create social change.
- Financial markets: Network-based diffusion analysis can be used to study the spread of information and behavior in financial markets, helping investors make informed decisions and understand market dynamics.
- Political science: Network-based diffusion analysis can help researchers understand how political opinions and behaviors spread through social networks, informing political campaigns and policy-making.
These are just a few examples of how network-based diffusion analysis can be applied in different fields. The principles and techniques learned in this textbook can be adapted and applied to various domains, allowing you to make valuable contributions in your chosen field.
# 11.4. Contributing to the Advancement of Network-based Diffusion Analysis
As you continue your journey in network-based diffusion analysis, you have the opportunity to contribute to the advancement of the field. Here are some ways you can make a meaningful impact:
- Conduct research: Identify gaps in current knowledge and conduct research to address those gaps. Design experiments, collect data, and develop new models and algorithms to improve our understanding of network-based diffusion analysis.
- Apply your knowledge: Apply the concepts and techniques learned in this textbook to real-world problems and domains. Use network-based diffusion analysis to solve practical challenges and make informed decisions.
- Collaborate with others: Network-based diffusion analysis is an interdisciplinary field that benefits from collaboration. Collaborate with researchers and practitioners from different disciplines, such as computer science, sociology, economics, and psychology, to combine expertise and tackle complex problems.
- Share your findings: Publish your research findings in academic journals, present at conferences, and share your insights with the wider community. By sharing your knowledge and findings, you can contribute to the collective understanding of network-based diffusion analysis.
- Mentor others: Share your knowledge and expertise with others who are interested in network-based diffusion analysis. Mentor students, participate in workshops and seminars, and contribute to educational resources to help others learn and grow in the field.
By actively engaging in these activities, you can contribute to the advancement of network-based diffusion analysis and make a lasting impact in the field. Remember to stay curious, keep learning, and approach your work with rigor and enthusiasm.
Congratulations on completing this textbook, and best of luck in your future endeavors in network-based diffusion analysis!
QCM: Measurement principles
As previously described [1], the Quartz Crystal Microbalance is based on the fact that the resonant frequency of a quartz resonator changes when its thickness changes. Using the Sauerbrey equation, we can calculate a mass change from this frequency change. The conditions under which the Sauerbrey equation is applicable are described in a different topic [2].
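For reference, the Sauerbrey relation is usually quoted in the form
$$\Delta f = -\frac{2 f_0^{2}}{A\sqrt{\rho_\mathrm{q}\mu_\mathrm{q}}}\,\Delta m$$
where $f_0$ is the fundamental resonant frequency, $A$ the active electrode area, $\rho_\mathrm{q}$ and $\mu_\mathrm{q}$ the density and shear modulus of quartz, and $\Delta m$ the added mass; the exact notation may differ slightly from that used in [1].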
There are several methods used to measure the resonant frequency and to monitor its changes. The standard historical method, based on oscillator circuits, is limited to resonant frequency measurements at the fundamental frequency, whereas advanced systems using impedance analysis or ring-down also provide dissipation and overtone measurements. A good overview of these methods is given on p. 23 of the book by D. Johannsmann [3].
In this article, we will only describe the impedance analysis method used in the BluQCM. Knowing the nominal resonant frequency of a given resonator, an impedance, or rather an admittance, measurement is performed by the BluQCM at frequencies around the nominal resonant frequency. Plotting the real part of the admittance, i.e. the conductance, as a function of the frequency, we obtain the curve shown in Fig. 1.
Adding a rigid layer that is assumed to be elastic and to move together with the quartz, i.e. to have the same mechanical properties as the quartz (same shear modulus and density), has the same effect as increasing the thickness of the crystal: the resonant frequency is shifted while the bandwidth, although not strictly constant, changes only slightly, as shown in Fig. 2.
At the resonant frequency, the admittance is maximal, which means the amplitude of the quartz vibration is also maximal. It can also be seen that the peak has a certain width, showing the dispersion of the resonant frequency of the quartz. The factor $\Gamma$ is the half-width at half maximum, also called the half bandwidth.
The bandwidth is related to the elastic properties of the resonator as well as the medium in which the crystal is vibrating.
In Fig. 1, the effect of increasing the resonator thickness is illustrated: when the thickness increases, the resonant frequency shifts to lower values, as predicted by Eq. 3 in Quartz Crystal Microbalance: principles and history [1].
Figure 1: Typical resonance curves of two crystals of different thicknesses. $f_{01}$ and $f_{02}$ correspond to sensors of thickness $d_1$ and $d_2$, respectively, with $d_1 < d_2$.
Figure 2: Typical resonance curves of (yellow) a clean sensor and (blue) a sensor with a thin rigid layer. Note that the frequency shift $\Delta f$ is much larger than the dispersion shift $\Delta \Gamma$.
As previously stated, the BluQCM measurement is an impedance (or rather admittance) measurement performed between the two electrodes of the resonator. Because the resonator is a piezoelectric material, the response of the system to an electrical modulation is not only electrical but also vibrational, or acoustic. Nevertheless, the system can be modeled by a simple equivalent circuit called the Butterworth-van Dyke (BvD) equivalent circuit [4, 5] (Fig. 3):
Figure 3: The Butterworth-van Dyke (BvD) equivalent circuit used to model a quartz crystal resonator subjected to a sinusoidal electrical modulation.
The components in the top branch, called the motional branch, have mechanical (motional) equivalents; the bottom branch is called the electrical branch. The Nyquist diagram of the admittance of such a circuit for typical parameters is given in Fig. 4a. The resonant frequency is the frequency for which $\mathrm{Im}(Y)=0$.
Figure 4a: Nyquist diagram of the admittance of a BvD circuit.
Figure 4b: Representation of the same admittance as $\mathrm{Re}(Y),\,-\mathrm{Im}(Y)\,vs.\,f$ for typical parameters. One can see how to determine the resonant frequency and the half bandwidth.
The relationships between the resonant frequency, the half-bandwidth, and the components of the BvD circuit are the following [6]:
$$f_{01}=\frac{1}{2 \pi \sqrt {L_1 C_1}}\tag{1}\label{eq1}$$
$$\Gamma = \frac{R_1}{4 \pi L_1}\tag{2}\label{eq2}$$
$\Gamma$ is a direct measurement of the resonant frequency dispersion in $\mathrm{Hz}$ and is directly related to the acoustic properties of the film. This parameter must be measured to verify that the Sauerbrey equation can be applied to convert a frequency change into a mass change. These aspects are discussed in more detail elsewhere [3].
Standard instruments usually give only the resonant frequency changes and the resistance $R_1$, from which it is not easy to deduce the bandwidth or the dimensionless dissipation factor $D$:
$$D = \frac{2\Gamma}{f_{01}}\tag{3}\label{eq3}$$
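As a minimal numerical illustration of Eqs. (1)–(3), the sketch below converts a set of BvD motional parameters into $f_{01}$, $\Gamma$ and $D$; the values of $R_1$, $L_1$ and $C_1$ are hypothetical round numbers of the order commonly quoted for 5 MHz quartz resonators, not measured BluQCM data.

```python
import math

# Hypothetical BvD motional parameters, of the order quoted for a 5 MHz quartz resonator in air
R1 = 10.0       # ohm
L1 = 0.040      # henry
C1 = 25.3e-15   # farad

f01 = 1.0 / (2.0 * math.pi * math.sqrt(L1 * C1))  # Eq. (1): resonant frequency, Hz
gamma = R1 / (4.0 * math.pi * L1)                 # Eq. (2): half bandwidth, Hz
D = 2.0 * gamma / f01                             # Eq. (3): dimensionless dissipation

print(f"f01 = {f01 / 1e6:.3f} MHz, Gamma = {gamma:.1f} Hz, D = {D:.1e}")
```

With these illustrative values the quality factor $f_{01}/(2\Gamma)$ is of the order of $10^{5}$, typical of a bare quartz resonator in air; in liquid, $R_1$ (and hence $\Gamma$ and $D$) is much larger.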
These measurements are performed at the fundamental resonant frequency but, with the BluQCM, they can also be performed at harmonic frequencies, or overtones. Performing measurements at overtones lets users check the validity of the Sauerbrey equation [2]. Resonant frequency and dissipation measurements at overtones also allow the characterization of viscoelastic thin films, particles, molecules, or even conformational changes. More details are given in [7].
References:
[1] Quartz Crystal Microbalance: principles and history
[2] Quartz Crystal Microbalance: When is the Sauerbrey equation valid?
[3] D. Johannsmann, The Quartz Crystal Microbalance in Soft Matter Research, Springer, 2015.
[4] Butterworth, Proc. Phys. Soc. London 27 (1914) 410.
[5] K. Van Dyke, Proc. Inst. Radio Engin. 16 (1928) 742.
[6] T. Pauporté, D. Lincot, Microbalance à cristal de quartz, Techniques de l'Ingénieur (2006) P 2 220.
[7] Quartz Crystal Microbalance: Why measure at overtones?
Julian E. Grass, Shelley S. Magill, Isaac See, Uzma Ansari, Lucy E. Wilson, Elisabeth Vaeth, Paula Snippes Vagnone, Brittany Pattee, Jesse T. Jacob, Georgia Emerging Infections Program, Chris Bower, Atlanta Veterans Affairs Medical Center, Foundation for Atlanta Veterans Education and Research, Sarah W. Satola, Sarah J. Janelle, Kyle Schutz, Rebecca Tsay, Marion A. Kainer, Daniel Muleta, P. Maureen Cassidy, Vivian H. Leung, Meghan Maloney, Erin C. Phipps, New Mexico Emerging Infections Program, Kristina G. Flores, New Mexico Emerging Infections Program, Erin Epson, Joelle Nadle, Maria Karlsson, Joseph D. Lutgring
Journal: Infection Control & Hospital Epidemiology / Volume 41 / Issue S1 / October 2020
Published online by Cambridge University Press: 02 November 2020, pp. s474-s476
Print publication: October 2020
Background: Automated testing instruments (ATIs) are commonly used by clinical microbiology laboratories to perform antimicrobial susceptibility testing (AST), whereas public health laboratories may use established reference methods such as broth microdilution (BMD). We investigated discrepancies in carbapenem minimum inhibitory concentrations (MICs) among Enterobacteriaceae tested by clinical laboratory ATIs and by reference BMD at the CDC. Methods: During 2016–2018, we conducted laboratory- and population-based surveillance for carbapenem-resistant Enterobacteriaceae (CRE) through the CDC Emerging Infections Program (EIP) sites (10 sites by 2018). We defined an incident case as the first isolation of Enterobacter spp (E. cloacae complex or E. aerogenes), Escherichia coli, Klebsiella pneumoniae, K. oxytoca, or K. variicola resistant to doripenem, ertapenem, imipenem, or meropenem from normally sterile sites or urine identified from a resident of the EIP catchment area in a 30-day period. Cases had isolates that were determined to be carbapenem-resistant by clinical laboratory ATI MICs (MicroScan, BD Phoenix, or VITEK 2) or by other methods, using current Clinical and Laboratory Standards Institute (CLSI) criteria. A convenience sample of these isolates was tested by reference BMD at the CDC according to CLSI guidelines. Results: Overall, 1,787 isolates from 112 clinical laboratories were tested by BMD at the CDC. Of these, clinical laboratory ATI MIC results were available for 1,638 (91.7%); 855 (52.2%) from 71 clinical laboratories did not confirm as CRE at the CDC. Nonconfirming isolates were tested on either a MicroScan (235 of 462; 50.9%), BD Phoenix (249 of 411; 60.6%), or VITEK 2 (371 of 765; 48.5%). Lack of confirmation was most common among E. coli (62.2% of E. coli isolates tested) and Enterobacter spp (61.4% of Enterobacter isolates tested) (Fig. 1A), and among isolates testing resistant to ertapenem by the clinical laboratory ATI (52.1%, Fig. 1B). Of the 1,388 isolates resistant to ertapenem in the clinical laboratory, 1,006 (72.5%) were resistant only to ertapenem. Of the 855 nonconfirming isolates, 638 (74.6%) were resistant only to ertapenem based on clinical laboratory ATI MICs. Conclusions: Nonconfirming isolates were widespread across laboratories and ATIs. Lack of confirmation was most common among E. coli and Enterobacter spp. Among nonconfirming isolates, most were resistant only to ertapenem. These findings may suggest that ATIs overcall resistance to ertapenem or that isolate transport and storage conditions affect ertapenem resistance. Further investigation into this lack of confirmation is needed, and CRE case identification in public health surveillance may need to account for this phenomenon.
Disclosures: None
Whole-Genome Sequencing Reveals Diversity of Carbapenem-Resistant Pseudomonas aeruginosa Collected Through the Emerging Infections Program
Richard Stanton, Jonathan Daniels, Erin Breaker, Davina Campbell, Joseph Lutgring, Maria Karlsson, Kyle Schutz, Jesse Jacob, Lucy Wilson, Elisabeth Vaeth, Linda Li, Ruth Lynfield, Erin C. Phipps, Emily Hancock, Ghinwa Dumyati, Rebecca Tsay, P. Maureen Cassidy, Jacquelyn Mounsey, Julian Grass, Maroya Walters, Alison Halpin
Background: Carbapenem-resistant Pseudomonas aeruginosa (CRPA) is a frequent cause of healthcare-associated infections (HAIs). The CDC Emerging Infections Program (EIP) conducted population and laboratory-based surveillance of CRPA in selected areas in 8 states from August 1, 2016, through July 31, 2018. We aimed to describe the molecular epidemiology and mechanisms of resistance of CRPA isolates collected through this surveillance. Methods: We defined a case as the first isolate of P. aeruginosa resistant to imipenem, meropenem, or doripenem from the lower respiratory tract, urine, wounds, or normally sterile sites identified from a resident of the EIP catchment area in a 30-day period; EIP sites submitted a systematic random sample of isolates to CDC for further characterization. Of 1,021 CRPA clinical isolates submitted, 707 have been sequenced to date using an Illumina MiSeq. Sequenced genomes were classified using the 7-gene multilocus sequence typing (MLST) scheme, and a core genome MLST (cgMLST) scheme was used to determine phylogeny. Antimicrobial resistance genes were identified using publicly available databases, and chromosomal mechanisms of carbapenem resistance were determined using previously validated genetic markers. Results: There were 189 sequence types (STs) among the 707 sequenced genomes (Fig. 1). The most frequently occurring were high-risk clones ST235 (8.5%) and ST298 (4.7%), which were found across all EIP sites. Carbapenemase genes were identified in 5 (<1%) isolates. Overall, 95.6% of the isolates had chromosomal mutations associated with carbapenem resistance: 93.2% had porinD-associated mutations that decrease membrane permeability to the drugs; 24.8% had mutations associated with overexpression of the multidrug efflux pump MexAB-OprM; and 22.9% had mutations associated with overexpression of the endogenous β-lactamase ampC. More than 1 such chromosomal resistance mutation type was present in 37.8% of the isolates. Conclusions: The diversity of the sequence types demonstrates that HAIs caused by CRPA can arise from a variety of strains and that high-risk clones are broadly disseminated across the EIP sites but are a minority of CRPA strains overall. Carbapenem resistance in P. aeruginosa was predominantly driven by chromosomal mutations rather than acquired mechanisms (ie, carbapenemases). The diversity of the CRPA isolates and the lack of carbapenemase genes suggest that this ubiquitous pathogen can readily evolve chromosomal resistance mechanisms, but unlike carbapenemases, these cannot be easily spread through horizontal transfer.
On-farm risk factors associated with Leptospira shedding in New Zealand dairy cattle
Y. Yupiana, E. Vallée, P. Wilson, J. F. Weston, J. Benschop, J. Collins-Emerson, C. Heuer
Journal: Epidemiology & Infection / Volume 148 / 2020
Published online by Cambridge University Press: 18 May 2020, e219
This study aimed to evaluate risk factors associated with shedding of pathogenic Leptospira species in urine at animal and herd levels. In total, 200 dairy farms were randomly selected from the DairyNZ database. Urine samples were taken from 20 lactating, clinically normal cows in each herd between January and April 2016 and tested by real-time polymerase chain reaction (PCR) using gyrB as the target gene. Overall, 26.5% of 200 farms had at least one PCR positive cow and 2.4% of 4000 cows were shedding Leptospira in the urine. Using a questionnaire, information about risk factors at cow and farm level was collected via face-to-face interviews with farm owners and managers. Animals on all but one farm had been vaccinated against Hardjo and Pomona and cows on 54 of 200 (27%) farms had also been vaccinated against Copenhageni in at least one age group (calves, heifers and cows). Associations found to be statistically significant in univariate analysis (at P < 0.2) were assessed by multivariable logistic regression. Factors associated with shedding included cattle age (Odds ratio (OR) 0.82, 95% CI 0.71–0.95), keeping sheep (OR 5.57, 95% confidence interval (CI) 1.46–21.25) or dogs (OR 1.45, 95% CI 1.07–1.97) and managing milking cows in a single as opposed to multiple groups (OR 0.45, 95% CI 0.20–0.99). We conclude that younger cattle were more likely to be shedding Leptospira than older cattle and that the presence of sheep and dogs was associated with an increased risk of shedding in cows. Larger herds were at higher risk of having Leptospira shedders. However, none of the environmental risk factors that were assessed (e.g. access to standing water, drinking-water source), or wildlife abundance on-farm, or pasture were associated with shedding, possibly due to low statistical power, given the low overall shedding rate.
Pharmaco-Economics of Rapid Tranquillisation
C.E. Hyde, C. Harrower-Wilson, P. Ash
Journal: European Psychiatry / Volume 12 / Issue S2 / 1997
Published online by Cambridge University Press: 16 April 2020, p. 201s
Soft X-Ray and Cathodoluminescence Examination of a Tanzanian Graphite Deposit
Colin M. MacRae, Mark A. Pearce, Nicholas C. Wilson, Aaron Torpy, Matthew A. Glenn, Salvy P. Russo
Journal: Microscopy and Microanalysis / Volume 26 / Issue 4 / August 2020
Published online by Cambridge University Press: 06 April 2020, pp. 814-820
Hyperspectral soft X-ray emission (SXE) and cathodoluminescence (CL) spectrometry have been used to investigate a carbonaceous-rich geological deposit to understand the crystallinity and morphology of the carbon and the associated quartz. Panchromatic CL maps show both the growth of the quartz and the evidence of recrystallization. A fitted CL map reveals the distribution of Ti4+ within the grains and shows subtle growth zoning, together with radiation halos from 238U decay. The sensitivity of the SXE spectrometer to carbon, together with the anisotropic X-ray emission from highly orientated pyrolytic graphite, has enabled the C Kα peak shape to be used to measure the crystal orientation of individual graphite regions. Mapping has revealed that most grains are predominantly of a single orientation, and a number of graphite grains have been investigated to demonstrate the application of this new SXE technique. A peak fitting approach to analyzing the SXE spectra was developed to project the C Kα 2pz and 2p(x+y) orbital components of the graphite. The shape of these two end-member components is comparable to those produced by electron density of states calculations. The angular sensitivity of the SXE spectrometer has been shown to be comparable to that of electron backscatter diffraction.
Low FODMAP diet & prebiotic β-galactooligosaccharides improve irritable bowel syndrome and response to low FODMAP is predicted by urine and faecal metabolites: a randomised controlled trial
B. Wilson, M. Rossi, T. Kanno, R. Hough, C. Probert, G. Parkes, S. Anderson, P. Irving, A.J. Mason, M.C. Lomer, K. Whelan
Journal: Proceedings of the Nutrition Society / Volume 79 / Issue OCE1 / 2020
Published online by Cambridge University Press: 22 January 2020, E19
Print publication: 2020
Prebiotic β-galacto-oligosaccharide impact on clinical, inflammatory and microbiota outcomes in active ulcerative colitis: an open-label study
B. Wilson, M. Rossi, O. Eyice, M. C. Lomer, P. M. Irving, J. O. Lindsay, K. Whelan
Impact of a Central-Line Insertion Site Assessment (CLISA) score on localized insertion site infection to prevent central-line–associated bloodstream infection (CLABSI)
Shruti K. Gohil, Jennifer Yim, Kathleen Quan, Maurice Espinoza, Deborah J. Thompson, Allen P. Kong, Bardia Bahadori, Tom Tjoa, Chris Paiji, Scott Rudkin, Syma Rashid, Suzie S. Hong, Linda Dickey, Mohamad N. Alsharif, William C. Wilson, Alpesh N. Amin, Justin Chang, Usme Khusbu, Susan S. Huang
Journal: Infection Control & Hospital Epidemiology / Volume 41 / Issue 1 / January 2020
Published online by Cambridge University Press: 08 November 2019, pp. 59-66
Print publication: January 2020
To assess the impact of a newly developed Central-Line Insertion Site Assessment (CLISA) score on the incidence of local inflammation or infection for CLABSI prevention.
A pre- and postintervention, quasi-experimental quality improvement study.
Setting and participants:
Adult inpatients with central venous catheters (CVCs) hospitalized in an intensive care unit or oncology ward at a large academic medical center.
We evaluated CLISA score impact on insertion site inflammation and infection (CLISA score of 2 or 3) incidence in the baseline period (June 2014–January 2015) and the intervention period (April 2015–October 2017) using interrupted times series and generalized linear mixed-effects multivariable analyses. These were run separately for days-to-line removal from identification of a CLISA score of 2 or 3. CLISA score interrater reliability and photo quiz results were evaluated.
Among 6,957 CVCs assessed 40,846 times, percentage of lines with CLISA score of 2 or 3 in the baseline and intervention periods decreased by 78.2% (from 22.0% to 4.7%), with a significant immediate decrease in the time-series analysis (P < .001). According to the multivariable regression, the intervention was associated with lower percentage of lines with a CLISA score of 2 or 3, after adjusting for age, gender, CVC body location, and hospital unit (odds ratio, 0.15; 95% confidence interval, 0.06–0.34; P < .001). According to the multivariate regression, days to removal of lines with CLISA score of 2 or 3 was 3.19 days faster after the intervention (P < .001). Also, line dwell time decreased 37.1% from a mean of 14 days (standard deviation [SD], 10.6) to 8.8 days (SD, 9.0) (P < .001). Device utilization ratios decreased 9% from 0.64 (SD, 0.08) to 0.58 (SD, 0.06) (P = .039).
The CLISA score creates a common language for assessing line infection risk and successfully promotes high compliance with best practices in timely line removal.
Decreasing case fatality rate following invasive pneumococcal disease, North East England, 2006–2016
C. Houseman, K. E. Chapman, P. Manley, R. Gorton, D. Wilson, G. J. Hughes
Published online by Cambridge University Press: 15 April 2019, e175
Declining mortality following invasive pneumococcal disease (IPD) has been observed concurrent with a reduced incidence due to effective pneumococcal conjugate vaccines. However, with IPD now increasing due to serotype replacement, we undertook a statistical analysis to estimate the trend in all-cause 30-day case fatality rate (CFR) in the North East of England (NEE) following IPD. Clinical, microbiological and demographic data were obtained for all laboratory-confirmed IPD cases (April 2006–March 2016) and the adjusted association between CFR and epidemiological year estimated using logistic regression. Of the 2510 episodes of IPD included in the analysis, 486 died within 30 days of IPD (CFR 19%). Increasing age, male sex, a diagnosis of septicaemia, being in ⩾1 clinical risk groups, alcohol abuse and individual serotypes were independently associated with increased CFR. A significant decline in CFR over time was observed following adjustment for these significant predictors (adjusted odds ratio 0.93, 95% confidence interval 0.89–0.98; P = 0.003). A small but significant decline in 30-day all-cause CFR following IPD has been observed in the NEE. Nonetheless, certain population groups remain at increased risk of dying following IPD. Despite the introduction of effective vaccines, further strategies to reduce the ongoing burden of mortality from IPD are needed.
Role of magnetic field evolution on filamentary structure formation in intense laser–foil interactions
HPL_EP HEDP and High Power Laser 2018
M. King, N. M. H. Butler, R. Wilson, R. Capdessus, R. J. Gray, H. W. Powell, R. J. Dance, H. Padda, B. Gonzalez-Izquierdo, D. R. Rusby, N. P. Dover, G. S. Hicks, O. C. Ettlinger, C. Scullion, D. C. Carroll, Z. Najmudin, M. Borghesi, D. Neely, P. McKenna
Journal: High Power Laser Science and Engineering / Volume 7 / 2019
Published online by Cambridge University Press: 13 March 2019, e14
Filamentary structures can form within the beam of protons accelerated during the interaction of an intense laser pulse with an ultrathin foil target. Such behaviour is shown to be dependent upon the formation time of quasi-static magnetic field structures throughout the target volume and the extent of the rear surface proton expansion over the same period. This is observed via both numerical and experimental investigations. By controlling the intensity profile of the laser drive, via the use of two temporally separated pulses, both the initial rear surface proton expansion and magnetic field formation time can be varied, resulting in modification to the degree of filamentary structure present within the laser-driven proton beam.
Coherence Branch at I13, DLS: The Multiscale, Multimodal, Ptycho-tomographic End Station
D. Batey, S. Cipiccia, X. Shi, S. Williams, K. Wanelik, A. Wilson, S. Pérez-Tamarit, P. Cimavilla, M. A. Ródriguez-Pérez, C. Rau
Journal: Microscopy and Microanalysis / Volume 24 / Issue S2 / August 2018
Published online by Cambridge University Press: 10 August 2018, pp. 40-41
The prevalence and treatment outcomes of antineuronal antibody-positive patients admitted with first episode of psychosis
BJPsych Open Highlight Articles
James G. Scott, David Gillis, Alex E. Ryan, Hethal Hargovan, Nagaraj Gundarpi, Gemma McKeon, Sean Hatherill, Martin P. Newman, Peter Parry, Kerri Prain, Sue Patterson, Richard C. W. Wong, Robert J. Wilson, Stefan Blum
Journal: BJPsych Open / Volume 4 / Issue 2 / March 2018
Antineuronal antibodies are associated with psychosis, although their clinical significance in first episode of psychosis (FEP) is undetermined.
To examine all patients admitted for treatment of FEP for antineuronal antibodies and describe clinical presentations and treatment outcomes in those who were antibody positive.
Individuals admitted for FEP to six mental health units in Queensland, Australia, were prospectively tested for serum antineuronal antibodies. Antibody-positive patients were referred for neurological and immunological assessment and therapy.
Of 113 consenting participants, six had antineuronal antibodies (anti-N-methyl-D-aspartate receptor antibodies [n = 4], voltage-gated potassium channel antibodies [n = 1] and antibodies against uncharacterised antigen [n = 1]). Five received immunotherapy, which prompted resolution of psychosis in four.
A small subgroup of patients admitted to hospital with FEP have antineuronal antibodies detectable in serum and are responsive to immunotherapy. Early diagnosis and treatment is critical to optimise recovery.
The UTMOST: A Hybrid Digital Signal Processor Transforms the Molonglo Observatory Synthesis Telescope
M. Bailes, A. Jameson, C. Flynn, T. Bateman, E. D. Barr, S. Bhandari, J. D. Bunton, M. Caleb, D. Campbell-Wilson, W. Farah, B. Gaensler, A. J. Green, R. W. Hunstead, F. Jankowski, E. F. Keane, V. Venkatraman Krishnan, Tara Murphy, M. O'Neill, S. Osłowski, A. Parthasarathy, V. Ravi, P. Rosado, D. Temby
Journal: Publications of the Astronomical Society of Australia / Volume 34 / 2017
Published online by Cambridge University Press: 13 October 2017, e045
The Molonglo Observatory Synthesis Telescope (MOST) is an 18000 m² radio telescope located 40 km from Canberra, Australia. Its operating band (820–851 MHz) is partly allocated to telecommunications, making radio astronomy challenging. We describe how the deployment of new digital receivers, Field Programmable Gate Array-based filterbanks, and server-class computers equipped with 43 Graphics Processing Units, has transformed the telescope into a versatile new instrument (UTMOST) for studying the radio sky on millisecond timescales. UTMOST has 10 times the bandwidth and double the field of view compared to the MOST, and voltage record and playback capability has facilitated rapid implementation of many new observing modes, most of which operate commensally. UTMOST can simultaneously excise interference, make maps, coherently dedisperse pulsars, and perform real-time searches of coherent fan-beams for dispersed single pulses. UTMOST operates as a robotic facility, deciding how to efficiently target pulsars and how long to stay on source via real-time pulsar folding, while searching for single pulse events. Regular timing of over 300 pulsars has yielded seven pulsar glitches and three Fast Radio Bursts during commissioning. UTMOST demonstrates that if sufficient signal processing is applied to voltage streams, innovative science remains possible even in hostile radio frequency environments.
First interferometric detections of Fast Radio Bursts
M. Caleb, C. Flynn, M. Bailes, E. D. Barr, T. Bateman, S. Bhandari, D. Campbell-Wilson, W. Farah, A. J. Green, R. W. Hunstead, A. Jameson, F. Jankowski, E. F. Keane, A. Parthasarathy, V. Ravi, P. A. Rosado, W. van Straten, V. Venkatraman Krishnan
Journal: Proceedings of the International Astronomical Union / Volume 13 / Issue S337 / September 2017
Published online by Cambridge University Press: 04 June 2018, pp. 322-323
Print publication: September 2017
The class of radio transients called Fast Radio Bursts (FRBs) encompasses enigmatic single pulses, each unique in its own way, hindering a consensus for their origin. The key to demystifying FRBs lies in discovering many of them in order to identify commonalities – and in real time, in order to find potential counterparts at other wavelengths. The recently upgraded UTMOST in Australia is undergoing a backend transformation to rise as a fast transient detection machine. The first interferometric detections of FRBs with UTMOST place their origin beyond the near-field region of the telescope, thus ruling out local sources of interference as a possible origin. We have localised these bursts much better than the ones discovered at the Parkes radio telescope and have plans to upgrade UTMOST to be capable of much better localisation still.
Exploratory factor analysis and reliability of the Primary Health Care Engagement (PHCE) Scale in rural and remote nurses: findings from a national survey
Julie G. Kosteniuk, Norma J. Stewart, Chandima P. Karunanayake, Erin C. Wilson, Kelly L. Penz, Judith C. Kulig, Kelley Kilpatrick, Ruth Martin-Misener, Debra G. Morgan, Martha L.P. MacLeod
Journal: Primary Health Care Research & Development / Volume 18 / Issue 6 / November 2017
Published online by Cambridge University Press: 27 July 2017, pp. 608-622
The study purpose was to provide evidence of validity for the Primary Health Care Engagement (PHCE) Scale, based on exploratory factor analysis and reliability findings from a large national survey of regulated nurses residing and working in rural and remote Canadian communities.
There are currently no published provider-level instruments to adequately assess delivery of community-based primary health care, relevant to ongoing primary health care (PHC) reform strategies across Canada and elsewhere. The PHCE Scale reflects a contemporary approach that emphasizes community-oriented and community-based elements of PHC delivery.
Data from the pan-Canadian Nursing Practice in Rural and Remote Canada II (RRNII) survey were used to conduct an exploratory factor analysis and evaluate the internal consistency reliability of the final PHCE Scale.
The RRNII survey sample included 1587 registered nurses, nurse practitioners, licensed practical nurses, and registered psychiatric nurses residing and working in rural and remote Canada. Exploratory factor analysis identified an eight-factor structure across 28 items overall, and good internal consistency reliability was indicated by an α estimate of 0.89 for the final scale. The final 28-item PHCE Scale includes three of four elements in a contemporary approach to PHC (accessibility/availability, community participation, and intersectoral team) and most community-oriented/based elements of PHC (interdisciplinary collaboration, person-centred, continuity, population orientation, and quality improvement). We recommend additional psychometric testing in a range of health care providers and settings, as the PHCE Scale shows promise as a tool for health care planners and researchers to test interventions and track progress in primary health care reform.
Capacity building for conservation: problems and potential solutions for sub-Saharan Africa
M. J. O'Connell, O. Nasirwa, M. Carter, K. H. Farmer, M. Appleton, J. Arinaitwe, P. Bhanderi, G. Chimwaza, J. Copsey, J. Dodoo, A. Duthie, M. Gachanja, N. Hunter, B. Karanja, H. M. Komu, V. Kosgei, A. Kuria, C. Magero, M. Manten, P. Mugo, E. Müller, J. Mulonga, L. Niskanen, J. Nzilani, M. Otieno, N. Owen, J. Owuor, S. Paterson, S. Regnaut, R. Rono, J. Ruhiu, J. Theuri Njoka, L. Waruingi, B. Waswala Olewe, E. Wilson
Journal: Oryx / Volume 53 / Issue 2 / April 2019
Print publication: April 2019
To achieve their conservation goals individuals, communities and organizations need to acquire a diversity of skills, knowledge and information (i.e. capacity). Despite current efforts to build and maintain appropriate levels of conservation capacity, it has been recognized that there will need to be a significant scaling-up of these activities in sub-Saharan Africa. This is because of the rapid increase in the number and extent of environmental problems in the region. We present a range of socio-economic contexts relevant to four key areas of African conservation capacity building: protected area management, community engagement, effective leadership, and professional e-learning. Under these core themes, 39 specific recommendations are presented. These were derived from multi-stakeholder workshop discussions at an international conference held in Nairobi, Kenya, in 2015. At the meeting 185 delegates (practitioners, scientists, community groups and government agencies) represented 105 organizations from 24 African nations and eight non-African nations. The 39 recommendations constituted six broad types of suggested action: (1) the development of new methods, (2) the provision of capacity building resources (e.g. information or data), (3) the communication of ideas or examples of successful initiatives, (4) the implementation of new research or gap analyses, (5) the establishment of new structures within and between organizations, and (6) the development of new partnerships. A number of cross-cutting issues also emerged from the discussions: the need for a greater sense of urgency in developing capacity building activities; the need to develop novel capacity building methodologies; and the need to move away from one-size-fits-all approaches.
Conceptual design of initial opacity experiments on the national ignition facility
Solved and Unsolved problems in Plasma Physics
R. F. Heeter, J. E. Bailey, R. S. Craxton, B. G. DeVolder, E. S. Dodd, E. M. Garcia, E. J. Huffman, C. A. Iglesias, J. A. King, J. L. Kline, D. A. Liedahl, P. W. McKenty, Y. P. Opachich, G. A. Rochau, P. W. Ross, M. B. Schneider, M. E. Sherrill, B. G. Wilson, R. Zhang, T. S. Perry
Journal: Journal of Plasma Physics / Volume 83 / Issue 1 / February 2017
Published online by Cambridge University Press: 09 January 2017, 595830103
Accurate models of X-ray absorption and re-emission in partly stripped ions are necessary to calculate the structure of stars, the performance of hohlraums for inertial confinement fusion and many other systems in high-energy-density plasma physics. Despite theoretical progress, a persistent discrepancy exists with recent experiments at the Sandia Z facility studying iron in conditions characteristic of the solar radiative–convective transition region. The increased iron opacity measured at Z could help resolve a longstanding issue with the standard solar model, but requires a radical departure for opacity theory. To replicate the Z measurements, an opacity experiment has been designed for the National Ignition Facility (NIF). The design uses established techniques scaled to NIF. A laser-heated hohlraum will produce X-ray-heated uniform iron plasmas in local thermodynamic equilibrium (LTE) at temperatures ${\geqslant}150$ eV and electron densities ${\geqslant}7\times 10^{21}~\text{cm}^{-3}$. The iron will be probed using continuum X-rays emitted in a ${\sim}200$ ps, ${\sim}200~\mu\text{m}$ diameter source from a 2 mm diameter polystyrene (CH) capsule implosion. In this design, $2/3$ of the NIF beams deliver 500 kJ to the ${\sim}6$ mm diameter hohlraum, and the remaining $1/3$ directly drive the CH capsule with 200 kJ. Calculations indicate this capsule backlighter should outshine the iron sample, delivering a point-projection transmission opacity measurement to a time-integrated X-ray spectrometer viewing down the hohlraum axis. Preliminary experiments to develop the backlighter and hohlraum are underway, informing simulated measurements to guide the final design.
The Australian Square Kilometre Array Pathfinder: Performance of the Boolardy Engineering Test Array
Australian SKA Pathfinder
D. McConnell, J. R. Allison, K. Bannister, M. E. Bell, H. E. Bignall, A. P. Chippendale, P. G. Edwards, L. Harvey-Smith, S. Hegarty, I. Heywood, A. W. Hotan, B. T. Indermuehle, E. Lenc, J. Marvil, A. Popping, W. Raja, J. E. Reynolds, R. J. Sault, P. Serra, M. A. Voronkov, M. Whiting, S. W. Amy, P. Axtens, L. Ball, T. J. Bateman, D. C.-J. Bock, R. Bolton, D. Brodrick, M. Brothers, A. J. Brown, J. D. Bunton, W. Cheng, T. Cornwell, D. DeBoer, I. Feain, R. Gough, N. Gupta, J. C. Guzman, G. A. Hampson, S. Hay, D. B. Hayman, S. Hoyle, B. Humphreys, C. Jacka, C. A. Jackson, S. Jackson, K. Jeganathan, J. Joseph, B. S. Koribalski, M. Leach, E. S. Lensson, A. MacLeod, S. Mackay, M. Marquarding, N. M. McClure-Griffiths, P. Mirtschin, D. Mitchell, S. Neuhold, A. Ng, R. Norris, S. Pearce, R. Y. Qiao, A. E. T. Schinckel, M. Shields, T. W. Shimwell, M. Storey, E. Troup, B. Turner, J. Tuthill, A. Tzioumis, R. M. Wark, T. Westmeier, C. Wilson, T. Wilson
Published online by Cambridge University Press: 09 September 2016, e042
We describe the performance of the Boolardy Engineering Test Array, the prototype for the Australian Square Kilometre Array Pathfinder telescope. Boolardy Engineering Test Array is the first aperture synthesis radio telescope to use phased array feed technology, giving it the ability to electronically form up to nine dual-polarisation beams. We report the methods developed for forming and measuring the beams, and the adaptations that have been made to the traditional calibration and imaging procedures in order to allow BETA to function as a multi-beam aperture synthesis telescope. We describe the commissioning of the instrument and present details of Boolardy Engineering Test Array's performance: sensitivity, beam characteristics, polarimetric properties, and image quality. We summarise the astronomical science that it has produced and draw lessons from operating Boolardy Engineering Test Array that will be relevant to the commissioning and operation of the final Australian Square Kilometre Array Pathfinder telescope.
Predictors of community-associated Staphylococcus aureus, methicillin-resistant and methicillin-susceptible Staphylococcus aureus skin and soft tissue infections in primary-care settings
G. C. LEE, R. G. HALL, N. K. BOYD, S. D. DALLAS, L. C. DU, L. B. TREVIÑO, C. RETZLOFF, S. B. TREVIÑO, K. A. LAWSON, J. P. WILSON, R. J. OLSEN, Y. WANG, C. R. FREI
Journal: Epidemiology & Infection / Volume 144 / Issue 15 / November 2016
Published online by Cambridge University Press: 04 August 2016, pp. 3198-3204
Skin and soft tissue infections (SSTIs) due to Staphylococcus aureus have become increasingly common in the outpatient setting; however, risk factors for differentiating methicillin-resistant S. aureus (MRSA) and methicillin-susceptible S. aureus (MSSA) SSTIs are needed to better inform antibiotic treatment decisions. We performed a case-case-control study within 14 primary-care clinics in South Texas from 2007 to 2015. Overall, 325 patients [S. aureus SSTI cases (case group 1, n = 175); MRSA SSTI cases (case group 2, n = 115); MSSA SSTI cases (case group 3, n = 60); uninfected control group (control, n = 150)] were evaluated. Each case group was compared to the control group, and then qualitatively contrasted to identify unique risk factors associated with S. aureus, MRSA, and MSSA SSTIs. Overall, prior SSTIs [adjusted odds ratio (aOR) 7·60, 95% confidence interval (CI) 3·31–17·45], male gender (aOR 1·74, 95% CI 1·06–2·85), and absence of healthcare occupation status (aOR 0·14, 95% CI 0·03–0·68) were independently associated with S. aureus SSTIs. The only unique risk factor for community-associated (CA)-MRSA SSTIs was a high body weight (⩾110 kg) (aOR 2·03, 95% CI 1·01–4·09).
Predicting the diagnosis of autism in adults using the Autism-Spectrum Quotient (AQ) questionnaire
K. L. Ashwood, N. Gillan, J. Horder, H. Hayward, E. Woodhouse, F. S. McEwen, J. Findon, H. Eklund, D. Spain, C. E. Wilson, T. Cadman, S. Young, V. Stoencheva, C. M. Murphy, D. Robertson, T. Charman, P. Bolton, K. Glaser, P. Asherson, E. Simonoff, D. G. Murphy
Journal: Psychological Medicine / Volume 46 / Issue 12 / September 2016
Published online by Cambridge University Press: 29 June 2016, pp. 2595-2604
Many adults with autism spectrum disorder (ASD) remain undiagnosed. Specialist assessment clinics enable the detection of these cases, but such services are often overstretched. It has been proposed that unnecessary referrals to these services could be reduced by prioritizing individuals who score highly on the Autism-Spectrum Quotient (AQ), a self-report questionnaire measure of autistic traits. However, the ability of the AQ to predict who will go on to receive a diagnosis of ASD in adults is unclear.
We studied 476 adults, seen consecutively at a national ASD diagnostic referral service for suspected ASD. We tested AQ scores as predictors of ASD diagnosis made by expert clinicians according to International Classification of Diseases (ICD)-10 criteria, informed by the Autism Diagnostic Observation Schedule-Generic (ADOS-G) and Autism Diagnostic Interview-Revised (ADI-R) assessments.
Of the participants, 73% received a clinical diagnosis of ASD. Self-report AQ scores did not significantly predict receipt of a diagnosis. While AQ scores provided high sensitivity of 0.77 [95% confidence interval (CI) 0.72–0.82] and positive predictive value of 0.76 (95% CI 0.70–0.80), the specificity of 0.29 (95% CI 0.20–0.38) and negative predictive value of 0.36 (95% CI 0.22–0.40) were low. Thus, 64% of those who scored below the AQ cut-off were 'false negatives' who did in fact have ASD. Co-morbidity data revealed that generalized anxiety disorder may 'mimic' ASD and inflate AQ scores, leading to false positives.
The AQ's utility for screening referrals was limited in this sample. Recommendations supporting the AQ's role in the assessment of adult ASD, e.g. UK NICE guidelines, may need to be reconsidered. | CommonCrawl |
01.12.2020 | Original Article | Issue 1/2020 | Open Access
Flow Resistance Modeling for Coolant Distribution within Canned Motor Cooling Loops
Shengde Wang, Zhenqiang Yao, Hong Shen
Canned motor pumps have been used in third-generation nuclear power plants, such as the Westinghouse AP1000 advanced passive plant [1] and the SNPTC (State Nuclear Power Technology Corporation) CAP1400 advanced passive plant [2], to circulate primary reactor coolant throughout the reactor core. The level of safety has been greatly improved by the elimination of the seal leakage risk, achieved by replacing the dynamic seal with a static seal when the pumped liquid is introduced into the inner clearance of the canned motor. The fluid in the gap forms an internal cooling system in the canned motor and circulates, driven by the auxiliary impeller [3], as shown in Figure 1.
Figure 1: Inner circulation of the canned motor pump
The fluid in the motor clearance plays an important role in the cooling of windings and lubrication of bearings. The coolant flow distribution between the upper coil cooling loop and lower bearing lubricating loop depends on the flow resistance.
Since the motor can and flywheel regions take up the main part of the pump, the dominant pattern of the inner coolant flow is Taylor–Couette–Poiseuille (TCP) flow, which is described as axial liquid flow between two concentric cylinders with a rotating inner surface, as shown in the left portion of Figure 2.
Figure 2: Schematics of TCP flow and the computational domain
The axial resistance of TCP flow is another research topic that is important in the design of canned motors, since the annular gap determines the flow distribution between the upward and downward channels. Yamada derived a theoretical formula to estimate the resistance of flow through an annulus with an inner rotating cylinder, based on an assumed velocity distribution in the gap, and validated it experimentally under limited hydraulic parametric conditions [4]. Nouri investigated the laminar and turbulent flow of Newtonian and non-Newtonian fluids in a concentric annulus with rotation of the inner cylinder [5]. He found that, when the dimensionless Rossby numbers were similar, the swirl velocity profiles were the same, and, therefore, the axial drag coefficients were also similar. Kim measured the pressure losses of different working fluids in TCP flow and drew the conclusion that the increase in flow disturbance caused by the Taylor vortex resulted in an increase in the skin friction coefficient, but he did not provide an in-depth explanation [6–8]. Kristiawan investigated the components of the wall shear rate via experiments and found that the axial distributions of the wall shear rate components averaged over the perimeter were similar to the distribution in steady Taylor vortices [9]. However, Kristiawan's tests were performed in slow axial flow and cannot be extended to the turbulent state. Huisman studied the velocity distribution in TCP flow by laser Doppler anemometry and corrected the curvature effect via the use of a ray-tracer [10]. However, his research subject was flow with an outer cylinder rotation. Hashemian predicted the velocity profiles and frictional pressure losses in annular yield-power-law (YPL) fluid flow by numerical methods, and found that the effect of eccentricity was more significant for the reduction of the pressure loss than the radius ratio [11]. Aubert also investigated TCP flow experimentally through laser Doppler velocimetry (LDV) measurements [12]. He put the emphasis on the velocity profiles and heat transfer characteristics, but did not discuss the axial pressure losses. Yew Chuan measured the pressure loss in the rotor-stator gap [13] and presented the entry effect of air flow entering the annular gap with inner cylinder rotation [14]. Yew's study demonstrated that the flow resistance caused by rotation could be significant, and concluded that the friction factor of laminar flow was not affected by rotation. Sun Chao summarized measuring techniques for turbulent Taylor–Couette flow and provided an experimental point of view on high precision experimental setups [15]. However, the experiments in the review had no axial flow, which indicates that TCP flow measurements are far from comprehensive and mature. Nouri-Borujerdi experimentally studied the friction factor and heat transfer behavior of TCP flow with smooth and slotted surfaces. The results showed that the slot depth enhanced the friction factor, especially at higher effective Reynolds numbers [16], and optimal geometric parameters for the channel were recommended [17].
Most of the studies on TCP flow mentioned above used experimental methods, while numerical methods have been adopted with the progress of computer science in recent years. As early as 1998, Azouz made an evaluation of three turbulence models for turbulent flow in concentric annuli [ 18 ]. He observed that the one-layer mixing length model, two-layer mixing length model, and two-equation model performed similarly when simulating TCP flow. With the improvement of computer performance, direct numerical simulations (DNS) and large eddy simulations (LES) have become advanced methods for studying fluid dynamics with high precision. Rodolfo Ostilla-Mónico is a representative scholar that has performed DNS on Taylor Couette flow from the aspects of angular momentum transfer [ 19 ], boundary layer dynamics [ 20 ], radius ratio considerations [ 21 ], and the effects of domain size [ 22 ]. In addition, he provided flow features in detail. Rodolfo also explored the large-scale structure of Taylor–Couette turbulence through LES and regarded it as a useful tool for fast exploration to check for the presence of axially pinned large-scale structures [ 23 ]. Akihiro Ohsawa studied the through-flow effects of TCP flow with Reynolds numbers that varied from 500 to 8000 through LES, and found that the friction factor in the axial direction varied with the flow state [ 24 ]. Dhaval Paghdar investigated the effects of angular velocity and eccentricity on Taylor–Couette flow and revealed the conditions for the occurrence of Taylor vortices [ 25 ]. Though DNS and LES provide more accurate solutions in the time domain, their limitation is obvious; that is, the long calculation time required when the Reynolds number is large leads to less application in engineering design.
Compared with DNS and LES, the Reynolds-Averaged Navier–Stokes (RANS) method has advantages in computing efficiency and covers a wider range of dynamic parameters. Jacobs compared the results of RANS models with DNS and experiments and found that all models, except for the standard k-ε model, predicted mass flow rates that agreed with the experimental values, but failed to accurately predict the near-wall turbulence dissipation rate [26]. To get more accuracy from the shear stress transport (SST) k-ω model, Dhakal made a modified model, which produced superior performance, especially in the presence of rapid rotation or strong streamline curvature [27, 28]. Neto also compared several RANS models in TCP flow research and indicated that the results of the RANS models were very close to each other. However, the SST k-ω model and the Reynolds stress model (RSM) showed slightly better predictions in some aspects [29]. David Shina modeled wide gap Taylor–Couette flow by using an implicit finite volume RANS scheme with a realizable k-ε model [30] and validated the numerical method by experiments [31, 32]. Through the above-mentioned studies, RANS models were proven appropriate for simulating TCP flow in a highly turbulent state. Among the RANS models, the SST k-ω and RSM models are the most accurate.
As previously mentioned, numerous studies on the Taylor–Couette system without axial flow between the two coaxial cylinders have been performed experimentally and numerically. However, when additional axial flow was present, existing discussions became inadequate, especially in highly turbulent states in both the axial and circumferential directions. The rotational speed of the Reactor Coolant Pump (RCP) in the CAP1400 was approximately 1450 r/min, and the peripheral velocity of the inner can was high due to the large inner radius of the annulus, which made the rotational Reynolds number reach up to 5.0 × 10⁵. Furthermore, the mass flow rate of the water in the narrow canned motor gap was above 35 kg/s, which made the axial Reynolds number reach up to 4.2 × 10⁴. The coupled turbulent flow poses a challenge for predicting the axial flow resistance of TCP flow.
In this study, the axial flow resistance characteristics of TCP flow in a canned motor under large-Reynolds-number turbulent conditions were investigated via numerical simulation and experiments. A periodic calculation domain and the SST k-ω turbulence model were applied in the numerical modeling, which was validated by Yamada's experiments [4], Aubert's measurements [12], and the designed experimental test. Using simulations and experiments, a simplified model was proposed to calculate the axial friction coefficient of TCP flow. The investigation provides the basis for the inner cooling clearance design of the canned motor pump and helps to avoid unstable flow to ensure steady operation of the motor.
2 Flow Configuration and Numerical Methods
2.1 Geometry, Flow Governing Equations, and Boundary Conditions
In this study, the typical TCP flow style is considered as shown in Figure 2, in which the inner cylinder is the only rotating wall. The geometry of TCP flow is characterized by three parameters: the radius of the inner cylinder, \(r_{i}\), the radius of the outer cylinder, \(r_{o}\), and the axial length, \(L\). The inner wall has angular velocity, \(\omega\), and the flow in the gap can be described by the following parameters:
The characteristic dimension, \(d = r_{o} - r_{i}\); the radius ratio, \(\eta = r_{i}/r_{o}\); the axial Reynolds number, \(Re_{a} = \bar{U}_{a} d/\nu\); and the rotational Reynolds number, \(Re_{t} = r_{i} \omega d/\nu\), where \(\nu\) denotes the kinematic viscosity of the fluid, and \(\bar{U}_{a}\) represents the mean axial velocity.
The fluid in the annular gap was water, and the thermal properties of the water were fixed at 30 °C, which was the same as the experimental tests. The flow was considered to be steady-state and incompressible in the isothermal condition. The governing equations to model the flow were based on the Navier–Stokes equations, which are as follows:
Continuity equation:
$$\nabla \cdot \varvec{U} = 0.$$
Momentum equation:
$$\rho \left[ {\left( {\varvec{U} \cdot \nabla } \right)\varvec{U}} \right] = - \nabla P + \mu \nabla^{2} \varvec{U} + \varvec{f}.$$
In the equations, \(\rho\) represents the fluid density, \(P\) is the pressure, \(\varvec{U}\) is the velocity, and \(\varvec{f}\) is the source term.
According to the work of Rodolfo Ostilla-Mónico [22], the computational domain could be simplified by using a rotational periodic boundary condition, as shown in Figure 2, to reduce the computational cost. The assumption of periodicity implied that the velocity components repeated themselves in either the axial or rotational direction. In the rotational direction, the pressure was also periodic; in the axial direction, the pressure itself was not periodic, but the pressure drop between modules was. The treatment of the axial pressure drop followed the method provided by Ansys [33]. In this study, the angle, \(\alpha\), between two periodic sections was set to 15°. The inner wall was designated as the rotating wall, and the outer wall was designated as the stationary wall.
2.2 Turbulence Model and Solver Definitions
An efficient way to handle turbulence is to use a turbulence model. The most widely used turbulence models are derived from the Reynolds-Averaged Navier–Stokes (RANS) equations, which introduce a Reynolds stress term, as shown below:
$$\frac{{\partial \left( {\rho u_{i} } \right)}}{{\partial x_{i} }} = 0,$$
$$\frac{\partial \left( \rho u_{i} u_{j} \right)}{\partial x_{j}} = - \frac{\partial P}{\partial x_{i}} + \frac{\partial }{\partial x_{j}}\left[ \mu \left( \frac{\partial u_{i}}{\partial x_{j}} + \frac{\partial u_{j}}{\partial x_{i}} - \frac{2}{3}\delta_{ij} \frac{\partial u_{l}}{\partial x_{l}} \right) \right] - \frac{\partial \left( \rho \overline{u'_{i} u'_{j}} \right)}{\partial x_{j}}.$$
Eq. (3) and Eq. (4) are called the Reynolds-Averaged Navier–Stokes (RANS) equations and have the same form as the instantaneous Navier–Stokes equations, but the velocities and other variables represent time-averaged values. The last term of Eq. (4) represents the effects of turbulence, and the term \(\rho \overline{u'_{i} u'_{j}}\) must be modeled to close the equation. Two-equation turbulence models have the advantages of robustness, economy, and reasonable accuracy for a wide range of turbulent flows, which makes them popular in industrial flow and heat transfer simulations.
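For context, the two-equation models discussed here close this term through the Boussinesq eddy-viscosity hypothesis; the relation is not written out in the original text, but in its standard form it reads
$$- \rho \overline{u'_{i} u'_{j}} = \mu_{t}\left(\frac{\partial u_{i}}{\partial x_{j}} + \frac{\partial u_{j}}{\partial x_{i}}\right) - \frac{2}{3}\left(\rho k + \mu_{t}\frac{\partial u_{l}}{\partial x_{l}}\right)\delta_{ij},$$
where \(\mu_{t}\) is the turbulent (eddy) viscosity computed from the transported turbulence quantities (\(k\) and \(\omega\) in the SST model) and \(k\) is the turbulent kinetic energy.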
The SST k- ω model in Ansys-Fluent, which was proposed by Menter [ 34 ], was modified and applied to close the equation and simulate the flow dynamics of the TCP system. A limiter to the eddy-viscosity formulation was set in the model to obtain proper transport behavior. This limiter can be used to avoid over-prediction of the eddy-viscosity [ 26 ]. This feature also made the SST k- ω model more accurate and reliable for a wider class of flows, such as adverse pressure gradient flows, airfoils, and transonic shock waves. The works of Viera Neto, Jacobs, and Dhakal [ 26 – 29 ] mentioned in the introduction also showed the rationality of using the SST k- ω model.
A coupled scheme of pressure-velocity coupling was chosen for the study, and significantly improved the rate of solution convergence. PRESTO was adopted as the pressure scheme in the spatial discretization, and is an alternative second order scheme that is often useful when strong body forces exist. A third-order convection scheme conceived from the original MUSCL (Monotone Upstream-Centered Schemes for Conservation Laws) [ 35 ] was applied to the momentum, turbulent kinetic energy, and turbulent dissipation rate discretization. The least squares cell-based scheme was used in the gradient spatial discretization, which was less expensive and selected as the default gradient method in the Fluent solver.
2.3 Meshing and Grid Independent Analysis
The computational domain was created based on the experimental facility and meshed with the ICEM CFD software. The radii of the inner and outer cylinders were 76.5 mm and 79 mm, respectively. The gap thickness, d, was 2.5 mm, and the axial length, L, was 25 mm (i.e., ten times the d). The rotational speed of the inner cylinder was set to 1231.2 r/min, and the mass flow rate was 0.1 kg/s per circumferential period. The temperature of the water was set to 30 °C, the density, ρ, was 995.37 kg/m 3, and the dynamic viscosity, μ, was \(7.49 \times 10^{ - 6}\) Pa·s.
In the simulation, three main parameters were used to describe the grid density, namely the node numbers in the radial, circumferential, and axial directions, as shown in Figure 3. The value of y+ near the boundary wall was set to a value less than three to meet the requirements of the turbulent model. The grid inflation ratio in the radial direction was set to 1.1. Four sets of grids were designed to analyze the sensitivity. Table 1 shows the mesh parameters and iteration times. The calculations were performed on a computer with an Intel i7-6700k CPU, and 32 GB of RAM.
Mesh parameters and generation
Table 1 Mesh parameters and iteration times (node counts N c and N a, and time cost in min per 2000 iteration steps, for the four grids)
The axial shear stress, τ, and the axial velocity distribution were chosen for the grid independence analysis. The axial shear stress, τ, is defined by Eq. ( 5):
$$\tau_{i/o} = \frac{{F_{ai/o} }}{{A_{i/o} }} .$$
In Eq. ( 5), F a is the axial force induced by the water flow at the wall, A is the surface area, and i/ o refers to the inner or outer cylinder.
The non-dimensional axial velocity is defined by Eq. ( 6):
$$U_{a}^{ + } = \frac{{U_{a} }}{{U_{\tau i/o} }} .$$
In Eq. ( 6), U a is the physical axial velocity, and U τ is the wall friction velocity defined by Eq. ( 7):
$$U_{\tau i/o} = \sqrt {\frac{{\tau_{i/o} }}{\rho }} .$$
The non-dimensional wall distance, r +, is defined by Eq. ( 8):
$$r^{ + } = \left| {r - r_{i/o} } \right|\frac{{U_{\tau i/o} }}{\nu }.$$
In Eq. ( 8), r is the radial coordinate, and r i/o is the radius of the inner or outer cylinder.
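A minimal sketch of how Eqs. ( 5)–( 8) could be evaluated is given below; the shear stress, velocity, and position values passed in are hypothetical and serve only to illustrate the non-dimensionalization.

```python
import math

def wall_units(tau_w, rho, nu, u_axial, r, r_wall):
    """Non-dimensionalize an axial velocity sample using Eqs. (6)-(8)."""
    u_tau = math.sqrt(tau_w / rho)             # friction velocity, Eq. (7)
    u_plus = u_axial / u_tau                   # Eq. (6)
    r_plus = abs(r - r_wall) * u_tau / nu      # Eq. (8)
    return u_plus, r_plus

# Hypothetical near-wall sample, SI units
u_plus, r_plus = wall_units(tau_w=5.0, rho=995.37, nu=8.0e-7,
                            u_axial=1.2, r=0.0766, r_wall=0.0765)
print(f"U+ = {u_plus:.2f}, r+ = {r_plus:.1f}")
```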
The axial shear stresses of the four mesh sets in Table 1 were calculated, with the results shown in Figure 4(a). The shear stress increased with the grid density until it reached a steady value. The axial velocity distributions along the radial gap are represented in Figure 4(b). As the partial enlargement of the curves shows, the results of the high-density mesh 3 and mesh 4 differed only slightly. Considering both computational efficiency and accuracy, the third mesh density was adopted to simulate the flow.
Results of the different meshes: ( a) Axial shear stresses; ( b) Axial velocity distributions
3 Experimental Regime and Method
In order to identify the axial flow resistance in the gap flow of two concentric cylinders with a rotating inner wall, an inner flow test rig for canned motors was designed as shown in Figure 5. The experimental device consisted of two main parts, a rotor shaft and a shell, both made of stainless steel. The shaft was driven by a motor, and had a rotational speed ranging from 500 r/min to 1500 r/min. A tachometer was employed between the shaft and rotor to measure the motor revolutions. Ten pressure sensors were placed along the axial direction to measure the axial pressure drop. A pump was used to fill the equipment with water, adjust the flow rate, and maintain a relatively high pressure to prevent cavitation. A flowmeter installed in the pump export pipeline was used to measure the volumetric flow rate.
Experimental equipment and schematic
The axial flow frictional coefficient is defined by Eq. ( 9):
$$C_{f} = \frac{\text{d}P}{\text{d}L} \cdot \frac{d}{{\rho \bar{U}_{a}^{2} }} ,$$
where \(\bar{U}_{a}\) is the average axial velocity calculated as \(Q_{V}/S\), in which Q V is the volumetric flow rate and S is the cross-sectional area of the annulus. The axial distribution of static pressure on the outer cylinder was shown to be linear, in accordance with the work of Nouri [ 36 ]. The pressure gradient \({\text{d}}P/{\text{d}}L\), which is the pressure loss along the axial direction, can be determined experimentally from Eq. ( 10).
$$\frac{{{\text{d}}P}}{{{\text{d}}L}} = \frac{1}{2}\left( {\frac{{2P_{3} + P_{4} - 3P_{2} }}{4\Delta L} + \frac{{2P^{\prime}_{3} + P^{\prime}_{4} - 3P^{\prime}_{2} }}{4\Delta L}} \right) - \rho g.$$
In Eq. ( 10), the measured pressure values of sensor 2, sensor 3, and sensor 4 were used to mitigate the unsteady values of sensors 1 and 5 due to end effects. Δ L is the space between sensors and \(g\) is the gravitational acceleration.
Based on the experimental measurements of the flow rate and the pressure, the relationship between the axial flow resistance C f and the axial Reynolds number Re a can be obtained in terms of Eqs. ( 9) and ( 10).
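The sketch below shows one way Eqs. ( 9) and ( 10) could be combined to post-process the sensor readings; the units (Pa, m, m³/s), the grouping of the two sensor rows, and the sign conventions are assumptions made here for illustration only.

```python
def axial_friction_coefficient(p, p_prime, dL, d, rho, q_v, area, g=9.81):
    """Axial frictional coefficient C_f from pressure-sensor data, Eqs. (9)-(10).

    p, p_prime : pressures (Pa) of sensors 2, 3 and 4 on the two sensor rows
    dL         : sensor spacing (m),  d : gap width (m)
    q_v        : volumetric flow rate (m^3/s), area : annulus cross-section (m^2)
    """
    p2, p3, p4 = p
    p2p, p3p, p4p = p_prime
    # Axial pressure gradient, Eq. (10)
    dp_dl = 0.5 * ((2 * p3 + p4 - 3 * p2) / (4 * dL)
                   + (2 * p3p + p4p - 3 * p2p) / (4 * dL)) - rho * g
    u_a = q_v / area                       # mean axial velocity
    return dp_dl * d / (rho * u_a ** 2)    # Eq. (9)
```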
4 Results and Discussion
4.1 Experimental Results and Validation of the Simulation Method
The axial pressure-drop performance of the TCP flow was tested at five rotational speeds, which were transformed into rotational Reynolds numbers. The axial flow rates were set in the range of 0.96–2.16 kg/s, and transformed into axial Reynolds numbers. Both the experiments and simulations were carried out under the same parameters.
The results of the designed experimental test and simulation are shown in Figure 6, where the scatter points represent the experimental data and the dashed lines represent the computed results. With a fixed axial flow rate, the frictional coefficient increased with the rotational Reynolds number, which indicated that the cylinder rotation could lead to an increase in the axial flow resistance. It was also observed that the flow frictional coefficient decreased with increasing axial Reynolds number, and that the decline became steeper with the increase in rotational Reynolds number.
Validation of the simulated C f against Re a and Re t with η = 0.968
The comparison displayed in Figure 6 showed good consistency between the experiments and simulations. Due to the test rig limitations, the radius ratio and axial Reynolds number were not large enough to make an analogy to the working situation of the RCP. In order to validate the numerical method for a wider scope, the experimental results of Yamada [ 4 ], with different radius ratios of 0.897 and 0.971 and a larger axial Reynolds number, were further compared with the results of the simulation.
The comparison between the simulation and Yamada's test is given in Figure 7. The simulated results agreed well with the experiments for different radius ratios and a larger axial Reynolds number, which indicates that the numerical method was effective in predicting the axial frictional coefficients of TCP flow.
Simulated C f compared with experiments: ( a) η = 0.897; ( b) η = 0.971
In addition to macro validation through the curves representing the relationship between C f and Re a, the simulated velocity profile was compared with the test results from Aubert [ 12 ] by means of a two-component LDV measurement. The normalized axial velocity and radius defined by Eq. ( 11) and Eq. ( 12) were used in the comparison:
$$U_{a}^{ *} = \frac{{U_{a} }}{{\bar{U}_{a} }} ,$$
$$r^{ *} = \frac{{r - r_{i} }}{d}.$$
The comparison of the axial velocity distribution from the simulations and Aubert's experiments are given in Figure 8. As shown in the figure, the flow divided into three regions: two boundary layers and a central region, and the simulated data were in good agreement with the measurements in the central region. The test was invalid in the boundary layers due to device constraints such as the wall curvature.
Simulated axial velocity profile compared with Aubert's experiments: ( a) Re t = 16971; ( b) Re t = 25140
A conclusion could be drawn from the above analysis that the numerical method, with periodic boundary conditions and the SST k- ω turbulence model, was effective in modeling and simulating TCP flow over a wide range when the rotational Reynolds number was in the turbulent state.
4.2 Factors Affecting the Axial Flow Resistance in TCP Flow
Two parameters, namely, the radius ratio and rotational Reynolds number, noted in the test and calculation, were investigated in detail by numerical methods.
Firstly, the radius ratio effect was examined. A total of four annuluses were designed based on the dimensions of the experimental device, with the same inner cylinder radius but various outer cylinder radii. The parameters are illustrated in Table 2.
Table 2 Parameters of TCP flow with different radius ratios (inner cylinder radius r i fixed, outer cylinder radius varied)
The frictional coefficients under two rotational speeds were calculated in each case, as shown in Figure 9. It can be seen that the lines clearly gathered into two groups with different rotational Reynolds numbers, symbolizing the influence of the rotational speed. With the increase in axial flow rate, the frictional coefficient decreased steeply and tended to be steady. Within the radius ratio range from 0.950 to 0.987, the influence of the radius ratio on the frictional coefficient, C f, could be ignored.
Results of the frictional coefficients with different radius ratios
Secondly, the effect of cylinder rotation was analyzed. The axial velocity profile was extracted, since the velocity gradient directly impacted the flow resistance. Seven sets of simulations were performed based on the experimental parameters, and the results are presented in Figure 10. The axial velocity in Figure 10(a) was normalized by Eqs. ( 11) and ( 12), and Figure 10(b) was nondimensionalized by Eqs. ( 6) and ( 8).
Axial velocity distribution with different Re t when Re a=13777: ( a) Axial velocity normalized by Eqs. ( 11) and ( 12); ( b) Axial velocity nondimensionalized by Eqs. ( 6) and ( 8)
Due to the rotation of the inner wall, the flow field underwent great changes compared with the non-rotating situation. It can be seen from Figure 10(a) that the curves became flatter in the central region along the gap with the increase in rotational Reynolds number. This resulted in the growth of the velocity gradient near the wall. This change can be explained by the transport theory of Paoletti [ 37 ] and Brauckmann [ 38 ], where circumferential shear motion brings in extra energy and increases the intensity of the turbulence.
The velocity profile, compared with the wall law proposed by Coles [ 39 ], is illustrated in Figure 10(b). The axial velocity profile, which was consistent with the wall law when there was no cylinder motion, deviated from the log law with an increase in cylinder rotation. A larger rotational Reynolds number resulted in a bigger deviation due to the additional tangential component. The conclusion can be drawn that the axial flow was hindered by the rotary motion, and extra energy had to be expended to overcome the added resistance.
4.3 Prediction Model of the Axial Resistance in TCP Flow
As shown in Figure 6, Figure 7, and Figure 9, the relationship curves between the axial flow frictional coefficient and axial Reynolds number displayed two noticeable trends.
The frictional coefficient decreased and tended to be steady with the increase in axial Reynolds number;
The frictional coefficient could be considered infinitely large when the axial Reynolds number was close to zero.
These features make the curve appropriate for fitting by the power function. Suppose the curves meet the description of Eq. ( 13), in which the parameter B( Re t) symbolizes a function of the rotational Reynolds number, and C is the constant exponent. To simplify the solution of coefficients C and B, the natural logarithm of both sides of Eq. ( 13) is taken as denoted by Eq. ( 14):
$$C_{f} = B\left( {Re_{\text{t}} } \right) \cdot Re_{\text{a}}^{C} ,$$
$$\ln \left( {C_{f} } \right) = C\ln \left( {Re_{\text{a}} } \right) + { \ln }B\left( {Re_{\text{t}} } \right).$$
The experimental data in the rotary situation were also transformed by the natural logarithmic method and are presented in Figure 11. The points could be fitted linearly, and the slopes of the linear fittings appeared to be equal for different rotation speeds. The expression of the fitting function could be derived through the least squares method and is expressed in Eqs. ( 15) and ( 16), indicating the influence of the axial Reynolds number and rotational Reynolds number on the frictional coefficient in axial flow.
Logarithmic results of the experimental data and fitting
$$\ln \left( {C_{f} } \right) = - 1.027\ln \left( {Re_{\text{a}} } \right) + 1.234 \cdot Re_{\text{t}}^{0.135} ,$$
$$C_{f} = \exp \left( 1.234 \cdot Re_{\text{t}}^{0.135} \right) \cdot Re_{\text{a}}^{ - 1.027} .$$
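A short sketch of this fitting step is shown below; the (Re a, C f) pairs are hypothetical stand-ins for the measured points at one rotational speed and are used only to show how the exponent C and intercept ln B(Re t) of Eq. ( 14) would be recovered by least squares.

```python
import numpy as np

# Hypothetical (Re_a, C_f) measurements at one rotational speed
re_a = np.array([4000.0, 6000.0, 8000.0, 12000.0, 16000.0])
c_f = np.array([0.30, 0.20, 0.15, 0.098, 0.073])

# Linear fit in log-log coordinates: ln(C_f) = C*ln(Re_a) + ln B(Re_t), cf. Eq. (14)
slope, intercept = np.polyfit(np.log(re_a), np.log(c_f), deg=1)
print(f"exponent C = {slope:.3f}, ln B(Re_t) = {intercept:.3f}")
```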
The established prediction models presented in Eqs. ( 15) and ( 16) were developed from the experimental results and subjected to certain axial Reynolds numbers. A total of four sets of simulations were carried out with a wider range of Re a values as shown in Figure 12. The dashed lines in the picture denote the results calculated with the empirical prediction model as expressed in Eqs. ( 15) and ( 16), and the lines with differently shaped points represent the simulated results from the Fluent software.
Simulated and empirical models over a vast range
Three distinct regions appeared with the increase in axial Reynolds number, as shown in Figure 12: a rotation dependent region, a transition region, and an ultimate region. In the rotation dependent region, cylinder rotation played an important role in affecting the flow resistance, while in the ultimate region, the curves converged together, and cylinder rotation had no apparent effect. This partitioning was consistent with the modes of the flow regimes proposed by Kaye [ 40 ]. Correspondingly, the flow regime in the rotation dependent region was turbulent flow plus Taylor vortices, and, in the ultimate region, fully developed turbulent flow. In the ultimate turbulent region, the prediction formula
$$C_{f} = 0.0675Re_{\text{a}}^{ - 0.24} ,$$
proposed by Yamada was relatively accurate compared with the simulation. By combining the empirical prediction model of Eqs. ( 15) and ( 16), valid under the testing conditions, with Yamada's experimental results from the higher axial flow rate region, the combined empirical model (illustrated by the dashed lines) could be used to predict the relationship between the frictional coefficient, C f, and the axial Reynolds number, Re a, as well as the rotational Reynolds number, Re t, with only slight differences from the relations simulated with the Fluent software. The transition point ( Re a′ , C f′ ) could be determined from Eqs. ( 16) and ( 17) as ( \(\exp \left( 1.568 \cdot Re_{\text{t}}^{0.135} + 3.425 \right)\), \(0.0675\exp \left( - 0.376 \cdot Re_{\text{t}}^{0.135} - 0.822 \right)\)).
From what has been discussed above, the axial flow resistance of TCP flow in the RCP could be calculated by Eqs. ( 16) or ( 17). Taking the canned motor of the CAP1400 RCP as an example, when the axial Reynolds number was about 4.2 × 10 4 and the rotational Reynolds number was around 4.4 × 10 5, the logarithmic value of the frictional coefficient was − 3.8 in terms of Eq. ( 16). Simulations were also performed to investigate the axial flow resistance of TCP flow in the RCP at rated rotational speeds, and a comparison between the simulation and the prediction model is presented in Figure 13. It can be seen from the figure that the calculation results from the two methods were close, and the error percentage was within 6%. However, the prediction from the empirical model was much more efficient.
Prediction of the axial flow resistance in the canned part of the RCP
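The piecewise use of Eqs. ( 16) and ( 17) can be sketched as follows; applied to the CAP1400 example above (Re a ≈ 4.2 × 10^4, Re t ≈ 4.4 × 10^5) it reproduces ln( C f) ≈ −3.8.

```python
import math

def predict_cf(re_a, re_t):
    """Piecewise axial friction coefficient: Eq. (16) below the transition
    point Re_a' and Yamada's correlation, Eq. (17), above it."""
    re_a_transition = math.exp(1.568 * re_t ** 0.135 + 3.425)
    if re_a < re_a_transition:
        return math.exp(1.234 * re_t ** 0.135) * re_a ** (-1.027)   # Eq. (16)
    return 0.0675 * re_a ** (-0.24)                                 # Eq. (17)

# CAP1400 canned-motor example from the text
cf = predict_cf(re_a=4.2e4, re_t=4.4e5)
print(f"ln(C_f) = {math.log(cf):.2f}")   # approximately -3.8
```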
5 Conclusions
The axial flow resistance characteristics of TCP flow were investigated by both experimental and numerical methods. The following conclusions were drawn.
Based on experiments analyzing the relationship between the frictional coefficient, C f, and axial Reynolds number, Re a, as well as the rotational Reynolds number, Re t, an empirical prediction model, \(C_{f} = { \exp }\left( {1.234 \cdot Re_{\text{t}}^{0.135} } \right) \cdot Re_{\text{a}}^{ - 1.027}\), was developed. Combined with Yamada's empirical formula \(C_{f} = 0.0675Re_{\text{a}}^{ - 0.24}\) for large axial Reynolds numbers, the resulting piecewise model (a polygonal approximation in logarithmic coordinates covering a wider range of axial Reynolds numbers) can be used to achieve a high-efficiency prediction with almost the same accuracy as the numerical simulation.
The cylinder rotation exhibited a strong influence that increased the axial flow resistance at low axial flow rates, which degraded until invisible with increasing axial Reynolds numbers.
The radius ratio had less effect on the axial flow resistance compared with the influence of the rotational Reynolds number.
The unstable flow region, indicated by the transition area of the empirical prediction model, should be avoided in the parameter design of the inner cooling clearance.
References
[1] T L Schulz. Westinghouse AP1000 advanced passive plant. Nuclear Engineering Design, 2006, 236(14-16): 1547-1557.
[2] M G Zheng, J Q Yan, S T Jun, et al. The general design and technology innovations of CAP1400. Engineering, 2016, 2(1): 97-102.
[3] Y P Zhuang. Application of AP1000 canned motor pump. Electric Power Construction, 2010, 31(11): 98-101. (in Chinese)
[4] Y Yamada. Resistance of a flow through an annulus with an inner rotating cylinder. Bulletin of JSME, 1962, 5(18): 302-310.
[5] J M Nouri, J H Whitelaw. Flow of Newtonian and non-Newtonian fluids in a concentric annulus with rotation of the inner cylinder. Journal of Fluids Engineering, 1994, 116(4): 821-827.
[6] Y J Kim, Y K Hwang. Experimental study on the vortex flow in a concentric annulus with a rotating inner cylinder. KSME International Journal, 2003, 17(4): 562-570.
[7] Y J Kim, S M Han, N S Woo. Flow of Newtonian and non-Newtonian fluids in a concentric annulus with a rotating inner cylinder. Korea-Australia Rheology Journal, 2013, 25(2): 77-85.
[8] N S Woo, Y J Kim, Y K Hwang. Experimental study on the helical flow in a concentric annulus with rotating inner cylinder. Journal of Fluids Engineering, 2005, 128(1): 113-117.
[9] M Kristiawan, T Jirout, V Sobolík. Components of wall shear rate in wavy Taylor–Couette flow. Experimental Thermal and Fluid Science, 2011, 35(7): 1304-1312.
[10] S G Huisman, D P M Van Gils, C Sun. Applying laser Doppler anemometry inside a Taylor–Couette geometry using a ray-tracer to correct for curvature effects. European Journal of Mechanics - B/Fluids, 2012, 36: 115-119.
[11] Y Hashemian, M Yu, S Miska, et al. Accurate predictions of velocity profiles and frictional pressure losses in annular YPL fluid flow. Journal of Canadian Petroleum Technology, 2014, 53(6): 355-363.
[12] A Aubert, S Poncet, P Le Gal, et al. Velocity and temperature measurements in a turbulent water-filled Taylor–Couette–Poiseuille system. International Journal of Thermal Sciences, 2015, 90: 238-247.
[13] Y C Chong, D A Staton, M A Mueller, et al. Pressure loss measurement in rotor-stator gap of radial flux electrical machines. Proceedings of the 2014 International Conference on Electrical Machines, September 2-5, 2014: 2172-2178.
[14] Y C Chong, D A Staton, M A Mueller, et al. An experimental study of rotational pressure loss in rotor-stator gap. Propulsion and Power Research, 2017, 6(2): 147-156.
[15] C Sun, Q Zhou. Experimental techniques for turbulent Taylor–Couette flow and Rayleigh–Bénard convection. Nonlinearity, 2014, 27(9): R89-R121.
[16] A Nouri-Borujerdi, M E Nakhchi. Friction factor and Nusselt number in annular flows with smooth and slotted surface. Heat and Mass Transfer, 2018, 55(3): 645-653.
[17] A Nouri-Borujerdi, M E Nakhchi. Optimization of the heat transfer coefficient and pressure drop of Taylor-Couette-Poiseuille flows between an inner rotating cylinder and an outer grooved stationary cylinder. International Journal of Heat and Mass Transfer, 2017, 108: 1449-1459.
[18] I Azouz, S A Shirazi. Evaluation of several turbulence models for turbulent flow in concentric and eccentric annuli. Journal of Energy Resources Technology, 1998, 120(4): 268-275.
[19] R Ostilla-Mónico, R J A M Stevens, S Grossmann, et al. Optimal Taylor–Couette flow: direct numerical simulations. Journal of Fluid Mechanics, 2013, 719: 14-46.
[20] R Ostilla-Mónico, E P V D Poel, R Verzicco, et al. Boundary layer dynamics at the transition between the classical and the ultimate regime of Taylor-Couette flow. Physics of Fluids, 2014, 26(1): 015114.
[21] R Ostilla-Mónico, S G Huisman, T J G Jannink, et al. Optimal Taylor–Couette flow: radius ratio dependence. Journal of Fluid Mechanics, 2014, 747: 1-29.
[22] R Ostilla-Mónico, R Verzicco, D Lohse. Effects of the computational domain size on direct numerical simulations of Taylor-Couette turbulence with stationary outer cylinder. Physics of Fluids, 2015, 27(2): 025110.
[23] R Ostilla-Mónico, X Zhu, R Verzicco. Exploring the large-scale structure of Taylor–Couette turbulence through Large-Eddy Simulations. Journal of Physics: Conference Series, 2018, 1001: 012017.
[24] A Ohsawa, A Murata, K Iwamoto. Through-flow effects on Nusselt number and torque coefficient in Taylor-Couette-Poiseuille flow investigated by large eddy simulation. Journal of Thermal Science and Technology, 2016, 11(2): JTST0031.
[25] D Paghdar, S Jogee, K Anupindi. Large-eddy simulation of counter-rotating Taylor–Couette flow: The effects of angular velocity and eccentricity. International Journal of Heat and Fluid Flow, 2020, 81: 108514.
[26] C M Jacobs, Z J Qin, K Bremhorst. Comparison of RANS modeling with DNS and experimental data for a converging-diverging nozzle and a rotating cylinder electrode. Proceedings of the Fifth International Conference on CFD in the Process Industries, Melbourne, Australia, December 13-15, 2006.
[27] T P Dhakal, D K Walters. Curvature and rotation sensitive variants of the K-Omega SST turbulence model. Proceedings of the ASME 2009 Fluids Engineering Division Summer Meeting, Vail, Colorado, USA, August 2–6, 2009.
[28] T P Dhakal, D K Walters. A three-equation variant of the SST k-ω model sensitized to rotation and curvature effects. Journal of Fluids Engineering, 2011, 133(11): 111201-111209.
[29] J L V Neto, A L Martins, A S Neto, et al. CFD applied to turbulent flows in concentric and eccentric annuli with inner shaft rotation. The Canadian Journal of Chemical Engineering, 2011, 89(4): 636-646.
[30] D S Adebayo, A Rona. The three-dimensional velocity distribution of wide gap Taylor-Couette flow modelled by CFD. International Journal of Rotating Machinery, 2016, 2016: 1-11.
[31] D S Adebayo, A Rona. The persistence of vortex structures between rotating cylinders in the 10^6 Taylor number range. International Review of Aerospace Engineering, 2015, 8(1): 16-25.
[32] D S Adebayo, A Rona. PIV study of the flow across the meridional plane of rotating cylinders with wide gap. 10th Pacific Symposium on Flow Visualization and Image Processing, Naples, Italy, 2015.
[33] Ansys. Definition of the streamwise-periodic pressure. Ansys Fluent Theory Guide, 2016, 1(1.4): 1.4.3.
[34] F R Menter. Two-equation eddy-viscosity turbulence models for engineering applications. AIAA Journal, 1994, 32(8): 1598-1605.
[35] B Van Leer. Towards the ultimate conservative difference scheme. V. A second-order sequel to Godunov's method. Journal of Computational Physics, 1979, 32(1): 101-136.
[36] J M Nouri, H Umur, J H Whitelaw. Flow of Newtonian and non-Newtonian fluids in concentric and eccentric annuli. Journal of Fluid Mechanics, 1993, 253: 617-641.
[37] M S Paoletti, D P Lathrop. Angular momentum transport in turbulent flow between independently rotating cylinders. Physical Review Letters, 2011, 106(2): 024501.
[38] H Brauckmann, B Eckhardt, J Schumacher. Heat transport in Rayleigh-Benard convection and angular momentum transport in Taylor-Couette flow: a comparative study. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 2017, 375(2089): 1-17.
[39] D Coles. The law of the wall in turbulent shear flow. 50 Jahre Grenzschicht Forschung, 1955: 153-163.
[40] J Kaye. Modes of adiabatic and diabatic fluid flow in an annulus with an inner rotating cylinder. Transactions of the American Society of Mechanical Engineering, 1958, 80: 753-765.
Shengde Wang
Zhenqiang Yao
Hong Shen
The future of mechanical ventilation: lessons from the present and the past
Luciano Gattinoni1,
John J. Marini2,
Francesca Collino1,
Giorgia Maiolo1,
Francesca Rapetti1,
Tommaso Tonetti1,
Francesco Vasques1 &
Michael Quintel1
Critical Care volume 21, Article number: 183 (2017)
The adverse effects of mechanical ventilation in acute respiratory distress syndrome (ARDS) arise from two main causes: unphysiological increases of transpulmonary pressure and unphysiological increases/decreases of pleural pressure during positive or negative pressure ventilation. The transpulmonary pressure-related side effects primarily account for ventilator-induced lung injury (VILI) while the pleural pressure-related side effects primarily account for hemodynamic alterations. The changes of transpulmonary pressure and pleural pressure resulting from a given applied driving pressure depend on the relative elastances of the lung and chest wall. The term 'volutrauma' should refer to excessive strain, while 'barotrauma' should refer to excessive stress. Strains exceeding 1.5, corresponding to a stress above ~20 cmH2O in humans, are severely damaging in experimental animals. Apart from high tidal volumes and high transpulmonary pressures, the respiratory rate and inspiratory flow may also play roles in the genesis of VILI. We do not know which fraction of mortality is attributable to VILI with ventilation comparable to that reported in recent clinical practice surveys (tidal volume ~7.5 ml/kg, positive end-expiratory pressure (PEEP) ~8 cmH2O, rate ~20 bpm, associated mortality ~35%). Therefore, a more complete and individually personalized understanding of ARDS lung mechanics and its interaction with the ventilator is needed to improve future care. Knowledge of functional lung size would allow the quantitative estimation of strain. The determination of lung inhomogeneity/stress raisers would help assess local stresses; the measurement of lung recruitability would guide PEEP selection to optimize lung size and homogeneity. Finding a safety threshold for mechanical power, normalized to functional lung volume and tissue heterogeneity, may help precisely define the safety limits of ventilating the individual in question. When a mechanical ventilation set cannot be found to avoid an excessive risk of VILI, alternative methods (such as the artificial lung) should be considered.
For a reasonable number of years to come, mechanical ventilation will likely still be needed. We acknowledge the importance of stabilizing hemodynamics [1], achieving synchrony [2], preserving muscle strength [3, 4], avoiding the consequences of intubation [5], minimizing dynamic hyperinflation [6], and monitoring the biological reactions—all important goals of ventilatory support. In this brief review, however, we focus primarily on limiting tissue damage, thereby improving the safety of artificial ventilation. Further we will limit our analysis to ARDS patients, who are among the most problematic to manage among the mechanically ventilated patients. However, the principles of a safe treatment are equally applicable to all mechanically ventilated patients. To artificially inflate the lung (i.e., to increase the transpulmonary pressure (P L), airway pressure – pleural pressure (P aw – P pl)), two diametrically opposed options can be applied: either totally positive airway pressure ventilation associated with an increase of pleural pressure or totally negative pressure ventilation, in which the chest cage is expanded by external negative pressure. Between these two extremes, mixed forms of ventilation may be applied, primarily by providing positive pressure to the airways while allowing spontaneous contraction of the respiratory muscles, which decrease pleural pressure during inspiration (Table 1). To discuss the future we must first understand the current problems associated with mechanical ventilation.
Table 1 'Motors' of the lung and chest wall during positive and negative ventilation
Adverse effects of mechanical ventilation
The adverse effects of mechanical ventilation may be grouped into two main categories. One category relates to excessive/unphysiological transpulmonary pressure (always positive), and the other relates to excessive/unphysiological variation of pleural pressure, either positive or negative (Fig. 1).
Changes of transpulmonary pressure (∆P L) and of pleural pressure (∆P pl) during negative or positive pressure ventilation. Left: possible adverse consequences due to the progressive decrease or progressive increase of pleural pressure (∆P pl). The key variation is the increase or decrease of venous return, respectively. Right: sequence of possible damage when progressively increasing the transpulmonary pressure (∆P L). Either during negative pressure ventilation (here performed at baseline atmospheric pressure, i.e., 0 cmH2O) or during positive pressure ventilation, ∆P L is always positive. See text for details. ∆P aw change in airway pressure
Side effects associated with pleural pressure
The magnitude and direction of change in pleural pressure, negative or positive, depends on the ratio of chest wall elastance (E W) relative to the elastance of the respiratory system (E tot). The latter equals the sum of the chest wall elastance and the lung elastance (E L). Accordingly, during positive pressure ventilation the following relationship applies under static conditions [7]:
$$ \varDelta {P}_{\mathrm{pl}}=\varDelta {P}_{\mathrm{aw}}\cdot \frac{E_{\mathrm{w}}}{E_{\mathrm{tot}}} $$
During negative pressure ventilation, however, where the inflation-producing change in pressure is a reduction in the pressure surrounding the respiratory system (ΔPneg), the following applies:
$$ -\varDelta {P}_{\mathrm{pl}}=\varDelta {P}_{\mathrm{neg}}\cdot \frac{E_{\mathrm{w}}}{E_{\mathrm{tot}}} $$
Note that, in ARDS, the E W/E tot ratio averages 0.7, but may range from 0.2 to 0.8 [8].
Obviously, in the presence of an artificial ventilation mode where positive pressure may work simultaneously with muscular efforts (\( \Delta {P}_{\mathrm{musc}} \)) (Table 1), the actual changes of pleural pressure result from two 'push–pull' forces. Accordingly:
$$ \varDelta {P}_{pl}=\varDelta {P}_{\mathrm{aw}}\cdot \frac{E_{\mathrm{w}}}{E_{\mathrm{tot}}}-\varDelta {P}_{\mathrm{musc}}\cdot \frac{E_{\mathrm{L}}}{E_{\mathrm{tot}}} $$
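A minimal sketch of Eq. (3) is given below; the elastance and pressure values are purely illustrative and simply show how the same applied airway pressure can produce either a positive or a negative pleural pressure swing depending on the inspiratory muscular effort.

```python
def delta_p_pl(delta_p_aw, delta_p_musc, e_w, e_l):
    """Change in pleural pressure during assisted breathing, Eq. (3).

    delta_p_aw   : applied airway pressure change (cmH2O)
    delta_p_musc : pressure generated by the inspiratory muscles (cmH2O)
    e_w, e_l     : chest wall and lung elastances (cmH2O/L)
    """
    e_tot = e_w + e_l
    return delta_p_aw * e_w / e_tot - delta_p_musc * e_l / e_tot

# Illustrative 'stiff lung' (E_L > E_W): passive breath vs. vigorous effort
print(delta_p_pl(15, 0, e_w=10, e_l=20))    # passive breath: +5 cmH2O
print(delta_p_pl(15, 15, e_w=10, e_l=20))   # vigorous effort: -5 cmH2O
```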
Positive pleural pressure
For passive inflation by a given airway pressure, the pleural pressure will increase far more in the presence of elevated chest wall elastance (i.e., elevated E W/E tot), as in some cases of extreme obesity [9], whereas it will increase far less in the presence of elevated lung elastance (i.e., low E W/E tot; see Eq. (1)). All equations to which we refer only approximate what is actually happening in the pleural space, because in reality the pleural pressure is not uniform along the thoracic cage, but rather depends on several factors, such as gravitational gradients and local pressure distortions arising from anatomical differences in the shapes of the lung and its chest wall enclosure [10]. Despite the limitations in accurately determining pleural pressure [11, 12], its changing value influences central vascular pressures and venous return. A large experimental and clinical literature describes all of the possible complications related to ventilation-caused decreases of effective circulating volume. These are particularly likely to occur when pleural pressure remains positive throughout the entire respiratory cycle, as during ventilation with positive end-expiratory pressure (PEEP) [13]. The kidney [14], liver [15], and bowel [16, 17] may all be impaired or damaged by the resulting venous congestion and reduced perfusion.
Negative pleural pressure
Excessively negative pleural pressure may arise during spontaneous breathing, especially when vigorous respiratory effort is applied to a 'stiff lung' (see Eq. (3)). In ARDS, for example, negative swings in esophageal pressure may exceed 20–25 cmH2O, due to profoundly dysregulated respiratory drive [18]. Apart from increasing the work of breathing and oxygen consumption, such excessively negative intrathoracic and interstitial pressures promote venous return and increase edema formation. Such phenomena, well described by Barach et al. in 1938 [19], have deservedly been reemphasized for the current era of positive pressure ventilation [20]. Recent work has demonstrated that pendelluft phenomena which occur during vigorous breathing efforts in injured lungs have the potential to amplify local strains and could conceivably contribute to tissue damage [21,22,23]. In concept, certain asynchronies between the patient and ventilator (e.g., double triggering and breath stacking) may also be injurious when they occur frequently and/or in groups.
Adverse effects associated with transpulmonary pressure
The adverse effects of excessive transpulmonary pressure were recognized soon after mechanical ventilation was first applied in patients with ARDS [24]. In those early years the initial therapeutic targets were to maintain normal blood gases and to avoid dyssynchrony while limiting the use of muscle relaxants, which understandably were considered hazardous when using the poorly alarmed ventilators of that era. Consequently, tidal volumes and respiratory rates were typically 15 ml/kg and 15–20 bpm, respectively [25]. Using this approach, few patients fought the ventilator, but barotrauma (primarily pneumothorax) occurred quickly and commonly. This event was so frequent that preventive use of bilateral chest tubes was suggested when ventilation for ARDS was initiated [26]. 'Barotrauma' was used to collectively identify the clinically recognizable problems of gas escape: pneumothorax, pneumomediastinum, interstitial emphysema [27,28,29,30], gas embolism [31], etc. Used in a broader sense, however, barotrauma also includes VILI.
A different viewpoint was elaborated by Dreyfuss et al. [32], who emphasized the role of lung distention (strain) as opposed to airway pressure. High airway pressures were applied without excessive lung strain or damage by restricting chest wall movement. Conversely, injury ('volutrauma') was inflicted by similar airway pressures in the absence of chest wall restraint. Barotrauma and volutrauma, however, are two faces of the same coin if we consider that the force distending the lung is not the airway pressure, but the transpulmonary pressure (i.e., P aw – P pl). This variable more accurately reflects the stress applied to lung structures. Indeed, the following relationship holds [7]:
$$ {P}_{\mathrm{L}}={E}_{Lspec}\cdot \frac{\varDelta V}{FRC} $$
Here, \( \Delta V \) is the change in lung volume in reference to its resting (unstressed) value, functional residual capacity (FRC), and \( {E}_{Lspec} \) is the tissue elastance of the lung, elastance referenced to the lung's absolute inflation capacity.
In words, Eq. (4) can be expressed as:
$$ S t r e s s={E}_{Lspec}\cdot S t r a i n $$
implying:
$$ B a r o t r a u m a= k\cdot V o l u t r a u m a $$
Therefore, stress and strain are related by a proportionality constant, equivalent to specific elastance \( {E}_{Lspec} \). This value, which is similar in normal subjects and in acute lung injury patients, averages ~12 cmH2O [8]. In other words, 12 cmH2O is the stress developed in lung structures when the resting volume (FRC) is doubled. Indeed, at total inspiratory capacity the stress would be ~24 cmH2O because the ∆V/FRC ratio is then ~2. Experimental studies indicate that barotrauma/volutrauma requires some regions of the lung to reach their own total lung capacity [33]. At this level, the collagen framework is fully distended and works as a 'stop length' restraint. These concepts are summarized in Fig. 2 and form a basis for understanding barotrauma and volutrauma.
Lung strain (tidal volume/FRC) as a function of lung stress (transpulmonary pressure). Data adapted from Agostoni and Hyatt [74]. As shown, the doubling of the FRC occurs at a transpulmonary pressure of 12 cmH2O (specific elastance). We arbitrarily indicated the 'risky' zone of P L as that which corresponds to lung strains exceeding 1.5 (based on experimental data [52]). P L transpulmonary pressure
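The strain–stress relationship of Eqs. (4) and (5) can be sketched as follows; the 'baby lung' volume and tidal volume used in the example are assumed values chosen only for illustration.

```python
def lung_stress(tidal_volume, frc, e_l_spec=12.0):
    """Static lung strain and stress (transpulmonary pressure), Eqs. (4)-(5).

    tidal_volume, frc : L;  e_l_spec : specific lung elastance (~12 cmH2O)
    """
    strain = tidal_volume / frc
    stress = e_l_spec * strain
    return strain, stress

# Example: a 420-ml tidal breath delivered to a 'baby lung' of 500 ml aerated volume
strain, stress = lung_stress(tidal_volume=0.42, frc=0.5)
print(f"strain = {strain:.2f}, stress = {stress:.1f} cmH2O")
# Strains above ~1.5 (stress around 20 cmH2O) were severely damaging in animal studies.
```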
Volutrauma
In comparative studies investigating the role of volutrauma on outcome, tidal volume has usually been expressed per kilogram of ideal (predicted) body weight (PBW) in an attempt to relate tidal volume to the expected lung size. Unfortunately, due to the variability of the aeratable lung size in ARDS (the concept of 'baby lung' [34]), such normalization fails as a surrogate for lung strain. Despite these limitations, the ARDS Network [35] found a 9% survival benefit in an unselected ARDS sample when using 6 ml/kg PBW tidal volume instead of 12 ml/kg PBW. Of note, this advantage was also found in the quartile of patients with less severe ARDS, where the 'baby lung' size was likely greater [36]. It seems plausible that the inverse correlation between survival and dead space [37], as reflected by hypercapnia, may relate to the relative sizes of the functioning baby lungs and the strains that they undergo with 'lung protective' ventilation [38]. A tidal volume per kilogram exceeding 20–30 ml/kg is required to damage the healthy lungs of experimental animals [39,40,41,42,43]. Although a direct comparison between healthy and ARDS lungs is highly questionable, the mechanical characteristics of the 'baby lung' (i.e., its specific compliance) are similar to those of normal subjects. The ARDS Network mandate to avoid high tidal volumes deeply and appropriately influenced clinical practice. However, volutrauma may best be avoided by considering not simply the tidal volume but the strain (i.e., the ratio of tidal volume to the resting lung volume). In this context, the recently redirected focus on driving pressure (which equals the ratio of tidal volume to compliance) rather than on plateau pressure alone has a rough parallel with this admonition [44]. We must also remind ourselves that in prior randomized controlled trials [45,46,47], the ARDS patients exposed to ~10 ml/kg tidal volume experienced better survival compared to patients exposed to ~7 ml/kg. Therefore, decreases of tidal volume below 6 ml/kg, as proposed for 'ultraprotective ventilation' (associated with extracorporeal CO2 removal) would not necessarily be of benefit, because severe hypoventilation and reabsorption atelectasis may offset its putative advantages unless other preventative or compensatory measures are taken to raise mean airway pressure, with consequent increase of global lung stress [48, 49]. Attention should be paid to avoiding not only excessively high strain, but also unphysiologically low strain.
Barotrauma
In the editorial accompanying the ARMA trial, 32 cmH2O plateau pressure was suggested as an upper safety limit for (passive) mechanical ventilation [50]. Since then, the 30 cmH2O limit became infrequently challenged dogma for both clinical practice and clinical trials. Actually, in a normal 70-kg human (FRC ~2000 ml and compliance ~80 ml/cmH2O), the 30 cmH2O plateau would correspond to a tidal volume of ~2400 ml (strain = 1.2). In normal animals, this strain is nearly harmless if applied at a respiratory rate of 15 bpm for 54 hours [51]. The applied transpulmonary pressure in this condition, assuming similar chest wall and lung elastances, would be ~15 cmH2O (see Fig. 2). However, as already stated, in ARDS the ratio between lung elastance and the total respiratory system elastance may vary from 0.2 to 0.8 [8]. Because the transpulmonary pressure equals the applied airway pressure times the E L/E tot ratio, the 'safe' 30 cmH2O may result in a transpulmonary pressure as low as 6 cmH2O or as high as 24 cmH2O, a value approaching that needed to reach total lung capacity (Fig. 2), and may be lethal to animals [52]. Therefore, the use of 30 cmH2O in a given subset of patients may result either in excessive strain or in hypoventilation and hypoxemia. This was likely the case for many patients with low E L/E tot ratios (i.e., pregnant women or obese patients) during the H1N1 epidemics in Australia and New Zealand [53]. In some of those patients, ECMO perhaps could have been avoided, simply by safely increasing the plateau pressure, as we found in a cohort of H1N1 patients (ECMO candidates), where low E L/E tot was documented [54]. Just as for volutrauma it is wiser to consider strain instead of the tidal volume, for barotrauma it is wiser to consider transpulmonary pressure instead of plateau airway pressure (see Eq. (6)).
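A minimal sketch of this calculation is shown below, scanning the reported E L/E tot range for a fixed 30 cmH2O plateau pressure.

```python
def transpulmonary_pressure(plateau, e_l_over_e_tot):
    """Approximate transpulmonary pressure of a passive breath:
    P_L = P_plat * (E_L / E_tot)."""
    return plateau * e_l_over_e_tot

# The same 30 cmH2O plateau across the E_L/E_tot range reported in ARDS (0.2-0.8)
for ratio in (0.2, 0.5, 0.8):
    p_l = transpulmonary_pressure(30, ratio)
    print(f"E_L/E_tot = {ratio:.1f} -> P_L = {p_l:.0f} cmH2O")
# -> 6, 15 and 24 cmH2O: identical airway pressures span harmless to near-TLC stress
```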
Consequences associated with other ventilatory variables
Although most of the studies dealing with VILI concentrate on the static components of the breath (tidal volume, plateau pressure, and PEEP), other important factors should not be ignored. The most relevant, in our opinion, are the respiratory rate (i.e., how many times per minute a potential volutrauma or barotrauma is delivered) and the inspiratory flow rate (i.e., how fast a potential volutrauma or barotrauma is applied).
Respiratory rate
The respiratory rate has been considered relatively inconsequential, because it is usually set to maintain PaCO2 within an acceptable range. Thus, in the milestone ARDS Network trial, the lower tidal volume was associated with a respiratory rate of 29 bpm, compared to 16 bpm in the higher tidal volume group. Nonetheless, under certain conditions the respiratory rate is unlikely to be innocent in the genesis of VILI. The harm resulting from raising the respiratory rate is almost certain to be conditioned by the dynamic stress of the individual tidal cycle [55]. The analogy with metal fatigue, which is a function of the number of high stress cycles, may help to frame the role of respiratory rate as codeterminant of VILI. Both in isolated lungs and large-size animals, reducing the respiratory rate provides definite advantages in reducing VILI [56, 57]. Conversely, when operated in an elevated pressure range, perhaps high-frequency ventilation with small tidal volumes may inflict damage [58].
Inspiratory flow
The potential for high inspiratory flow to contribute to VILI likely relates to locally intensified concentration of stress, a problem influenced by viscoelastic tissue properties. Experimental literature consistently shows that, for a given plateau pressure, or a given strain, the rate at which the volume was delivered (i.e., the inspiratory flow) plays a definite role in the genesis of VILI [33, 59,60,61]. Although one would logically expect that any damage attributed to high inspiratory flow should primarily concentrate in the airway, high inspiratory flow accentuates damage to the lung parenchyma, in all likelihood because viscoelastic accommodation has insufficient time to dissipate damaging forces when inflation occurs quickly. Flow rate assumes a greater role in a mechanically inhomogeneous lung (e.g., ARDS) than in a homogeneous one. Moreover, a tidal volume delivered by pressure control could be more dangerous than if achieved by flow-controlled, volume-cycled ventilation with constant flow, because in the former the peak inspiratory flow may reach far higher values. Finally, although little studied, control of expiratory flow may potentially attenuate microatelectasis and influence stresses that occur as tissues rearrange themselves during deflation.
Present-day mechanical ventilation
Table 2 presents ventilatory data and outcomes of different populations treated over the years for ARDS. The observational studies presented are the 2002 study by Esteban et al. [62], the 2011 study by Villar et al. [63], and the 2016 study by Bellani et al. [64]. These three studies include unselected ARDS patients and should reflect daily practice. For comparison, we added the ventilatory treatments and outcomes of patients enrolled in randomized trials, filtered through exclusion criteria from a wider ARDS population. In comparison to tidal volume, more attention seems to have been paid to the plateau pressure, which has been held consistently below 30 cmH2O after the ARDS Network ARMA trial. The respiratory rate did not change remarkably, because it seems to be dictated by the aim of maintaining PaCO2 within normal limits of 35–45 mmHg. PEEP values consistently averaged 7–8 cmH2O, with levels up to 15 cmH2O systematically applied only in clinical trials. Considering the ventilatory data reported in the largest and most recent survey by Bellani et al. [64], we may wonder what mortality fraction is attributable to VILI in patients ventilated with tidal volume of 7.6 ml/kg PBW, respiratory rate of 18.6 bpm, and PEEP of 8.4 cmH2O. To date, we do not believe it is possible to answer this question, which is of paramount importance in improving future mechanical ventilation. Indeed, if the mortality attributable to VILI is now already very low, we cannot expect any great improvement from modifying our current ventilatory practice. We must first better understand the roles played by the mechanical ventilator's settings, the underlying lung pathophysiology, and their interaction.
Table 2 Mechanical ventilation settings through the years
The future of mechanical ventilation
Ideally, mechanical ventilation should be applied so as to avoid all adverse side effects, including VILI. To rationally approach this task, we believe it necessary to characterize much better than we do now the pathophysiology of the lung parenchyma to which the mechanical ventilation is applied and to fully understand the potential harm of each component of the ventilatory set.
Lung-related causes of VILI
The primary conditions influencing the occurrence of VILI are baby lung size, parenchymal recruitability, and extent of lung inhomogeneity. The routine measurement of the lung size would allow the assessment of average lung strain. The precise assessment of recruitability, which currently requires imaging techniques, will facilitate both increasing functional lung size and preventing/limiting atelectrauma by selecting 'adequate' PEEP. Lung inhomogeneity likely promotes VILI. In healthy animals, VILI requires tidal volumes as high as 30–40 ml/kg [39,40,41,42,43, 51]. In contrast, 12 ml/kg appears sufficient in ARDS patients, even in those with better lung compliance (i.e., with likely greater lung size) [36]. Because the possible alterations within the baby lung (i.e., a deficit of surfactant, the presence of some edema, and fibrosis in the extracellular matrix) are per se protective against excessive strain, additional factors seem necessary to account for the damage. These may be the lung parenchyma inhomogeneities that locally increase the stress and strain (stress raisers). In the classic theoretical model of Mead et al. [65], the inhomogeneity occurring at the interface between a fully open unit (volume = 10) and a fully closed unit (volume = 1) will cause a pressure rise proportional to the exponent 2/3 of their ratio (i.e., \((10/1)^{2/3}\)). The proposed exponent of 2/3 is an approximation to convert volume (cm³) to surface area (cm²), as stress relates to surface area (force divided by surface area). Because \(10^{2/3} = 4.64\), an applied pressure at the airway of 30 cmH2O would result, according to the Mead et al. model, in a local tension approximating a pressure of ~140 cmH2O applied to a fully homogeneous and open lung. When we estimated lung inhomogeneity with a CT scan, we found that the multiplication factor between units with different volumes is ~2, which is more than enough to locally expand some units to their own TLC [66]. More than 40% of the lung volume in severe ARDS may be subject to this stress-raising phenomenon, emphasizing the importance of designing maneuvers able to decrease lung inhomogeneity.
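The stress-raiser estimate of Mead et al. can be sketched as follows; the open/closed volume ratio of 10 is the illustrative value used in the text.

```python
def stress_raiser(volume_ratio):
    """Local pressure multiplication factor at the interface between an open
    and a collapsed unit, after Mead et al.: (V_open / V_closed) ** (2/3)."""
    return volume_ratio ** (2.0 / 3.0)

factor = stress_raiser(10 / 1)
print(f"multiplication factor = {factor:.2f}")                        # ~4.64
print(f"30 cmH2O at the airway -> ~{30 * factor:.0f} cmH2O locally")  # ~139
```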
Ventilator-related causes of VILI: the mechanical power
All of these mechanical factors discussed separately (volume, pressure, rate, and flow) can be considered parts of a single physical entity: the mechanical power. The equation describing power (Fig. 3) may be easily derived by multiplying the classical equation of motion by the tidal volume and respiratory rate [67]. Indeed, the energy cost per cycle is computed as the product of pressure times the change of volume, which, when multiplied by the respiratory rate, gives the power value (energy/unit of time). Total pressure is spent in performing elastic work (elastance times tidal volume), in moving gas (flow times resistance), and in maintaining end-expiratory lung volume (by PEEP). If each of these elements is multiplied by the tidal volume, the energy per breath is obtained, and by multiplying this by the respiratory rate we obtain the mechanical power. This equation is presented in this extended form, instead of other possible simplified versions [67], to illustrate item by item the determinants of power. A comparison of exponents indicates that tidal volume (and its associated driving pressure) and inspiratory flow are quantitatively potent determinants (\( {Power}_{rs}= k*\Delta {V}^2 \) and \( {Power}_{rs}= k*{flow}^2 \)), followed by the respiratory rate (\( {Power}_{rs}= k*{RR}^{1.4} \)), and then by PEEP, elastance, and resistance (all three linearly correlated with the mechanical power). Clearly, reduction of ventilatory demand to reduce tidal volume, flow, and/or respiratory rate should be prioritized if applying damaging power is to be avoided.
Upper box: simplified equation of motion, showing that, at any given moment, the pressure in the respiratory system (P) above the relaxed volume equals the sum of the elastic pressure (elastance of the respiratory system E rs times change in lung volume), plus the pressure needed to move the gases (flow F times airway resistance), plus the pressure (if any) to keep the lung pressure above the atmospheric pressure at end expiration (PEEP). If each of these three components is multiplied by the tidal change in lung volume ∆V, the energy per breath is obtained. If multiplied by the respiratory rate, the corresponding power equation is obtained. 0.098 is the conversion factor from liters/cmH2O to Joules (J). I:E inspiratory–expiratory ratio, PEEP positive end-expiratory pressure, Power rs mechanical power to the respiratory system, RR respiratory rate, ∆V change of volume, R aw airway resistance
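For reference, one commonly quoted extended form of this power equation, as reported in [67] (with ∆V in liters, pressures in cmH2O, and the result in J/min), reads approximately \( {Power}_{rs} = 0.098 \cdot RR \cdot \{\Delta V^{2} \cdot [\tfrac{1}{2} \cdot E_{rs} + RR \cdot \tfrac{(1+I{:}E)}{60 \cdot I{:}E} \cdot R_{aw}] + \Delta V \cdot PEEP\} \); this makes explicit the quadratic dependence on tidal volume and the additional contributions of respiratory rate, resistance, and PEEP discussed above.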
Although the concept of mechanical power may appeal as a unifying variable with which to track VILI risk (both during controlled and spontaneously assisted breathing), several challenges must be met before it can be implemented in practice: first, power must be normalized either for a standard lung volume or for the amount of aerated lung tissue [68, 69]; and second, the relationship between the power delivered to the whole respiratory system and that actually delivered to the lung (using the transpulmonary pressure) must be differentiated. In particular, the impact of inspiratory flow and tissue resistance should be better defined. From a practical perspective, even if appropriately adjusted for resistance, flow, and chest wall elastance, any estimate of lung-delivered power made using airway pressure alone during spontaneous efforts would reflect only the machine's contribution to the total energy imparted during inflation [33]. In addition, the distribution of mechanical power throughout the lung parenchyma must be determined. We do not know whether it follows the same maldistribution of stress and strain dictated by lung inhomogeneity [66]. Finally, mechanical power as defined here relates to the inspiratory phase; it is very possible that the expiratory phase may also play a role. Indeed, all of the energy accumulated at end inspiration must have dissipated both into the lung structures and the atmosphere when exhalation is complete. It is interesting and potentially important to know whether controlling expiratory flow (which decreases the fraction of energy expended into the lung) thereby helps to reduce VILI. Actually, such a phenomenon has been reported in two studies not normally considered in the VILI literature [70, 71]. Fig. 4 summarizes all of these concepts, and also suggests a slightly different nomenclature which we believe to be less confusing than that currently employed.
Left: baseline energy (red hatched triangle ABE), on which the inspiratory energy associated with the tidal volume (area BCDE) is added. Yellow hatched area to the right of line BC represents the inspiratory dissipated energy needed to move the gas, to overcome surface tension forces, to make the extracellular sheets slide across one another (tissue resistances), and possibly to reinflate collapsed pulmonary units. Light green hatched area on the left of line BC defines the elastic energy (trapezoid EBCD) cyclically added to the respiratory system during inspiration. Total area included in the triangle ACD is the total energy level present in the respiratory system at end inspiration. Right: energy changes during expiration. Of the total energy accumulated at end inspiration (triangle ACD), the area of the trapezoid EBCD is the energy released during expiration. The fraction of energy included in the hysteresis area (light blue hatched area) is dissipated into the respiratory system, while the remaining area (dark blue hatched area) is energy dissipated into the atmosphere through the connecting circuit. Note that whatever maneuver (as controlled expiration) reduces the hysteresis area will reduce the energy dissipated into the respiratory system (potentially dangerous?). PEEP positive end-expiratory pressure (Color figure online)
To minimize adverse interactions between lung pathology and ventilatory settings that promote VILI requires two distinct strategies: on one side, decreasing the inspiratory (and possibly the expiratory) mechanical power and damaging strain should decrease VILI; and on the other, steps to increase lung homogeneity should decrease the likelihood of injury. The best available maneuver to encourage mechanical homogeneity, supported by solid pathophysiological background [72] and proven clinical results, is prone positioning for those patients in whom inhomogeneity is prevalent (moderate-severe and severe ARDS) [73].
In conclusion, we believe that a possible pathway toward 'improved' mechanical ventilation for a future patient would consist of the following steps:
1. Define excessive strain and mechanical power, normalized for lung volume.
2. Measure/estimate lung inhomogeneity to assess the prevalence of stress raisers and the distribution of mechanical power/stress–strain.
3. Determine whether a given ventilatory set, applied to a lung parenchyma whose mechanical characteristics are known, is associated with a risk of VILI, and how large that risk is.
4. If a mechanical ventilation set cannot be found to avoid an excessive risk of VILI, alternative methods (such as the artificial lung) should be considered.
∆V :
change of volume
ARMA:
Low tidal volume trial of the ARDS Network
bpm :
breaths per minute
CO 2 :
Carbon dioxide
ECMO:
Extracorporeal membrane oxygenation
E L :
Lung elastance
E Lspec :
Specific lung elastance
E tot :
Total elastance of the respiratory system
E w :
Chest wall elastance
FRC :
Functional residual capacity
PaCO 2 :
Arterial partial pressure of carbon dioxide
P aw :
Airway pressure
PBW :
Predicted body weight
PEEP :
Positive end-expiratory pressure
P L :
Transpulmonary pressure
P musc :
Pressure generated by the respiratory muscles
Power rs :
Mechanical power to the respiratory system
P pl :
Pleural pressure
RR :
Respiratory rate
VILI:
Ventilator-induced lung injury
Vieillard-Baron A, et al. Experts' opinion on management of hemodynamics in ARDS patients: focus on the effects of mechanical ventilation. Intensive Care Med. 2016;42(5):739–49.
Beitler JR, et al. Quantifying unintended exposure to high tidal volumes from breath stacking dyssynchrony in ARDS: the BREATHE criteria. Intensive Care Med. 2016;42(9):1427–36.
Files DC, Sanchez MA, Morris PE. A conceptual framework: the early and late phases of skeletal muscle dysfunction in the acute respiratory distress syndrome. Crit Care. 2015;19:266.
Petrof BJ, Hussain SN. Ventilator-induced diaphragmatic dysfunction: what have we learned? Curr Opin Crit Care. 2016;22(1):67–72.
American Thoracic Society, Infectious Diseases Society of America. Guidelines for the management of adults with hospital-acquired, ventilator-associated, and healthcare-associated pneumonia. Am J Respir Crit Care Med. 2005;171(4):388–416.
Vieillard-Baron A, Jardin F. The issue of dynamic hyperinflation in acute respiratory distress syndrome patients. Eur Respir J Suppl. 2003;42:43s–7s.
Gattinoni L, et al. Physical and biological triggers of ventilator-induced lung injury and its prevention. Eur Respir J Suppl. 2003;47:15s–25s.
Chiumello D, et al. Lung stress and strain during mechanical ventilation for acute respiratory distress syndrome. Am J Respir Crit Care Med. 2008;178(4):346–55.
Pelosi P, et al. Total respiratory system, lung, and chest wall mechanics in sedated-paralyzed postoperative morbidly obese patients. Chest. 1996;109(1):144–51.
Vawter DL, Matthews FL, West JB. Effect of shape and size of lung and chest wall on stresses in the lung. J Appl Physiol. 1975;39(1):9–17.
Akoumianaki E, et al. The application of esophageal pressure measurement in patients with respiratory failure. Am J Respir Crit Care Med. 2014;189(5):520–31.
Mauri T, et al. Esophageal and transpulmonary pressure in the clinical setting: meaning, usefulness and perspectives. Intensive Care Med. 2016;42(9):1360–73.
Annat G, et al. Effect of PEEP ventilation on renal function, plasma renin, aldosterone, neurophysins and urinary ADH, and prostaglandins. Anesthesiology. 1983;58(2):136–41.
Kuiper JW, et al. Mechanical ventilation and acute renal failure. Crit Care Med. 2005;33(6):1408–15.
Bredenberg CE, Paskanik A, Fromm D. Portal hemodynamics in dogs during mechanical ventilation with positive end-expiratory pressure. Surgery. 1981;90(5):817–22.
Mutlu GM, Mutlu EA, Factor P. GI complications in patients receiving mechanical ventilation. Chest. 2001;119(4):1222–41.
Putensen C, Wrigge H, Hering R. The effects of mechanical ventilation on the gut and abdomen. Curr Opin Crit Care. 2006;12(2):160–5.
Gama de Abreu M, Guldner A, Pelosi P. Spontaneous breathing activity in acute lung injury and acute respiratory distress syndrome. Curr Opin Anaesthesiol. 2012;25(2):148–55.
Barach AL, Martin J, Eckman M. Positive pressure respiration and its application to the treatment of acute pulmonary edema. Ann Intern Med. 1938;12:754–95.
Brochard L, Slutsky A, Pesenti A. Mechanical ventilation to minimize progression of lung injury in acute respiratory failure. Am J Respir Crit Care Med. 2017;195(4):438–42.
Yoshida T, et al. Spontaneous breathing during lung-protective ventilation in an experimental acute lung injury model: high transpulmonary pressure associated with strong spontaneous breathing effort may worsen lung injury. Crit Care Med. 2012;40(5):1578–85.
Yoshida T, et al. Spontaneous effort causes occult pendelluft during mechanical ventilation. Am J Respir Crit Care Med. 2013;188(12):1420–7.
Yoshida T, et al. Spontaneous effort during mechanical ventilation: maximal injury with less positive end-expiratory pressure. Crit Care Med. 2016;44(8):e678–88.
Kumar A, et al. Pulmonary barotrauma during mechanical ventilation. Crit Care Med. 1973;1(4):181–6.
Pontoppidan H, Geffin B, Lowenstein E. Acute respiratory failure in the adult. 2. N Engl J Med. 1972;287(15):743–52.
Hayes DF, Lucas CE. Bilateral tube thoracostomy to preclude fatal tension pneumothorax in patients with acute respiratory insufficiency. Am Surg. 1976;42(5):330–1.
Zimmerman JE, Dunbar BS, Klingenmaier CH. Management of subcutaneous emphysema, pneumomediastinum, and pneumothorax during respirator therapy. Crit Care Med. 1975;3(2):69–73.
de Latorre FJ, et al. Incidence of pneumothorax and pneumomediastinum in patients with aspiration pneumonia requiring ventilatory support. Chest. 1977;72(2):141–4.
Woodring JH. Pulmonary interstitial emphysema in the adult respiratory distress syndrome. Crit Care Med. 1985;13(10):786–91.
Gammon RB, Shin MS, Buchalter SE. Pulmonary barotrauma in mechanical ventilation. Patterns and risk factors. Chest. 1992;102(2):568–72.
Marini JJ, Culver BH. Systemic gas embolism complicating mechanical ventilation in the adult respiratory distress syndrome. Ann Intern Med. 1989;110(9):699–703.
Dreyfuss D, et al. High inflation pressure pulmonary edema. Respective effects of high airway pressure, high tidal volume, and positive end-expiratory pressure. Am Rev Respir Dis. 1988;137(5):1159–64.
Protti A, et al. Role of strain rate in the pathogenesis of ventilator-induced lung edema. Crit Care Med. 2016;44(9):e838–45.
Gattinoni L, Pesenti A. The concept of "baby lung". Intensive Care Med. 2005;31(6):776–84.
ARDS Network. Ventilation with lower tidal volumes as compared with traditional tidal volumes for acute lung injury and the acute respiratory distress syndrome. The Acute Respiratory Distress Syndrome Network. N Engl J Med. 2000;342(18):1301–8.
Hager DN, et al. Tidal volume reduction in patients with acute lung injury when plateau pressures are not high. Am J Respir Crit Care Med. 2005;172(10):1241–5.
Nuckton TJ, et al. Pulmonary dead-space fraction as a risk factor for death in the acute respiratory distress syndrome. N Engl J Med. 2002;346(17):1281–6.
Nin N, et al. Severe hypercapnia and outcome of mechanically ventilated patients with moderate or severe acute respiratory distress syndrome. Intensive Care Med. 2017;43(2):200–8.
Webb HH, Tierney DF. Experimental pulmonary edema due to intermittent positive pressure ventilation with high inflation pressures. Protection by positive end-expiratory pressure. Am Rev Respir Dis. 1974;110(5):556–65.
Kolobow T, et al. Severe impairment in lung function induced by high peak airway pressure during mechanical ventilation. An experimental study. Am Rev Respir Dis. 1987;135(2):312–5.
Broccard A, et al. Prone positioning attenuates and redistributes ventilator-induced lung injury in dogs. Crit Care Med. 2000;28(2):295–303.
Nishimura M, et al. Body position does not influence the location of ventilator-induced lung injury. Intensive Care Med. 2000;26(11):1664–9.
Belperio JA, et al. Critical role for CXCR2 and CXCR2 ligands during the pathogenesis of ventilator-induced lung injury. J Clin Invest. 2002;110(11):1703–16.
Amato MB, et al. Driving pressure and survival in the acute respiratory distress syndrome. N Engl J Med. 2015;372(8):747–55.
Brochard L, et al. Tidal volume reduction for prevention of ventilator-induced lung injury in acute respiratory distress syndrome. The Multicenter Trial Group on Tidal Volume reduction in ARDS. Am J Respir Crit Care Med. 1998;158(6):1831–8.
Stewart TE, et al. Evaluation of a ventilation strategy to prevent barotrauma in patients at high risk for acute respiratory distress syndrome. Pressure- and Volume-Limited Ventilation Strategy Group. N Engl J Med. 1998;338(6):355–61.
Brower RG, et al. Prospective, randomized, controlled clinical trial comparing traditional versus reduced tidal volume ventilation in acute respiratory distress syndrome patients. Crit Care Med. 1999;27(8):1492–8.
Fanelli V, et al. Feasibility and safety of low-flow extracorporeal carbon dioxide removal to facilitate ultra-protective ventilation in patients with moderate acute respiratory distress syndrome. Crit Care. 2016;20:36.
Gattinoni L. Ultra-protective ventilation and hypoxemia. Crit Care. 2016;20(1):130.
Tobin MJ. Culmination of an era in research on the acute respiratory distress syndrome. N Engl J Med. 2000;342(18):1360–1.
Protti A, et al. Lung stress and strain during mechanical ventilation: any safe threshold? Am J Respir Crit Care Med. 2011;183(10):1354–62.
Protti A, et al. Lung anatomy, energy load, and ventilator-induced lung injury. Intensive Care Med Exp. 2015;3(1):34.
Australia and New Zealand Extracorporeal Membrane Oxygenation (ANZ ECMO) Influenza Investigators, et al. Extracorporeal membrane oxygenation for 2009 influenza A(H1N1) acute respiratory distress syndrome. JAMA. 2009;302(17):1888–95.
Grasso S, et al. ECMO criteria for influenza A (H1N1)-associated ARDS: role of transpulmonary pressure. Intensive Care Med. 2012;38(3):395–403.
Retamal J, et al. Open lung approach ventilation abolishes the negative effects of respiratory rate in experimental lung injury. Acta Anaesthesiol Scand. 2016;60(8):1131–41.
Hotchkiss Jr JR, et al. Effects of decreased respiratory frequency on ventilator-induced lung injury. Am J Respir Crit Care Med. 2000;161(2 Pt 1):463–8.
Cressoni M, et al. Mechanical power and development of ventilator-induced lung injury. Anesthesiology. 2016;124(5):1100–8.
Dreyfuss D, Ricard JD, Gaudry S. Did studies on HFOV fail to improve ARDS survival because they did not decrease VILI? On the potential validity of a physiological concept enounced several decades ago. Intensive Care Med. 2015;41(12):2076–86.
Rich PB, et al. Effect of rate and inspiratory flow on ventilator-induced lung injury. J Trauma. 2000;49(5):903–11.
Maeda Y, et al. Effects of peak inspiratory flow on development of ventilator-induced lung injury in rabbits. Anesthesiology. 2004;101(3):722–8.
Garcia CS, et al. Pulmonary morphofunctional effects of mechanical ventilation with high inspiratory air flow. Crit Care Med. 2008;36(1):232–9.
Esteban A, et al. Characteristics and outcomes in adult patients receiving mechanical ventilation: a 28-day international study. JAMA. 2002;287(3):345–55.
Villar J, et al. The ALIEN study: incidence and outcome of acute respiratory distress syndrome in the era of lung protective ventilation. Intensive Care Med. 2011;37(12):1932–41.
Bellani G, et al. Epidemiology, patterns of care, and mortality for patients with acute respiratory distress syndrome in intensive care units in 50 countries. JAMA. 2016;315(8):788–800.
Mead J, Takishima T, Leith D. Stress distribution in lungs: a model of pulmonary elasticity. J Appl Physiol. 1970;28(5):596–608.
Cressoni M, et al. Lung inhomogeneity in patients with acute respiratory distress syndrome. Am J Respir Crit Care Med. 2014;189(2):149–58.
Gattinoni L, et al. Ventilator-related causes of lung injury: the mechanical power. Intensive Care Med. 2016;42(10):1567–75.
Marini JJ, Jaber S. Dynamic predictors of VILI risk: beyond the driving pressure. Intensive Care Med. 2016;42(10):1597–600.
Guldner A, et al. The authors reply. Crit Care Med. 2017;45(3):e328–9.
Goebel U, et al. Flow-controlled expiration: a novel ventilation mode to attenuate experimental porcine lung injury. Br J Anaesth. 2014;113(3):474–83.
Schumann S, et al. Determination of respiratory system mechanics during inspiration and expiration by FLow-controlled EXpiration (FLEX): a pilot study in anesthetized pigs. Minerva Anestesiol. 2014;80(1):19–28.
Gattinoni L, et al. Prone position in acute respiratory distress syndrome. Rationale, indications, and limits. Am J Respir Crit Care Med. 2013;188(11):1286–93.
Guerin C, et al. Prone positioning in severe acute respiratory distress syndrome. N Engl J Med. 2013;368(23):2159–68.
Agostoni E, Hyatt RE. Static behaviour of the respiratory system. In: Macklem PT, Mead J, Fishman AP, editors. Handbook of physiology. Bethesda, MD; 1986. p. 113–30.
Brower RG, et al. Higher versus lower positive end-expiratory pressures in patients with the acute respiratory distress syndrome. N Engl J Med. 2004;351(4):327–36.
Meade MO, et al. Ventilation strategy using low tidal volumes, recruitment maneuvers, and high positive end-expiratory pressure for acute lung injury and acute respiratory distress syndrome: a randomized controlled trial. JAMA. 2008;299(6):637–45.
Briel M, et al. Higher vs lower positive end-expiratory pressure in patients with acute lung injury and acute respiratory distress syndrome: systematic review and meta-analysis. JAMA. 2010;303(9):865–73.
LG designed the review and drafted the manuscript. JJM helped draft the manuscript and revised it critically for important intellectual content. FC performed the literature search and helped draft the manuscript and design tables and figures. GM performed the literature search and helped draft the manuscript and design tables and figures. FR performed the literature search and helped draft the manuscript and design tables and figures. TT helped draft the manuscript and design tables and figures. FV performed the literature search and helped draft the manuscript and design tables and figures. MQ helped draft the manuscript and revised it critically for important intellectual content. All authors read and approved the final manuscript.
Department of Anesthesiology, Emergency and Intensive Care Medicine, University of Göttingen, Robert-Koch-Straße 40, 37075, Göttingen, Germany
Luciano Gattinoni, Francesca Collino, Giorgia Maiolo, Francesca Rapetti, Tommaso Tonetti, Francesco Vasques & Michael Quintel
University of Minnesota, Minneapolis/Saint Paul, MN, USA
John J. Marini
Correspondence to Luciano Gattinoni.
Gattinoni, L., Marini, J.J., Collino, F. et al. The future of mechanical ventilation: lessons from the present and the past. Crit Care 21, 183 (2017). https://doi.org/10.1186/s13054-017-1750-x
Mechanical power
The future of critical care | CommonCrawl |
\begin{document}
\title{A Class of Mean-Field Games with Optimal Stopping and its Inverse Problem}
\author{ Jianhui Huang, Tinghan Xie$^*$\thanks{J. Huang and T. Xie are with the Department of Applied Mathematics, The Hong Kong Polytechnic University, Hong Kong ([email protected]; [email protected]).} \thanks{The authors acknowledge the financial support from RGC grants 15301119 and 15307621, and are also grateful for the helpful comments from Minyi Huang.}}
\maketitle
\begin{abstract}This paper revisits the well-studied \emph{optimal stopping} problem but within the \emph{large-population} framework. In particular, two classes of optimal stopping problems are formulated by taking into account \emph{relative performance criteria}. It is remarkable that such relative performance criteria, also understood as the \emph{Joneses preference}, \emph{habit formation utility}, or \emph{relative wealth concern} in economics and finance (e.g., \cite{Abel}, \cite{BC}, \cite{CK}, \cite{DeMarzo}, \cite{ET}, \cite{Gali}, etc.), play an important role in explaining various decision behaviors such as price bubbles. By introducing such criteria in the large-population setting, a given agent can compare its individual stopping rule with the average behavior of its cohort. The associated mean-field games are formulated in order to derive the decentralized stopping rules. The related consistency conditions are characterized via a coupled equation system, and the asymptotic Nash equilibrium properties are also verified. In addition, an \emph{inverse} mean-field optimal stopping problem is introduced and discussed. \end{abstract}
\begin{IEEEkeywords} Consistent condition system; $\epsilon-$Nash equilibrium; Inverse optimal stopping; Mean-field optimal stopping; Relative performance \end{IEEEkeywords}
\IEEEpeerreviewmaketitle
\section{Introduction} In recent years, the study of dynamic optimization of stochastic large-population systems has attracted persistent and increasing research attention. The agents (or players) in a large-population system are individually insignificant but collectively impose a significant impact on each agent. In mathematical modeling, this feature can be characterized by the state-average coupling structure in the individual dynamics and cost functionals. Large-population systems find applications in various domains (e.g., engineering, social science, economics and finance, operational research and management, etc.). The interested readers may refer to \cite{H10}, \cite{hcm07}, \cite{hmc06} and the references therein for more details on the background of large-population systems.
Regarding the controlled large-population system, it is infeasible for a given agent to collect the ``central'' or ``global'' information of all agents due to the dimensionality difficulty and complex interactions among the agents. Alternatively, it is more feasible and effective to study decentralized strategies which depend on local information only. By ``local information'', we mean that the optimal control regulator for a given agent is designed based on its own individual state and some quantity which can be computed in an off-line manner. Along this research line, one efficient method is the mean-field game (MFG for short) (e.g., \cite{LL07} or \cite{hcm07}), whose fundamental idea is to approximate the initial large-population control problem by its limiting problem through some mean-field term (i.e., the asymptotic limit of the state-average). As a consequence, one may design a set of decentralized strategies for the large but finite population system originally analyzed, where each agent needs only to know its own state information and the mass effect computed off-line. Furthermore, it is possible to verify the $\epsilon-$Nash equilibrium property for the derived decentralized strategies, in which each individual agent bears an optimality loss of level $\epsilon$ depending on the population size. Some recent literature can be found in \cite{B12}, \cite{GLL10}, \cite{hcm07}, \cite{LZ08}, \cite{hhl} for the study of mean-field games; \cite{HCM12} for cooperative social optimization; \cite{H10}, \cite{NH12} and \cite{NC13} and references therein for models with a major player, etc.
Our paper differs from the existing MFG literature in three main respects. (i) First, we investigate the large-population system through its optimal stopping problems instead of the already extensively studied optimal control problems. Undoubtedly, the optimal stopping problem itself is not new, as it has already been explored in various fields (for instance, optimal investment with stopping decisions in mathematical finance). Consequently, there exists a considerable literature on optimal stopping and its applications. The interested readers are referred to some monographs (e.g., \cite{o}, \cite{p}, \cite{s}, etc.) and the references therein for more details. However, to the best of our knowledge, there has been no formal discussion of optimal stopping problems within the large-population framework: for instance, the behavioral interaction of stopping rules, their effects on market investors, and the resulting equilibrium. (ii) Second, we introduce the so-called ``relative performance'' criteria into our stopping problems within the large-population framework. Specifically, we formulate and analyze two classes of optimal stopping problems in which a given agent takes into account the relative performance with respect to other agents (in its ``ratio-habit'' representation and convex combination). As a result, each individual agent aims to solve an optimal stopping problem which is interrelated with the others via the terminated state-average of the underlying large-population system. (iii) Third, we propose the \emph{inverse} mean-field optimal stopping problem, in which an additional manager agent is introduced who can optimally design the payoff functional such that the corresponding stopping rules adopted by all small agents will meet some preferred statistical properties. Recently, a variety of papers have appeared concerning mean-field stopping times in different setups, such as \cite{nutz} for mean-field stopping with a continuum of agents, and \cite{bgdr} for relaxed optimal stopping.
The introduction of \emph{relative performance} plays a significant role in our problem formulation and the analysis that follows. For this reason, we would like to present a few more words to illustrate its real meaning. First, relative performance is extensively recognized in economics and decision analysis. For example, it is well documented in \cite{Gomez} that the agents in an economy will manifest substantial preferences exogenously defined over their own consumption as well as the contemporaneous average consumption of a reference group. Such a reference group is also called the agent's countrymen, which corresponds to the large population in our setup, and these preferences are often termed in economics ``keeping up with the Joneses'' (KUJ) (\cite{Abel}, \cite{CK}, \cite{Gali}, \cite{Gomez}); for short, the \emph{Joneses utility or preference}. Second, it is worth noting that relative performance or relative wealth concern is also addressed in some optimal investment problems in mathematical finance (e.g. \cite{ET}). As stated in \cite{ET}, ``a return of $5\%$ during the crisis is not equivalent to the same return during a financial bubble''. Therefore, it is necessary to study the interactions among all investors (agents) based on the simplified comparison of the performance of their competitors (peers). The intuition behind this is that human beings tend to compare themselves to their peers (or cohorts), and this effect is also supported by empirical studies in economics and social science. Third, relative performance also arises naturally when considering benchmark index tracking or habit persistence preferences, in particular when there exists some systematic market risk. More literature on relative performance can be found in \cite{Veblen} for its sociological aspects, and \cite{Abel}, \cite{BMW}, \cite{CMP}, \cite{DeMarzo}, \cite{Gali}, \cite{Gomez} for the economic aspects, in which discrete-time models and frameworks are considered.
Motivated by the above discussion, we study large-population optimal stopping by considering the related \emph{relative performance}. Roughly speaking, we concentrate on a large-population system with $N$ symmetric individual agents. These small agents execute stopping decisions by comparing the stopping rules applied by their peers. Specifically, each agent takes into account a \emph{convex combination} of its own stopping performance (with convex weight $\theta \in (0, 1)$) and the relative concern with respect to the average behavior (with weight $1-\theta$). In this way, natural interactions arise among the agents, which leads to large-population optimal stopping games. This corresponds to the equilibrium consequences of an economy populated by agents with keeping-up-with-the-Joneses preferences. We are mainly interested in the asymptotic behavior when $N$ tends to infinity; in this case, the situation can be considerably simplified through mean-field games (\cite{LL07}). Based on this, we can derive the decentralized stopping rules as well as the related consistency condition.
The rest of this paper is organized as follows. In Section \uppercase\expandafter{\romannumeral2}, we formulate two classes of mean-field optimal stopping problems in a large-population framework. The first class arises from the \emph{best time to sell} problem, whereas the second class is related to the \emph{valuation of natural resources} problem. Both classes consider the \emph{relative performance}; thus, the individual agents are coupled via their payoff functionals. Section \uppercase\expandafter{\romannumeral3} aims to study the consistency condition or Nash certainty equivalence (NCE) system of the mean-field optimal stopping problems. Section \uppercase\expandafter{\romannumeral4} studies the relevant asymptotic $\epsilon-$Nash equilibrium property. Section \uppercase\expandafter{\romannumeral5} discusses the \emph{inverse mean-field optimal stopping} problem. Section \uppercase\expandafter{\romannumeral6} provides the conclusion of our work.
\section{Formulation}
We consider a stochastic large-population system with $N$ negligible agents, denoted respectively by $\mathcal{A}_i, 1 \leq i \leq N.$ The state dynamics of all agents are given on a complete probability space $(\Omega, \mathcal F, \mathbb{P})$ on which a standard $N$-dimensional Brownian motion $\{W_i(t),\ 1\le i\leq N\}_{t \geq 0}$ is defined. We denote by $\{\mathcal{F}^{i}_t\}_{t \geq 0}$ the natural filtration generated by the $i^{th}$ Brownian motion $W_i$, where $\mathcal{F}^{i}_0$ contains all $\mathbb{P}-$null sets of $\mathcal F.$ Consequently, $\mathcal F_t=\bigvee_{i=1}^N\mathcal F^{i}_t$ denotes the full information of the large-population system up to time $t$. We denote by $\mathcal{S}$ the set of all stopping times of the filtration $\{\mathcal{F}_t\}_{t \geq 0},$ and by $\mathcal{S}^{i}$ the set of all stopping times of the filtration $\{\mathcal{F}_t^{i}\}_{t \geq 0}.$ Hereafter, $\mathcal{T}=(\tau_1, \cdots, \tau_{N})$ represents the set of stopping strategies of all $N$ agents, and $\tau_{-i}=(\tau_1, \cdots, \tau_{i-1},$ $\tau_{i+1}, \cdots, \tau_{N})$ the stopping strategies of all agents except $\mathcal{A}_i$. Disregarding the ordering of indices, we write $\mathcal{T}=(\tau_i, \tau_{-i}).$
\subsection{One standard optimal stopping problem} To start, we first recall the following well-studied optimal stopping problem for a single agent, which is named the \emph{best time to sell} problem. The agent's state dynamics is given by a geometric Brownian motion (GBM): \begin{equation}\label{1e1}\left\{ \begin{aligned} &dx_i(t)=\alpha x_i(t)dt+\sigma x_i(t)dW_i(t),\\ &x_i(0)=x \end{aligned}\right.\end{equation}where $\alpha \in \mathbb{R}, \sigma>0$ are the return rate and the volatility rate, respectively. The initial endowment is $x>0$. Suppose $x_i$ denotes the value process of an asset, and the owner of this asset may sell it at any time, but has to pay a fixed transaction fee $K>0.$ In this case, the owner's objective is to solve the following optimal stopping problem:\begin{equation}\nonumber \max_{\tau_i \in \mathcal{S}^{i}}J_i(\tau_i)=\max_{\tau_i \in \mathcal{S}^{i}}\mathbb{E}\left\{e^{-\beta\tau_i}(x_i(\tau_i)-K)\right\}. \end{equation}Here, $\beta>0$ is the discount factor and the payoff (gain) functional can be rewritten as:\begin{equation}\label{1e2} J_i(\tau_i)=\mathbb{E}\left\{e^{-\beta\tau_i}x_i(\tau_i)-e^{-\beta\tau_i}K\right\}. \end{equation}Note that the transaction cost $K$ is fixed and should be the same for all individual agents because it is assigned by the market regulator or planner. This observation provides the key linkage when we introduce the inverse mean-field optimal stopping problem later. Moreover, we assume all agents in our large-population system are symmetric or statistically ``homogeneous'' in that they share the same coefficients $(\alpha, \sigma, \beta, K)$.
\subsection{Relative performance in large-population system} First, we should note that the following utility and preference functional is introduced in \cite{Gomez}: \begin{equation}\nonumber u(c, C)=\frac{c^{1-\alpha}}{1-\alpha}C^{\gamma \alpha}\end{equation}where $c$ denotes the agents' individual consumption and $C$ denotes the consumption average or per capita consumption in a given reference group; $\alpha>0$ is the constant relative risk-aversion (CRRA) coefficient; $0 \leq \gamma<1$ is called the ``\emph{Joneses parameter}'', which increases the weight of average consumption per capita $C$ and makes the individual's marginal consumption more valuable since it helps the individual agent keep up with the peers. More insights can be gained if we further assume $\gamma=\frac{\alpha-1}{\alpha}$ for $\alpha>1$, thus \begin{equation}\label{ratiohabit} u(c, C)=\frac{(c/C)^{1-\alpha}}{1-\alpha}\end{equation}which becomes the standard ``\emph{ratio-habit}'' representation proposed in \cite{Abel}. Moreover, $(1-\alpha)$ represents the average consumption elasticity of marginal utility. In addition, we remark that \eqref{ratiohabit} specifies the keeping-up-with-the-Joneses preference in an exogenous way, and it provides one example of relative performance in the economic literature.
Second, given the above standard optimal stopping problem $\eqref{1e1}$, $\eqref{1e2}$ and the relative performance from the Joneses preference \eqref{ratiohabit}, as well as the striking large-population feature, it is natural to consider the relative performance based on $e^{-\beta\tau_i}x_i(\tau_i),$ the principal term in the payoff functional \eqref{1e2}. In response, we construct the following criterion\begin{equation}\label{tax}\frac{e^{-\beta\tau_i}x_i(\tau_i)}{l_1+l_2\left(\frac{1}{N}\sum\limits_{j=1}^Ne^{-\beta\tau_j}x_j(\tau_j)\right)} \end{equation}which is the ratio between the individual discounted truncated state $e^{-\beta\tau_i}x_i(\tau_i)$ and some affine function of the average $\frac{1}{N}\sum\limits_{j=1}^Ne^{-\beta\tau_j}x_j(\tau_j)$ for $l_1, l_2>0.$ Moreover, $l_2$ can stand for the degree of relative concern to keep up with the peers (as discussed in \cite{Gomez}).
\begin{remark}First, the criterion \eqref{tax} can be viewed as a modified version of the ``ratio-habit'' term \eqref{ratiohabit} by considering the optimal stopping outcome (note that $e^{-\beta\tau_i}x_i(\tau_i)$ denotes the discounted stopped state for a single agent). Second, if we let $l_1=0,$ \eqref{tax} can be connected to the Boltzmann-Gibbs distribution as follows:\begin{equation}\nonumber\begin{aligned} &\frac{e^{-\beta\tau_i}x_i(\tau_i)}{l_2\left(\frac{1}{N}\sum\limits_{j=1}^Ne^{-\beta\tau_j}x_j(\tau_j)\right)} =\underbrace{\frac{e^{-\beta\tau_i}x_i(\tau_i)}{\sum\limits_{j=1}^Ne^{-\beta\tau_j}x_j(\tau_j)}}_{\text{Boltzmann-Gibbs distribution}}\cdot\underbrace{l_2^{-1}N}_{\text{Scaled large population size or market capacity}} \end{aligned}\end{equation}There is some literature discussing the Boltzmann-Gibbs distribution in economics and wealth allocation, such as the agent-based DSGE (dynamic stochastic general equilibrium) model (\cite{BM}). \end{remark}
\begin{remark} From the viewpoint of panel data analysis in statistics, the average term $\frac{1}{N}\sum\limits_{j=1}^Ne^{-\beta\tau_j}x_j(\tau_j)$ in \eqref{tax} can be viewed as cross-section data (or longitudinal data) built on the truncated terminal wealth $\{x_j\}_{j=1}^{N}$ over all terminated times $\{\tau_j\}_{j=1}^{N}$. Note that different agents will execute different stopping times even though these may have the same distribution. Moreover, the affine function $l_1+l_2(\frac{1}{N}\sum\limits_{j=1}^Ne^{-\beta\tau_j}x_j(\tau_j))$ can represent some market primitive, for example, a proportional (revenue) tax rate which consists of two parts: the tax basis $l_1>0$ and the surplus tax part, which is monotonic in the industry average $\frac{1}{N}\sum\limits_{j=1}^Ne^{-\beta\tau_j}x_j(\tau_j)$ for $l_2>0.$ Therefore, the criterion \eqref{tax} actually represents some after-tax stopped value based on the tax numeraire $l_1+l_2(\frac{1}{N}\sum\limits_{j=1}^Ne^{-\beta\tau_j}x_j(\tau_j))$. \end{remark}
\subsection{Convex combination of relative performance}
Now we present more details of the convex combination, which is important for constructing our functional. One example of a convex functional can be found in the recent paper \cite{ET}:\begin{equation}\label{convex}\mathbb{E}U_i\left((1-\lambda)X^{i}_{T}+\lambda\left(X^{i}_{T} -\frac{1}{N}\sum_{j=1}^{N}X^{j}_{T}\right)\right)\end{equation}where $U_i$ is the utility function of investor $\mathcal{A}_i$ while $\frac{1}{N}\sum_{j=1}^{N}X^{j}_{T}$ is the average behavior of all investors. It can be understood as the investors' \emph{relative wealth concern}, which may help explain financial bubbles and negative risk premia. First, agents may display ``external habit formation'' (EHF) in their preferences. In this case, the utility of the investors depends on the wealth of their peers (the ``Joneses'') and investors bias their portfolio holdings towards securities which are correlated with the wealth of their peers so as to ``keep up with the Joneses''. Second, relative wealth concerns may also arise endogenously, without assuming EHF preferences. \cite{DeMarzo} shows that individuals with standard preferences might care about the wealth of their peers because competition for non-diversifiable assets in limited supply drives their price up; if investors cannot compete in wealth with their peers they might be left out of the market.
Another supporting example of convex combination is given in \cite{NCMH}, where the LQG mean-field game with leader-follower control is discussed. Specifically, \cite{NCMH} introduces a convex combination in the cost functional based on a trade-off between moving towards a common reference trajectory and keeping cohesion of the flock of leaders by also tracking their centroid: \begin{equation}\label{convex2} \phi^{L}(z^{L, N_{L}})(\cdot):=\lambda h(\cdot)+(1-\lambda)z^{L, N_{L}}(\cdot)\end{equation}where $\lambda \in (0,1)$ is the scalar convex index, $h(\cdot)$ is some reference trajectory, while $z^{L, N_{L}}(\cdot):=\frac{1}{N_{L}}\sum_{i=1}^{N_{L}}z_{i}^{L}$ is the centroid of the leader group.
\subsection{Optimal stopping of MFG with relative performance (I)}
Motivated by the above relative preference functionals \eqref{ratiohabit}, \eqref{tax} and convex functionals \eqref{convex}, \eqref{convex2}, we introduce the following optimal stopping problem with relative payoff functional: \begin{equation}\label{1e4} \max_{\tau_i,\tau_{-i} \in \mathcal{S}}\mathcal{J}_i(\tau_i,\tau_{-i})=\max_{\tau_i,\tau_{-i} \in \mathcal{S}}\mathbb{E}\left\{{C}_{\theta}(\tau_i,\tau_{-i})-e^{-\beta\tau_i}K\right\} \end{equation}where the convex combination of relative performance ${C}_{\theta}(\tau_i,\tau_{-i})$ is given by\begin{equation}\label{e6} {C}_{\theta}(\tau_i,\tau_{-i})=\theta \underbrace{e^{-\beta\tau_i}x_i(\tau_i)}_{\text{absolute performance}}+(1-\theta)\underbrace{\frac{e^{-\beta\tau_i}x_i(\tau_i)}{l_1+l_2\left(\frac{1}{N}\sum\limits_{j=1}^Ne^{-\beta\tau_j}x_j(\tau_j)\right)}}_{\text{affine relative performance}}. \end{equation}In full detail, the payoff functional for agent $\mathcal{A}_i$ can be written as follows:\begin{equation}\label{1e5} \mathcal{J}_i(\tau_i,\tau_{-i})=\mathbb{E}\left\{\theta e^{-\beta\tau_i}x_i(\tau_i)+(1-\theta)\frac{e^{-\beta\tau_i}x_i(\tau_i)}{l_1+l_2\left(\frac{1}{N}\sum\limits_{j=1}^Ne^{-\beta\tau_j}x_j(\tau_j)\right)}-e^{-\beta\tau_i}K\right\}. \end{equation}Here, $\theta\in[0,1]$ is the convex weight index. The stopping time $\tau_i \in \mathcal{S},$ the set of all stopping times of the filtration $\{\mathcal{F}_t\}_{t \geq 0}$. We write the above relative performance functional as $\mathcal{J}_i(\tau_i,\tau_{-i})$ to emphasize its dependence on both $\tau_i$ and $\tau_{-i}$ due to the weak coupling structure in the payoff functionals. More explanations of the convex combination are as follows. \begin{remark} (\romannumeral1) If $\theta=0$, \eqref{1e5} becomes \begin{equation}\nonumber \mathcal{J}_i(\tau_i,\tau_{-i})=\mathbb{E}\left\{ e^{-\beta\tau_i} \Bigg[ \frac{x_i(\tau_i)}{l_1+l_2\left(\frac{1}{N}\sum\limits_{j=1}^Ne^{-\beta\tau_j}x_j(\tau_j)\right)}-K\Bigg]\right\}. \end{equation} In this case, only the relative performance, obtained by comparing the selling values among all agents, is considered.
(\romannumeral2) If $\theta=1$, \eqref{1e5} takes the following form\begin{equation}\nonumber \mathcal{J}_i(\tau_i,\tau_{-i})=\mathbb{E}\Bigg\{e^{-\beta\tau_i}\Big( x_i(\tau_i)-K\Big)\Bigg\} \end{equation} which is the standard performance criterion for the classical \emph{best time to sell} problem. \end{remark}
By taking different values of $\theta$ in \eqref{1e5}, the individual agents can maximize the expectation of the trade-off between the classical criterion and the relative performance criterion. This functional is based on a balance between the absolute selling value and the cohesion with other peers. Now, we formulate the following large-population optimal stopping problem. \\
\textbf{Problem (I)} Find a stopping strategy set $\mathcal{T}=(\tau_1, \cdots, \tau_{N})$ to maximize $\mathcal{J}_i(\tau_i,\tau_{-i})$ where $\tau_i \in \mathcal{S}^{i}$ for $1 \leq i \leq N.$\\
\begin{remark}Note that here we consider $\tau_i$ to be taken from $\mathcal{S}^{i}$, the set of all stopping times of the filtration $\{\mathcal{F}^{i}_t\}_{t \geq 0}$. In that case, we call $\tau_i$ a \emph{decentralized} stopping rule, as it needs only to be adapted to the filtration generated by the individual agent $\mathcal{A}_i.$\end{remark}
\subsection{Optimal stopping of MFG with relative performance (II)}
Now, we present another optimal stopping problem arising from the large-population system. We still consider a geometric Brownian motion (GBM) for each individual agent: \begin{equation}\nonumber\left\{ \begin{aligned} &dx_i(t)=\alpha x_i(t)dt+\sigma x_i(t)dW_i(t),\\ &x_i(0)=x. \end{aligned}\right.\end{equation}We consider a firm producing some natural resource (crude oil, natural gas, etc.) with the market price process given by $x_i$. The running profit of this production is given by a nondecreasing function $f$ depending on the market price. The given firm may decide at any time to stop the production at a fixed constant cost $K$. Therefore, the real option value of the firm can be measured by the following optimal stopping problem:\begin{equation}\max_{\tau_i \in \mathcal{S}^{i}}{J}_i(\tau_i)=\max_{\tau_i \in \mathcal{S}^{i}}\mathbb{E}\left\{\int_0^{\tau_i} e^{-\beta t}f(x_i(t))dt-e^{-\beta\tau_i}K\right\}. \end{equation}We assume $f(\cdot)>0$ is a nondecreasing and Lipschitz continuous function. Considering the relative performance, similarly to \eqref{1e5}, we can introduce the following payoff functional \begin{equation}\label{1e8} \mathcal{J}_i(\tau_i,\tau_{-i})=\mathbb{E}\left\{{C}_{\theta}(\tau_i,\tau_{-i})-e^{-\beta\tau_i}K\right\} \end{equation}where$${C}_{\theta}(\tau_i,\tau_{-i})= \Bigg[\theta \int_0^{\tau_i} e^{-\beta t}f(x_i(t))dt+\frac{(1-\theta)\int_0^{\tau_i}e^{-\beta t}f(x_i(t))dt}{l_1+l_2\left(\frac{1}{N}\sum\limits_{j=1}^N \int_0^{\tau_j}e^{-\beta t}f(x_j(t))dt\right)}\Bigg].$$Now, we formulate the following large-population optimal stopping problem.\\
\textbf{Problem (II)} Find a stopping strategy set $\mathcal{T}=(\tau_1, \cdots, \tau_{N})$ to maximize $\mathcal{J}_i(\tau_i,\tau_{-i})$ where $\tau_i \in \mathcal{S}^{i}$ for $1 \leq i \leq N.$
\section{Consistency condition of mean-field optimal stopping problem}
\subsection{Consistency condition for Problem (I)} Now we first consider the consistency condition of Problem (I). In view of the payoff functional \eqref{1e5}, we introduce \begin{equation}\nonumber \widetilde{\theta}_1=\widetilde{\theta}_1(N, \mathcal{T}, \{x_j\}_{j=1}^{N}):=\theta+\frac{1-\theta}{l_1+l_2\left(\frac{1}{N}\sum\limits_{j=1}^Ne^{-\beta\tau_j}x_j(\tau_j)\right)}. \end{equation}Since $\widetilde{\theta}_1\neq0$, we can define $K'_1=\frac{K}{\widetilde{\theta}_1}$. Then the payoff functional \eqref{1e5} becomes\begin{equation}\nonumber \mathcal{J}_i(\tau_i,\tau_{-i})=\mathbb{E}\Bigg\{\widetilde{\theta}_1e^{-\beta\tau_i}\Big(x_i(\tau_i)-K'_1\Big)\Bigg\}. \end{equation}Considering decentralized optimal stopping rules $\tau_i \in \mathcal{S}^{i}$ (so that the terms $e^{-\beta\tau_j}x_j(\tau_j)$, $1\leq j\leq N$, are independent and identically distributed), by the law of large numbers we have \begin{equation}\nonumber \lim_{N\rightarrow+\infty}\frac{1}{N}\sum\limits_{j=1}^Ne^{-\beta\tau_j}x_j(\tau_j)=\mathbb{E}\Big(e^{-\beta\tau}x(\tau)\Big) \end{equation}for some $(\tau, x)$ following the same distribution as $(\tau_j, x_j)$ for $1\leq j\leq N$. Then we obtain\begin{equation}\label{e8}\left\{ \begin{aligned} &\bar{\theta}_1:=\lim_{N\rightarrow+\infty}\widetilde{\theta}_1=\theta+\frac{1-\theta}{l_1+l_2\mathbb{E}\Big(e^{-\beta\tau}x(\tau)\Big)},\\ &\bar{K}_1:=\lim_{N\rightarrow+\infty}K'_1=\frac{K}{\bar{\theta}_1}. \end{aligned}\right.\end{equation}Next, we obtain the following auxiliary mean-field optimal stopping functional \begin{equation}\label{e9} J_i(\tau_i)=\mathbb{E}\Bigg\{\bar{\theta}_1e^{-\beta\tau_i}\Big(x_i(\tau_i)-\bar{K}_1\Big)\Bigg\}=\bar{\theta}_1\mathbb{E}\Bigg\{e^{-\beta\tau_i}\Big(x_i(\tau_i) -\bar{K}_1\Big)\Bigg\} \end{equation} where $\bar{\theta}_1$ and $\bar{K}_1$ are defined in \eqref{e8}.
Now, we formulate the auxiliary \emph{limiting} mean-field optimal stopping problem for Problem (\textbf{I}) (for short, we denote it by (\textbf{LI})).\\
\textbf{Problem (LI)} Find a stopping strategy set $\mathcal{T}=(\tau_1, \cdots, \tau_{N})$ to maximize $J_i(\tau_i)$ for $1 \leq i \leq N$ where $\tau_i \in \mathcal{S}^{i}.$
Note that the maximization of $J_i(\tau_i)$ is equivalent to the maximization of $\mathbb{E}\{e^{-\beta\tau_i}(x_i(\tau_i) -\bar{K}_1)\}.$ Introduce the value function$$v(x):=\sup\limits_{\tau_i\in \mathcal{S}^{i}}\mathbb{E}\Bigg\{e^{-\beta\tau_i}\Big(x_i(\tau_i)-\bar{K}_1\Big)\Bigg\}.$$Then from the standard results (e.g., \cite{o}, \cite{p}, \cite{s}), we have \begin{equation}\nonumber \beta v(x)-\alpha x v'(x)-\frac{1}{2}\sigma^2x^2v''(x)=0, \end{equation} for $x<x^*$ and $v(x)$ should be represented by\begin{equation}\label{valuefunction1}v(x)=Ax^{k_1}+Bx^{k_2}\end{equation}for some $(A, B).$ Here, $(k_1, k_2)$ are solutions of the following quadratic equation: $$\frac{1}{2}\sigma^{2}k^{2}+(\alpha-\frac{1}{2}\sigma^{2})k-\beta=0.$$ It is checkable that\begin{equation}\label{k12}\left\{ \begin{aligned} &k_1=\frac{1}{2}-\frac{\alpha}{\sigma^2}-\sqrt{\Big(\frac{1}{2}-\frac{\alpha}{\sigma^2}\Big)^2+\frac{2\beta}{\sigma^2}}<0,\\ &k_2=\frac{1}{2}-\frac{\alpha}{\sigma^2}+\sqrt{\Big(\frac{1}{2}-\frac{\alpha}{\sigma^2}\Big)^2+\frac{2\beta}{\sigma^2}}>1. \end{aligned}\right.\end{equation}We assume $\beta >\alpha,$ then the \emph{decentralized} optimal stopping rule for $\mathcal{A}_i$ is characterized by \begin{equation}\label{e11} \tau_i^{*}=\inf\{t:\ x_i(t)\geq x^*\},\ \ x^*=\bar{K}_1\cdot\frac{k_2}{k_2-1}. \end{equation}Then we can reformulate as\begin{equation}\nonumber \left\{ \begin{aligned} &\tau_i^{*}=\inf\{t:\ W_i(t)\geq a't+b'\},\\ &a'=\frac{\sigma}{2}-\frac{\alpha}{\sigma}, \quad b'=\frac{1}{\sigma}\ln\left(\frac{x^*}{x}\right)=\frac{1}{\sigma}\ln\left(\frac{\bar{K}_1}{x}\cdot\frac{k_2}{k_2-1}\right). \end{aligned}\right.\end{equation} We can construct $\tau^{*} \sim \tau_i^{*}$ (with the same distribution) by \begin{equation}\label{e13} \tau^{*}=\inf\{t:\ W(t)\geq a't+b'\}. \end{equation}for some Brownian motion $W.$ For $\forall\ \lambda\in \mathbb{R},\ \mathbb{E}\Big(e^{\lambda W_\tau-\frac{1}{2}\lambda^2\tau}\Big)=1$ thus\begin{equation}\label{e15} \mathbb{E}\Big(e^{-\beta\tau^{*}}\Big)=e^{-\lambda b'} \end{equation}for $\lambda$ satisfying\begin{equation}\lambda^2-2a'\lambda-2\beta=0.\end{equation} Note that $\beta>0$ implies $\Delta=4(a')^2+8\beta>0$ thus the solution pair becomes\begin{equation}\left\{ \begin{aligned} &\lambda_+=a'+\sqrt{(a')^2+2\beta},\\ &\lambda_-=a'-\sqrt{(a')^2+2\beta}. \end{aligned}\right.\end{equation}Further it is checkable that $\lambda_-=k_1\sigma, \lambda_+=k_2\sigma.$ Notice the stopping rule \eqref{e11} and $x_i(0)=x$, so we assume $x<x^*$ thus $b'=\frac{1}{\sigma}\ln\left(\frac{x^*}{x}\right)>0$. By $\beta>0,\tau\geq 0$, we have\begin{equation} \mathbb{E}\Big(e^{-\beta\tau^{*}}\Big)=e^{-\lambda_{+} b'}. \end{equation}It can be calculated that \begin{equation}\nonumber l_1+l_2\Big(\frac{K}{\bar{\theta}_1}\cdot\frac{k_2}{k_2-1}\Big)^{1-\frac{\lambda_+}{\sigma}}x^{\frac{\lambda_{+}}{\sigma}}=\frac{1-\theta} {\bar{\theta}_1-\theta} \end{equation}thus we have\begin{equation}\nonumber l_1+l_2\Big(\frac{K}{\bar{\theta}_1}\cdot\frac{k_2}{k_2-1}\Big)^{1-k_2}x^{k_2}=\frac{1-\theta}{\bar{\theta}_1-\theta}. \end{equation}In particular, if $\theta=0, l_1=0$, \begin{equation}\label{e17} \bar{\theta}_1=x^{-1}l_2^{-\frac{1}{k_2}}\Big(\frac{Kk_2}{k_2-1}\Big)^{\frac{k_2-1}{k_2}}. \end{equation}When $0<\theta\leq1$, the equation \begin{equation}\nonumber l_2\bar{\theta}_1^{k_2-1}\Big(\frac{Kk_2}{k_2-1}\Big)^{1-k_2}x^{k_2}+l_1=\frac{1-\theta}{\bar{\theta}_1-\theta} \end{equation}becomes the consistency condition for $\bar{\theta}_1$. 
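For clarity, the computation behind the last two displays can be sketched as follows. Under the threshold rule \eqref{e11} with $x<x^*$, one has $x(\tau^{*})=x^{*}$ on $\{\tau^{*}<+\infty\}$, hence\begin{equation}\nonumber \mathbb{E}\Big(e^{-\beta\tau^{*}}x(\tau^{*})\Big)=x^{*}\,\mathbb{E}\Big(e^{-\beta\tau^{*}}\Big)=x^{*}e^{-\lambda_{+}b'}=x^{*}\Big(\frac{x}{x^{*}}\Big)^{k_2}=(x^{*})^{1-k_2}x^{k_2}. \end{equation}Plugging this into \eqref{e8} yields $l_1+l_2(x^{*})^{1-k_2}x^{k_2}=\frac{1-\theta}{\bar{\theta}_1-\theta}$, and inserting $x^{*}=\frac{K}{\bar{\theta}_1}\cdot\frac{k_2}{k_2-1}$ from \eqref{e11} gives the consistency condition displayed above.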
\begin{proposition}The consistency condition \begin{equation}\label{nce1} l_2\bar{\theta}_1^{k_2-1}\Big(\frac{Kk_2}{k_2-1}\Big)^{1-k_2}x^{k_2}+l_1=\frac{1-\theta}{\bar{\theta}_1-\theta} \end{equation}admits a unique solution. \end{proposition}
\begin{proof}LHS: note that $k_2>1,$ and $f(\bar{\theta}_1):=l_2\bar{\theta}_1^{k_2-1}\Big(\frac{Kk_2}{k_2-1}\Big)^{1-k_2}x^{k_2}+l_1$ is an increasing function of $\bar{\theta}_1$ with $\lim\limits_{\bar{\theta}_1\longrightarrow +\infty}f(\bar{\theta}_1)=+\infty$ and $\lim\limits_{\bar{\theta}_1\longrightarrow \theta^{+}}f(\bar{\theta}_1)<+\infty.$ RHS: note that $\bar{\theta}_1>\theta,$ and $g(\bar{\theta}_1):=\frac{1-\theta}{\bar{\theta}_1-\theta}$ is a decreasing function of $\bar{\theta}_1$ with $\lim\limits_{\bar{\theta}_1\longrightarrow +\infty}g(\bar{\theta}_1)=0$ and $\lim\limits_{\bar{\theta}_1\longrightarrow \theta^{+}}g(\bar{\theta}_1)=+\infty.$ Therefore, the consistency condition equation \eqref{nce1}: $f(\bar{\theta}_1)=g(\bar{\theta}_1)$ always admits one unique solution $\bar{\theta}_1$.\end{proof} Moreover, we can also calculate $v(x)$ as follows:\begin{equation}\nonumber\begin{aligned} v(x)&=\sup\limits_{\tau_i\in \mathcal{S}^{i}}\mathbb{E}\Bigg\{e^{-\beta\tau_i}\Big(x_i(\tau_i)-\bar{K}_1\Big)\Bigg\} =\mathbb{E}\Bigg\{e^{-\beta\tau_i^{*}}\Big(x_i(\tau_i^{*})-\bar{K}_1\Big)\Bigg\}\\& =\mathbb{E}\{e^{-\beta\tau_i^{*}}(x^{*}-\bar{K}_1)\}=(x^{*}-\bar{K}_1)\mathbb{E}\{e^{-\beta\tau_i^{*}}\} \\&=\frac{K}{\bar{\theta}_1}\cdot\frac{1}{k_2-1}e^{-\lambda_{+}b'}=\frac{K}{\bar{\theta}_1}\cdot\frac{1}{k_2-1}e^{-\frac{\lambda_{+}}{\sigma}\ln\left(\frac{\bar{K}_1}{x}\cdot\frac{k_2}{k_2-1}\right)} \\&=\frac{1}{k_2}\left(\frac{K}{\bar{\theta}_1}\cdot \frac{k_2}{k_2-1}\right)^{1-k_2}x^{k_2}\\&=\frac{1}{k_2}\frac{1}{(x^{*})^{k_2-1}}x^{k_2}. \end{aligned}\end{equation}In other words, $A=0, B=\frac{1}{k_2}\frac{1}{(x^{*})^{k_2-1}}$ in the value function representation \eqref{valuefunction1}. This result coincides with that of \cite{p}, p. 106, but note that here $x^{*}$ actually depends on $\bar{\theta}_1$, which is determined by the NCE equation \eqref{nce1}.
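As a purely illustrative numerical remark (and not part of the analysis above), the fixed point in \eqref{nce1} can be located by simple bisection, since $f(\bar{\theta}_1)-g(\bar{\theta}_1)$, with $f$ and $g$ as in the proof, is strictly increasing on $(\theta,+\infty)$ and changes sign exactly once. A minimal sketch in Python, with hypothetical parameter values chosen only for demonstration, is as follows:
\begin{verbatim}
import math

def k2(alpha, sigma, beta):
    # positive root of 0.5*sigma^2*k^2 + (alpha - 0.5*sigma^2)*k - beta = 0
    a = 0.5 - alpha / sigma**2
    return a + math.sqrt(a**2 + 2.0 * beta / sigma**2)

def residual(tb, theta, l1, l2, K, x, k):
    # f(theta_bar) - g(theta_bar), in the notation of the proof above
    f = l2 * tb**(k - 1.0) * (K * k / (k - 1.0))**(1.0 - k) * x**k + l1
    g = (1.0 - theta) / (tb - theta)
    return f - g

def solve_theta_bar(theta, l1, l2, K, x, alpha, sigma, beta):
    k = k2(alpha, sigma, beta)
    lo, hi = theta + 1e-12, theta + 1.0
    while residual(hi, theta, l1, l2, K, x, k) < 0.0:
        hi *= 2.0  # enlarge the bracket until the residual changes sign
    for _ in range(200):  # plain bisection on the increasing residual
        mid = 0.5 * (lo + hi)
        if residual(mid, theta, l1, l2, K, x, k) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# hypothetical parameters, for illustration only (beta > alpha)
print(solve_theta_bar(theta=0.5, l1=1.0, l2=1.0, K=1.0,
                      x=0.5, alpha=0.02, sigma=0.3, beta=0.05))
\end{verbatim}
The resulting $\bar{\theta}_1$ then yields the decentralized threshold $x^{*}=\frac{K}{\bar{\theta}_1}\cdot\frac{k_2}{k_2-1}$ in \eqref{e11}.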
\subsection{Consistency condition for Problem (II)}
Now we discuss the consistency condition of Problem (II). Denote\begin{equation}\nonumber \widetilde{\theta}_2=\widetilde{\theta}_2(N, \mathcal{T}, \{x_j\}_{j=1}^{N}):=\theta+\frac{1-\theta}{l_1+l_2\left(\frac{1}{N}\sum\limits_{j=1}^N\int_0^{\tau_j}e^{-\beta t}f(x_j(t))dt\right)}. \end{equation}By the law of large numbers, we have \begin{equation}\nonumber \lim_{N\rightarrow+\infty}\frac{1}{N}\sum\limits_{j=1}^N\int_0^{\tau_j}e^{-\beta t}f(x_j(t))dt=\mathbb{E}\int_0^{\tau}e^{-\beta t}f(x(t))dt \end{equation} where $\tau$ follows the same distribution as $\tau_j$, $1\leq j\leq N$, and $x(\cdot)$ satisfies \begin{equation}\nonumber dx(t)=\alpha x(t)dt+\sigma x(t)dW(t),\ \ x(0)=x. \end{equation} Then we introduce \begin{equation}\label{E8}\begin{aligned} \bar{\theta}_2:=\lim_{N\rightarrow+\infty}\widetilde{\theta}_2=\theta+\frac{1-\theta}{l_1+l_2\left(\mathbb{E}\int_0^{\tau}e^{-\beta t}f(x(t))dt\right)}. \end{aligned}\end{equation} Define $\bar{K}_2:=\frac{K}{\bar{\theta}_2}.$ Then we obtain the auxiliary mean-field optimal stopping functional as \begin{equation}\label{E9} J_i(\tau_i)=\mathbb{E}\Bigg\{\bar{\theta}_2\int_0^{\tau_i}e^{-\beta t}f(x_i(t))dt-e^{-\beta\tau_i}K\Bigg\}=\bar{\theta}_2\mathbb{E}\Bigg\{\int_0^{\tau_i}e^{-\beta t}f(x_i(t))dt-e^{-\beta\tau_i}\bar{K}_2\Bigg\}. \end{equation}The maximization of $J_i(\tau_i)$ is equivalent to the maximization of $\mathbb{E}\Bigg\{\int_0^{\tau_i}e^{-\beta t}f(x_i(t))dt-e^{-\beta\tau_i}\bar{K}_2\Bigg\}$ since $\bar{\theta}_2>0.$ Now, we formulate the auxiliary limiting mean-field optimal stopping problem (for short, we denote it by (\textbf{LII})).\\
\textbf{Problem (LII)} Find a stopping strategy set $\mathcal{T}=(\tau_1, \cdots, \tau_{N})$ to maximize $J_i(\tau_i)$ for $1 \leq i \leq N$ where $\tau_i \in \mathcal{S}^{i}.$
Denote $$v(x):=\sup\limits_{\tau_i\in \mathcal{S}^{i}}\mathbb{E}\Bigg\{\int_0^{\tau_i}e^{-\beta t}f(x_i(t))dt-e^{-\beta\tau_i}\bar{K}_2\Bigg\}$$which satisfies \begin{equation}\nonumber \beta v(x)-\alpha x v'(x)-\frac{1}{2}\sigma^2x^2v''(x)-f(x)=0 \end{equation} for $x>x^*,$ and the associated optimal stopping rule is \begin{equation}\nonumber \tau_i^{*}=\inf\{t:\ x_i(t)\leq x^*\}. \end{equation}Define the resolvent operator$$\mathcal{R}_{\beta}f(x)=p(x):=\mathbb{E}\int_0^{+\infty}e^{-\beta t}f(x(t))dt.$$ We have the following representation$$v(x)=Ax^{k_1}+Bx^{k_2}+p(x)$$where $(k_1, k_2)$ are given by \eqref{k12}. Moreover, $B=0$ because of the linear growth condition, and the coefficient $A$ together with the cut-off level $x^{*}$ should be jointly determined by \begin{equation}\label{E12}\left\{\begin{aligned} &A(x^*)^{k_1}+p(x^*)=-\bar{K}_2,\\ &k_1A(x^*)^{k_1-1}+p'(x^*)=0. \end{aligned}\right.\end{equation} By choosing $\tau_i^{*}$, we obtain \begin{equation}\nonumber \begin{aligned} v(x)&=Ax^{k_1}+p(x)=\mathbb{E}\int_0^{\tau_i^{*}}e^{-\beta t}f(x_i(t))dt-\bar{K}_2\mathbb{E}\Big(e^{-\beta\tau_i^{*}}\Big). \end{aligned}\end{equation}Further \begin{equation}\nonumber \begin{aligned} \mathbb{E}\int_0^{\tau_i^{*}}e^{-\beta t}f(x_i(t))dt&=Ax^{k_1}+p(x)+\bar{K}_2\mathbb{E}\Big(e^{-\beta\tau_i^{*}}\Big). \end{aligned}\end{equation}We have \begin{equation}\label{E16} \tau_i^{*}=\inf\{t:\ W_i(t)\leq a't+b'\} \end{equation}where the cut-off condition is given by$$a'=\frac{\sigma}{2}-\frac{\alpha}{\sigma},\ \ b'=\frac{1}{\sigma}\ln\left(\frac{x^*}{x}\right).$$ As before, we have\begin{equation}\nonumber \mathbb{E}\Big(e^{-\beta\tau_i^{*}}\Big)=e^{-\lambda b'} \end{equation}for $\lambda a'-\frac{1}{2}\lambda^2=-\beta.$ Noting $x>x^*$, we have $b'=\frac{1}{\sigma}\ln\left(\frac{x^*}{x}\right)<0$. By $\beta>0,\tau\geq 0$, we now choose the negative root $$\lambda_-=a'-\sqrt{(a')^2+2\beta}.$$ Thus it follows that $k_1=\frac{\lambda_-}{\sigma}<0$. We have\begin{equation}\nonumber \mathbb{E}\Big(e^{-\beta\tau_i^{*}}\Big)=e^{-\frac{\lambda_- }{\sigma}\ln\left(\frac{x^*}{x}\right)}=e^{-k_1\ln\left(\frac{x^*}{x}\right)}=\left(\frac{x^*}{x}\right)^{-k_1}. \end{equation} Therefore,\begin{equation}\label{E19} Ax^{k_1}+p(x)+\frac{K}{\bar{\theta}_2}\left(\frac{x^{*}}{x}\right)^{-k_1}=\frac{1}{l_2}\left[\frac{1-\theta}{\bar{\theta}_2-\theta}-l_1\right]. \end{equation} In summary, \eqref{E12} and \eqref{E19} are called the NCE consistency condition for $(A, x^{*}, \bar{\theta}_2)$. \begin{theorem} The NCE consistency condition system \begin{equation}\label{E20}\left\{ \begin{aligned} &A(x^*)^{k_1}+p(x^*)=-\frac{K}{\bar{\theta}_2},\\ &k_1A(x^*)^{k_1-1}+p'(x^*)=0,\\ &Ax^{k_1}+p(x)+\frac{K}{\bar{\theta}_2}\left(\frac{x^{*}}{x}\right)^{-k_1}=\frac{1}{l_2}\left[\frac{1-\theta}{\bar{\theta}_2-\theta}-l_1\right] \end{aligned}\right.\end{equation}admits a unique solution $(A, x^{*}, \bar{\theta}_2)$ where$$p(x):=\mathbb{E}\int_0^{+\infty}e^{-\beta t}f(x(t))dt.$$ \end{theorem} \begin{proof}From the first equation of \eqref{E20}, we have $\bar{\theta}_2=-\frac{K}{A(x^{*})^{k_1}+p(x^{*})}$, and thus \begin{equation}\nonumber Ax^{k_1}+p(x)-(A(x^{*})^{k_1}+p(x^{*}))\left(\frac{x^{*}}{x}\right)^{-k_1}=\frac{1}{l_2}\left[\frac{1-\theta}{-\left(\frac{K}{A(x^{*})^{k_1}+p(x^{*})}+\theta\right)}-l_1\right].
\end{equation}Also, from the second equation of \eqref{E20}, we have $A=-\frac{p'(x^{*})}{k_1(x^{*})^{k_1-1}}$ therefore \begin{equation}\begin{aligned}\nonumber &-\frac{p'(x^{*})}{k_1(x^{*})^{k_1-1}}x^{k_1}+p(x)-\left(-\frac{p'(x^{*})}{k_1(x^{*})^{k_1-1}}(x^{*})^{k_1}+p(x^{*})\right)\left(\frac{x^{*}}{x}\right)^{-k_1}\\ &=\frac{1}{l_2}\left[\frac{1-\theta}{-\left(\frac{K}{-\frac{p'(x^{*})}{k_1(x^{*})^{k_1-1}}(x^{*})^{k_1}+p(x^{*})}+\theta\right)}-l_1\right]. \end{aligned}\end{equation}Rearrange the above terms and noting that $x \neq 0,$ we have\begin{equation}\begin{aligned}\nonumber &\frac{p(x)}{x^{k_1}}-\frac{p(x^{*})}{(x^{*})^{k_1}}=-\frac{1}{l_2}\frac{(1-\theta)}{\left(\frac{Kk_1}{k_1p(x^{*})-p'(x^{*})x^{*}}+\theta\right)}-\frac{l_1}{l_2}. \end{aligned}\end{equation}Note that $x, l_1, l_2$ are known, thus we get the following equation of $x^{*}$ only:\begin{equation}\begin{aligned} &-\left(\frac{p(x)}{x^{k_1}}+\frac{l_1}{l_2}\right)+\frac{p(x^{*})}{(x^{*})^{k_1}}=\frac{1}{l_2}\frac{(1-\theta)}{\left(\frac{Kk_1}{k_1p(x^{*})-p'(x^{*})x^{*}}+\theta\right)}. \end{aligned}\end{equation}We have\begin{equation}\left\{\begin{aligned} &A=-\frac{p'(x^{*})}{k_1(x^{*})^{k_1-1}},\\ &\bar{\theta}_2=-\frac{K}{A(x^{*})^{k_1}+p(x^{*})}=\frac{Kk_1}{k_1p(x^{*})-p'(x^{*})x^{*}}.\\ \end{aligned}\right.\end{equation} \end{proof}
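As a minimal numerical illustration (with purely hypothetical parameter values), the resolvent $p(x)=\mathcal{R}_{\beta}f(x)$ entering \eqref{E20} can be approximated by Monte Carlo simulation when no closed form is available; the test case $f(x)=x$, for which $p(x)=x/(\beta-\alpha)$ whenever $\beta>\alpha$, serves as a sanity check.
\begin{verbatim}
# Monte Carlo evaluation of the resolvent p(x) = E int_0^infinity e^{-beta t} f(x(t)) dt
# for the GBM dx = alpha*x dt + sigma*x dW, x(0) = x0. Parameter values are
# purely illustrative; f(x) = x is used as a test case, since then
# p(x0) = x0/(beta - alpha) in closed form (for beta > alpha).
import numpy as np

rng = np.random.default_rng(0)
alpha, sigma, beta, x0 = 0.02, 0.3, 0.08, 5.0
f = lambda y: y

dt, t_max, n_paths = 0.01, 100.0, 20_000     # time step, truncation horizon, paths
x = np.full(n_paths, x0)
disc_int = np.zeros(n_paths)
t = 0.0
while t < t_max:
    disc_int += np.exp(-beta * t) * f(x) * dt
    x *= np.exp((alpha - 0.5 * sigma**2) * dt
                + sigma * np.sqrt(dt) * rng.standard_normal(n_paths))
    t += dt

print(disc_int.mean(), x0 / (beta - alpha))  # Monte Carlo estimate vs closed form
\end{verbatim}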
\section{Asymptotic analysis of $\epsilon-$Nash equilibrium}We discuss the asymptotic near-optimal property of Problem (I). To start, we first introduce the $\epsilon-$Nash equilibrium property to our mean-field optimal stopping problem. \begin{definition}A set of stopping strategies $\mathcal{T}=(\tau_1, \cdots, \tau_{N})$ for agents $\{\mathcal{A}_i\}_{1 \leq i \leq N},$ is called an $\epsilon-$Nash equilibrium with respect to the payoff functionals $\mathcal{J}_i, 1 \leq i \leq N,$ if there exists $\epsilon>0$ such that for any fixed $1 \leq i \leq N,$ it satisfies that \begin{equation} \mathcal{J}_{i}(\hat{\tau}_i, \tau_{-i})-\epsilon \leq \mathcal{J}_{i}(\tau_i, \tau_{-i})\end{equation}when any alternative optimal stopping $\hat{\tau}_i \in \mathcal{S}^{i}$ is applied by the agent $\mathcal{A}_i.$ \end{definition}Note that here $\mathcal{S}^{i}$ is the set of all stopping times of the filtration $\{\mathcal{F}^{i}_t\}_{t \geq 0}.$ \begin{lemma}For any $\tau \in \mathcal{S},$ we have $\mathbb{E}(e^{-\beta \tau} x_{\tau})^{2}<+\infty.$ \end{lemma} \begin{proof}In case $\beta>\alpha+\frac{\sigma^{2}}{2},$ we have\begin{equation}\nonumber\begin{aligned} &\mathbb{E}(\hat{y}_i)^{2}=\mathbb{E}\left(e^{-\beta \hat{\tau}_i} x_i(\hat{\tau}_i)\right)^{2}=\mathbb{E}\left(e^{-2\beta \hat{\tau}_i} \cdot e^{2(\alpha-\frac{1}{2}\sigma^{2})\hat{\tau}_i+2\sigma W_{i}(\hat{\tau}_i)}\right)\\ &=\mathbb{E}\left(e^{2\sigma W_{i}(\hat{\tau}_i)-2\sigma^{2}\hat{\tau}_i} \cdot e^{\left(\sigma^{2}+2(\alpha-\beta)\right)\hat{\tau}_i}\right)\\ & \leq \mathbb{E}\left(e^{2\sigma W_{i}(\hat{\tau}_i)-2\sigma^{2}\hat{\tau}_i}\right)=1<+\infty. \end{aligned} \end{equation}In case $\alpha<\beta<\alpha+\frac{\sigma^{2}}{2},$ we have\begin{equation}\nonumber\begin{aligned} &\mathbb{E}(\hat{y}_i)^{2}=\mathbb{E}\left(e^{-\beta \hat{\tau}_i} x_i(\hat{\tau}_i)\right)^{2}=\mathbb{E}\left(e^{-2\beta \hat{\tau}_i} \cdot e^{2(\alpha-\frac{1}{2}\sigma^{2})\hat{\tau}_i+2\sigma W_{i}(\hat{\tau}_i)}\right)\\ &=\mathbb{E}\left(e^{2\sigma W_{i}(\hat{\tau}_i)-4\sigma^{2}\hat{\tau}_i} \cdot e^{\left(4\sigma^{2}-2\beta+2(\alpha-\frac{1}{2}\sigma^{2})\right)\hat{\tau}_i}\right)\\ & \leq \sqrt{\mathbb{E}\left(e^{2\sigma W_{i}(\hat{\tau}_i)-4\sigma^{2}\hat{\tau}_i}\right)^{2}}\cdot \sqrt{\mathbb{E}e^{2\left(4\sigma^{2}-2\beta+2(\alpha-\frac{1}{2}\sigma^{2})\right)\hat{\tau}_i}}\\& \leq \sqrt{\mathbb{E}e^{\left(4(\alpha-\beta)+6\sigma^{2}\right)\hat{\tau}_i}}<+\infty. \end{aligned} \end{equation}\end{proof} \begin{theorem}The optimal stopping strategy set $\mathcal{T}^{*}=(\tau_1^{*}, \cdots, \tau_{N}^{*})$ is an $\epsilon-$Nash equilibrium where $\tau_i^{*}=\inf\{t: x_i(t) \geq \bar{K}_1 \cdot \frac{k_2}{k_2-1}\}$ and $\bar{K}_1=\frac{K}{\bar{\theta}_1}$ is determined via the NCE equation \eqref{nce1}. \end{theorem}
\begin{proof}For the sake of presentation, we introduce the following notation:\begin{equation}\left\{ \begin{aligned}\nonumber &y_i(\tau_i):=e^{-\beta\tau_i}x_i(\tau_i), \quad y_{-i}(\tau_{-i}):=\frac{1}{N}\sum\limits_{j=1,j\neq i}^Ne^{-\beta\tau_j}x_j(\tau_j),\\ &y_i^{*}(\tau_i^{*}):=e^{-\beta\tau_i^{*}}x_i(\tau_i^{*}), \quad y_{-i}^{*}(\tau_{-i}^{*}):=\frac{1}{N}\sum\limits_{j=1,j\neq i}^Ne^{-\beta\tau_j^{*}}x_j(\tau_j^{*}). \end{aligned}\right.\end{equation}When all agents apply the optimal stopping strategies $\mathcal{T}^{*}=\{\tau_1^{*}, \cdots, \tau_N^{*}\},$ the payoff functional in Problem (\textbf{I}) becomes
\begin{equation}\nonumber\begin{aligned} &\mathcal{J}_i(\tau_i^{*},\tau_{-i}^{*})\\ =&\mathbb{E}\left\{\Bigg[\theta e^{-\beta\tau_i^{*}}x_i(\tau_i^{*})+(1-\theta)\frac{e^{-\beta\tau_i^{*}}x_i(\tau_i^{*})} {l_1+l_2\left(\frac{1}{N}\sum\limits_{j=1}^Ne^{-\beta\tau_j^{*}}x_j(\tau_j^{*})\right)}\Bigg]-e^{-\beta\tau_i^{*}}K\right\}\\ =&\mathbb{E}\left\{e^{-\beta\tau_i^{*}}x_i(\tau_i^{*})\widetilde{\theta}_1-e^{-\beta\tau_i^{*}}K\right\} \end{aligned} \end{equation}where\begin{equation}\begin{aligned}&\widetilde{\theta}_1:=\theta+\frac{(1-\theta)}{l_1+l_2\left(\frac{1}{N}\sum\limits_{j=1}^Ne^{-\beta\tau_j^{*}}x_j(\tau_j^{*})\right)}\\&=\theta +\frac{(1-\theta)}{l_1+\frac{l_2}{N}e^{-\beta\tau_i^{*}}x_i(\tau_i^{*})+\frac{l_2}{N}\sum\limits_{j=1,j\neq i}^Ne^{-\beta\tau_j^{*}}x_j(\tau_j^{*})}.\end{aligned}\end{equation}Therefore, we have\begin{equation}\widetilde{\theta}_1=\widetilde{\theta}_1(y_i^{*}, y_{-i}^{*})=\theta+\frac{(1-\theta)}{l_1+\frac{l_2y_i^{*}}{N}+l_2y_{-i}^{*}},\end{equation}and consequently\begin{equation} \mathcal{J}_i(\tau_i^{*},\tau_{-i}^{*})=\mathbb{E}\left\{y_i^{*}\widetilde{\theta}_1(y_i^{*}, y_{-i}^{*})-e^{-\beta\tau_i^{*}}K\right\}. \end{equation}On the other hand, the limiting gain functional is\begin{equation}J_{i}(\tau_i^{*})=\mathbb{E}\{\bar{\theta}_1y_i^{*}-e^{-\beta \tau_i^{*}}K\}\end{equation}where
$\bar{\theta}_1:=\theta+\frac{1-\theta}{l_1+l_2\mathbb{E}\Big(e^{-\beta\tau}x(\tau)\Big)}$ is determined through the consistency condition \eqref{nce1}. When all the agents apply the decentralized optimal stopping rules $\mathcal{T}^{*}$, an application of the Cauchy-Schwarz inequality gives:\begin{equation}\begin{aligned}&|\mathcal{J}_i(\tau_i^{*}, \tau_{-i}^{*})-J_{i}(\tau_i^{*})|=|\mathbb{E}\left[y_i^{*}\left(\widetilde{\theta}_1(y_i^{*}, y_{-i}^{*})-\bar{\theta}_1\right)\right]|\\&=(1-\theta)\mathbb{E}\left[y_i^{*}\left(\frac{1}{l_1+ l_2\frac{y_i^{*}}{N}+l_2y_{-i}^{*}}-\frac{1}{l_1+l_2\mathbb{E}\Big(e^{-\beta\tau}x(\tau)\Big)}\right)\right]\\& \leq (1-\theta)\left(\mathbb{E}(y_i^{*})^{2}\right)^{\frac{1}{2}}\cdot \left(\mathbb{E}\frac{\left( l_2\frac{y_i^{*}}{N}+l_2\left(y_{-i}^{*}-\mathbb{E}(e^{-\beta\tau}x(\tau))\right)\right)^{2}}{(l_1+ l_2\frac{y_i^{*}}{N}+l_2y_{-i}^{*})^{2}\big(l_1+l_2\mathbb{E}(e^{-\beta\tau}x(\tau))\big)^{2}}\right)^{\frac{1}{2}}\\ &\leq \frac{2(1-\theta)}{l_1^{2}}\left(\mathbb{E}(y_i^{*})^{2}\right)^{\frac{1}{2}}\cdot \left(\frac{l_2^{2}}{N^{2}}\mathbb{E}(y_i^{*})^{2}+l_2^{2}\mathbb{E}(y_{-i}^{*}-\mathbb{E}(e^{-\beta\tau}x(\tau)))^{2}\right)^{\frac{1}{2}} \\&=O\left(\frac{1}{\sqrt{N}}\right)\end{aligned}\end{equation}Here, note that\begin{equation}\begin{aligned}&\mathbb{E}(y_i^{*})^{2}=\mathbb{E}(e^{-\beta\tau^{*}_i}x_i(\tau^{*}_i))^{2} =(x^{*})^{2}\mathbb{E}(e^{-2\beta\tau^{*}_i})=(x^{*})^{2}e^{-\left(a'+\sqrt{(a')^{2}+4\beta}\right)b'}\\ &=(x^{*})^{2}e^{-\left(a'+\sqrt{(a')^{2}+4\beta}\right)\frac{1}{\sigma}\ln \left(\frac{K}{\bar{\theta}_1}\frac{1}{x}\frac{k_2}{k_2-1}\right)} \\&=(x^{*})^{2}\left(\frac{K}{\bar{\theta}_1}\frac{1}{x}\frac{k_2}{k_2-1}\right)^{-\frac{a'+\sqrt{(a')^{2}+4\beta}}{\sigma}}<+\infty.\end{aligned}\end{equation}When any alternative optimal stopping $\hat{\tau}_i \in \mathcal{S}$ is applied by $\mathcal{A}_i,$ then\begin{equation}\nonumber\begin{aligned} &\mathcal{J}_i(\hat{\tau}_i,\tau_{-i}^{*})\\ =&\mathbb{E}\left\{\Bigg[\theta e^{-\beta\hat{\tau}_i}x_i(\hat{\tau}_i)+(1-\theta)\frac{e^{-\beta\hat{\tau}_i}x_i(\hat{\tau}_i)} {l_1+l_2\left(\frac{1}{N}e^{-\beta\hat{\tau}_i}x_i(\hat{\tau}_i)+\frac{1}{N}\sum\limits_{j=1, j\neq i}^Ne^{-\beta\tau_j^{*}}x_j(\tau_j^{*})\right)}\Bigg]-e^{-\beta\hat{\tau}_i}K\right\}\\ =&\mathbb{E}\left\{e^{-\beta\hat{\tau}_i}x_i(\hat{\tau}_i)\widetilde{\theta}-e^{-\beta\hat{\tau}_i}K\right\}. \end{aligned} \end{equation}Here, \begin{equation}\widetilde{\theta}(\hat{y}_i, y^{*}_{-i})=\theta +\frac{(1-\theta)}{l_1+\frac{l_2}{N}e^{-\beta\hat{\tau}_i}x_i(\hat{\tau}_i)+\frac{l_2}{N}\sum\limits_{j=1,j\neq i}^Ne^{-\beta\tau_j^{*}}x_j(\tau_j^{*})}.\end{equation}On the other hand, \begin{equation}J_{i}(\hat{\tau}_i)=\mathbb{E}\{\bar{\theta}_1\hat{y}_i-e^{-\beta \hat{\tau}_i}K\}.\end{equation}Applying the Cauchy-Schwarz inequality, we have \begin{equation}\nonumber\begin{aligned}
&|\mathcal{J}_i(\hat{\tau}_i, \tau_{-i}^{*})-J_{i}(\hat{\tau}_i)|=|\mathbb{E}\left[\hat{y}_i\left(\widetilde{\theta}(\hat{y}_i, y_{-i}^{*})-\bar{\theta}_1\right)\right]|\\ =&(1-\theta)\mathbb{E}\left[\hat{y}_i\left(\frac{1}{l_1+ l_2\frac{\hat{y}_i}{N}+l_2y_{-i}^{*}}-\frac{1}{l_1+l_2\mathbb{E}\Big(e^{-\beta\tau}x(\tau)\Big)}\right)\right]\\ \leq &(1-\theta)\left(\mathbb{E}(\hat{y}_i)^{2}\right)^{\frac{1}{2}}\cdot \left(\mathbb{E}\frac{\left(l_2\frac{\hat{y}_i}{N}+l_2\left(y_{-i}^{*}-\mathbb{E}(e^{-\beta\tau}x(\tau))\right)\right)^{2}}{(l_1+ l_2\frac{\hat{y}_i}{N}+l_2y_{-i}^{*})^{2}\big(l_1+l_2\mathbb{E}(e^{-\beta\tau}x(\tau))\big)^{2}}\right)^{\frac{1}{2}}\\\leq & \frac{2(1-\theta)}{l_1^{2}}\left(\mathbb{E}(\hat{y}_i)^{2}\right)^{\frac{1}{2}}\cdot \left(\frac{l_2^{2}}{N^{2}}\mathbb{E}(\hat{y}_i)^{2}+l_2^{2}\mathbb{E}(y_{-i}^{*}-\mathbb{E}(e^{-\beta\tau}x(\tau)))^{2}\right)^{\frac{1}{2}} \\=&O\left(\frac{1}{\sqrt{N}}\right) \end{aligned} \end{equation}Note that by Lemma 4.1, we have $\mathbb{E}(\hat{y}_i)^{2}<+\infty.$ Hence the result. \end{proof}
\section{Inverse Mean Field Optimal Stopping Problem} In this section, we turn to study the \emph{inverse} mean-field optimal stopping problem for the large population system. Note that the inverse stopping problem in the non-large-population setup is already well addressed (\cite{Kruse}), but to the best of our knowledge, no similar work has been carried out in the large population setup. Its main idea can be sketched as follows. We first introduce a market manager, denoted by $\mathcal{A}_0$, which can be interpreted as the supervisory authority, market regulator, or local government (say, the tax or revenue bureau). Unlike the individually negligible agents $\{\mathcal{A}_i\}_{i=1}^{N}$, the manager $\mathcal{A}_0$ has the right to construct or design the gain functional in our optimal stopping problem. One real example is the transaction fee $K$ introduced in \eqref{1e5}, which should be charged by the market organizer or local government (that is, $\mathcal{A}_0$). Also, to a great extent, the level of the transaction fee should be designed or settled at the discretion of $\mathcal{A}_0$. Given the transaction fee $K$ assigned by $\mathcal{A}_0,$ the small agents $\{\mathcal{A}_i\}_{i=1}^{N}$ in the large population system naturally give their best stopping responses $\{\tau_i^{*}\}_{i=1}^{N}$ to their own individual optimal stopping problems, as we discussed in Sections II and III. As a consequence, this generates the empirical distribution function
${F}_{N}(t):=\frac{1}{N}\sum_{i=1}^{N}1_{[0, t]}(\tau_i^{*})$ of all the stopping times adopted by $\{\mathcal{A}_i\}_{i=1}^{N}$. By the Glivenko-Cantelli theorem, we have (see \cite{Vaart})$$||{F}_{N}-F||_{\infty}=\sup_{0 \leq t \leq +\infty}|{F}_{N}(t)-F(t)|\longrightarrow 0 \quad a.s. \quad \text{as } N \longrightarrow +\infty$$where $F$ is the distribution function of $\tau_i^{*}, 1 \leq i \leq N.$ Recall that the cutoff level of the individual optimal stopping time $\tau_i^{*}$ depends on $K$; thus $F_{N}$ and $F$ are actually functions of $K$.
As the market organizer or regulator, $\mathcal{A}_0$ is not concerned with the individual stopping time applied by a particular agent. Instead, $\mathcal{A}_0$ is more interested in carefully designing the transaction level $K$ such that the resulting empirical distribution $F_{N}$ or $F$ of all stopping times $\{\tau_i^{*}\}_{i=1}^{N}$ possesses some preferred statistical properties. In other words, the market manager is more concerned with the group stopping behavior of the large population system rather than with that of a given individual agent. For example, in some cases, the market organizer prefers that the agents $\{\mathcal{A}_i\}_{i=1}^{N}$ do not close their businesses (or sell their assets) all together at the same time, and thus it hopes to maximize the variance of the empirical process of all stopping times; in other words, to maximize the empirical variance related to the distribution function $F_{N}$, or the variance related to $F$ when considering the asymptotic regime $N\longrightarrow +\infty.$ To gain more insight, let us focus on the framework of \emph{best time to sell}, as addressed in Section (II). It can also be understood as the \emph{best time to close business}. The values of individual assets or business firms are given by\begin{equation}\nonumber \left\{ \begin{aligned} &dx_i(t)=\alpha x_i(t)dt+\sigma x_i(t)dW_i(t),\\ &x_i(0)=x. \end{aligned}\right.\end{equation}The individual gain or payoff functionals are:\begin{equation}\label{5e1} \mathcal{J}_i(\tau_i,\tau_{-i})=\mathbb{E}\left\{\frac{ e^{-\beta\tau_i}x_i(\tau_i)}{\frac{1}{N}\sum\limits_{j=1, j \neq i}^Ne^{-\beta\tau_j}x_j(\tau_j)}- e^{-\beta\tau_i}K\right\}. \end{equation}\begin{remark}The functional \eqref{5e1} can be viewed as a special case of the payoff functional \eqref{1e5} given in Problem (I) by setting $\theta=0, l_1=0, l_2=1.$ Moreover, in \eqref{5e1}, the state-average excludes the individual state of $\mathcal{A}_i.$ A similar functional, arising in a linear-quadratic mean-field game, can be found in \cite{hcm07}. We focus on the functional \eqref{5e1} here mainly because we aim to get more explicit results. It is remarkable that our analysis below can also be extended to more general functionals, but without explicit solutions.\end{remark}
Based on \eqref{nce1}, the consistent condition in present case \eqref{5e1} takes more explicit representation:$$\bar{\theta}_1=x^{-1}\Big(\frac{Kk_2}{k_2-1}\Big)^{1-\frac{\sigma}{\lambda_+}}=x^{-1}\Big(\frac{Kk_2}{k_2-1}\Big)^{1-\frac{1}{k_2}}.$$ For sake of presentation, we repeat the individual optimal stopping rules as follows: $\tau_i^{*} \sim \tau$ where\begin{equation}\label{e13} \left\{ \begin{aligned} &\tau=\inf\{t:\ W(t)\geq a't+b'\},\\ &a'=\frac{\sigma}{2}-\frac{\alpha}{\sigma}, \quad b'=\frac{1}{\sigma}\ln\left(\frac{K}{\bar{\theta}_1x}\cdot\frac{k_2}{k_2-1}\right)=\frac{1}{\sigma}\ln\left(\frac{Kk_2}{k_2-1}\right)^{\frac{1}{k_2}}>0. \end{aligned}\right.\end{equation}Note that $k_2>1$ thus $b'>0$ implies $\frac{Kk_2}{k_2-1}>1.$ That is, $K>\frac{k_2-1}{k_2}.$ Moreover, we have$$\mathbb{E}\Big(e^{(\lambda a'-\frac{1}{2}\lambda^2)\tau}\Big)=e^{-\lambda b'}.$$ Set $\lambda a'-\frac{1}{2}\lambda^2=t$, and we have\begin{equation}\label{e15} M_{\tau}(t):=\mathbb{E}\Big(e^{t\tau}\Big)=e^{-\lambda b'}=e^{-\frac{\lambda}{\sigma}\ln\left(\frac{K k_2}{k_2-1}\right)^{\frac{1}{k_2}}} =\left(\frac{Kk_2}{k_2-1}\right)^{-\frac{\lambda_+(t)}{k_2\sigma}} \end{equation}where\begin{equation}\nonumber\left\{ \begin{aligned} & \lambda_{+}(t)=\frac{\sigma}{2}-\frac{\alpha}{\sigma}+\sqrt{\Big(\frac{\sigma}{2}-\frac{\alpha}{\sigma}\Big)^2-2t},\\ &\lambda_{-}(t)=\frac{\sigma}{2}-\frac{\alpha}{\sigma}-\sqrt{\Big(\frac{\sigma}{2}-\frac{\alpha}{\sigma}\Big)^2-2t}. \end{aligned}\right.\end{equation}Further, we have the following basic computations: $$M_{\tau}'(t)=M_{\tau}(t)\cdot \ln \left(\frac{Kk_2}{k_2-1}\right)\cdot \frac{1}{k_2\sigma\sqrt{\Big(\frac{\sigma}{2}-\frac{\alpha}{\sigma}\Big)^2-2t}}.$$ It follows that\begin{equation}\nonumber\mathbb{E}(\tau)=M_{\tau}'(0)=\left(-\frac{1}{a'k_2\sigma}\right)\ln \left(\frac{Kk_2}{k_2-1}\right)\end{equation}which is an increasing function for $K > \frac{k_2-1}{k_2}.$ In fact, if $K=\frac{k_2-1}{k_2},$ then $x=x^{*},$ and the agents should sell their assets or close their business immediately at $t=0.$ Further, we have the following computations.\begin{equation}\nonumber\begin{aligned} M_{\tau}''(t)&=M_{\tau}(t)\left[\ln \left(\frac{Kk_2}{k_2-1}\right)\cdot\frac{1}{k_2\sigma}\cdot \frac{1}{\sqrt{(a')^{2}-2t}}\right]^{2}\\ &+M_{\tau}(t)\left[\ln \left(\frac{Kk_2}{k_2-1}\right)\cdot\frac{1}{k_2\sigma}\cdot \frac{1}{(\sqrt{(a')^{2}-2t})^{3}}\right]. \end{aligned} \end{equation}Therefore,\begin{equation}\nonumber\begin{aligned} &\mathbb{E}\tau^{2}=M_{\tau}''(0)=\left[\ln \left(\frac{Kk_2}{k_2-1}\right)\cdot \left(\frac{1}{k_2\sigma}\right)\cdot \frac{1}{-a'}\right]^{2}-\ln \left(\frac{Kk_2}{k_2-1}\right)\cdot \left(\frac{1}{k_2\sigma}\right)\cdot \frac{1}{(a')^3}\\ \Longrightarrow &\text{Var}(\tau)=\mathbb{E}\tau^{2}-(\mathbb{E}\tau)^{2}=-\ln \left(\frac{Kk_2}{k_2-1}\right)\cdot \left(\frac{1}{k_2\sigma(a')^3}\right)\end{aligned}\end{equation}Actually, $\tau$ should follow the inverse Gaussian distribution $IG(\mu, \varrho)$ where\begin{equation}\nonumber\left\{ \begin{aligned} & \mu=\left(-\frac{1}{a'k_2\sigma}\right)\ln \left(\frac{Kk_2}{k_2-1}\right)>0,\\ &\varrho=\left[\frac{1}{k_2\sigma}\ln \frac{Kk_2}{k_2-1}\right]^{2}>0. \end{aligned}\right.\end{equation}It follows that the expectation and variance are both increasing functions of $K.$
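The closed-form moments above, together with the identification of the law of $\tau$ as $IG(\mu, \varrho)$, can be cross-checked numerically. A minimal sketch, assuming illustrative parameter values and taking $k_2=\lambda_+/\sigma$ with $\lambda_+=a'+\sqrt{(a')^2+2\beta}$ as in the consistency condition:
\begin{verbatim}
# Cross-check of E[tau], Var(tau) and of tau ~ IG(mu, rho), using scipy.
# All parameter values are illustrative; they are chosen so that a' < 0.
import numpy as np
from scipy import stats

alpha, sigma, beta, K = 0.05, 0.2, 0.08, 1.5
a_p = sigma / 2 - alpha / sigma                   # a' (negative here)
k2  = (a_p + np.sqrt(a_p**2 + 2 * beta)) / sigma  # assumed k2 = lambda_+/sigma > 1
u   = np.log(K * k2 / (k2 - 1))                   # shorthand for ln(K k2/(k2-1))

mean_cf = -u / (a_p * k2 * sigma)                 # E[tau]   from M'(0)
var_cf  = -u / (k2 * sigma * a_p**3)              # Var(tau) from M''(0)
mu, rho = mean_cf, (u / (k2 * sigma))**2          # IG parameters above

# scipy's invgauss(m, scale=s) has mean m*s and variance m^3*s^2,
# so IG(mu, rho) corresponds to m = mu/rho and s = rho.
ig = stats.invgauss(mu / rho, scale=rho)
print(mean_cf, ig.mean())                         # identical
print(var_cf, ig.var())                           # identical (= mu^3/rho)

samples = ig.rvs(size=200_000, random_state=0)
print(samples.mean(), samples.var())              # Monte Carlo agreement
\end{verbatim}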
\subsection{Target expectation or variance}
We first present a rather simple case which we call ``\emph{target expectation or variance}''. In this case, the market manager $\mathcal{A}_0$ aims to push or steer the empirical distribution $F_{N}$, or its limit $F$, to reach some given level in expectation or variance. Its economic interpretation is as follows: in some cases, the market manager hopes that the given small agents or firms will close their businesses at some prescribed average time, or with some given variance. The target variance implies that these small firms or agents in a given industry will not close their businesses or sell their assets in too concentrated a way (i.e., small variance) or too dispersive a way (i.e., large variance). For this problem, we have the following results.
\begin{proposition}To have $\mathbb{E}\tau=\int_{t=0}^{+\infty} t dF(t)=\mu_0>0,$ the transaction fee should be set by $K_{\mu_{0}}=\left(\frac{k_2-1}{k_2}\right)e^{-a'\mu_0k_2\sigma}$.\end{proposition}
\begin{proposition}To have $ \text{Var}(\tau)=\kappa_0>0,$ the transaction fee should be set by $K_{\kappa_{0}}=\left(\frac{k_2-1}{k_2}\right)e^{-(a')^{3}\kappa_0k_2\sigma}$.\end{proposition}
\begin{remark}(i) Note that the higher the targeted expectation level $\mu_0$ or variance level $\kappa_0,$ the higher the transaction level $K$ should be set. (ii) Recall that $a'<0, k_2>1$; thus $K_{\mu_{0}}, K_{\kappa_{0}}>\frac{k_2-1}{k_2},$ which is consistent with our previous result. \end{remark}
\subsection{Minimization of $L^{2}-$deviation of target time location}
Now we consider the case where the market manager $\mathcal{A}_0$ encourages the small agents to close their businesses (or sell their assets) at some pre-specified timing point $t_0.$ This corresponds to the situation where some local government plans to upgrade some declining industry at a given future time in a very quick way. The empirical $L^{2}-$deviation with large population size $N$ is given by $\frac{1}{N}\sum_{i=1}^{N}(\tau_i^{*}-t_0)^{2}.$ Motivated by this situation, we consider the following $L^{2}-$deviation minimization problem:$$\min_{K}\mathbb{E}(\tau-t_0)^{2}.$$The following result is very straightforward.\begin{proposition} The optimal transaction fee of the above minimization problem is given by$$\arg\min_{K}\mathbb{E}(\tau-t_0)^{2}=\begin{cases} \frac{k_2-1}{k_2}\exp\left(\frac{k_2\sigma}{2}\left(\frac{1}{a'}-2t_0a'\right)\right), \quad \quad t_0>\frac{1}{2 (a')^{2}};\\ \frac{k_2-1}{k_2}, \quad \quad t_0\leq \frac{1}{2 (a')^{2}}.\end{cases}$$ \end{proposition}
\begin{proof}A simple calculation gives\begin{equation}\nonumber\begin{aligned} &\mathbb{E}(\tau-t_0)^{2}=\mathbb{E}\tau^{2}-2t_0\mathbb{E}(\tau)+t_0^{2}\\ &=\frac{1}{(k_2\sigma a')^{2}}\left(\ln \frac{Kk_2}{k_2-1}\right)^{2}+\left(\frac{2t_0}{k_2\sigma a'}-\frac{1}{k_2\sigma (a')^{3}}\right)\left(\ln \frac{Kk_2}{k_2-1}\right)+t_0^{2}. \end{aligned} \end{equation}Hence, if $\left(\frac{2t_0}{k_2\sigma a'}-\frac{1}{k_2\sigma (a')^{3}}\right) \geq 0 \Longleftrightarrow t_0 \leq \frac{1}{2 (a')^{2}},$ then $\mathbb{E}(\tau-t_0)^{2}$ becomes an increasing function of $K$ and the problem becomes trivial ($K=\frac{k_2-1}{k_2}$). If $\left(\frac{2t_0}{k_2\sigma a'}-\frac{1}{k_2\sigma (a')^{3}}\right)<0 \Longleftrightarrow t_0>\frac{1}{2 (a')^{2}},$ then $\mathbb{E}(\tau-t_0)^{2}$ attains its minimum at$$K=\frac{k_2-1}{k_2}e^{\frac{k_2\sigma}{2}\left(\frac{1}{a'}-2t_0a'\right)}.$$A direct check is as follows: note that $t_0>\frac{1}{2 (a')^{2}}$ implies $\left(\frac{1}{a'}-2t_0 a'\right)>0$, thus the above optimal level satisfies $K>\frac{k_2-1}{k_2},$ which is consistent with our previous calculation.\end{proof}Now, we consider one related but more general case. Recall that the transaction fee $K$ is charged by the market organizer; thus it can be viewed as revenue or income for $\mathcal{A}_0.$ Consequently, the large population optimal stopping problem discussed in Section II will naturally generate some future cash flow $(\tau_i^{*}, K)_{i=1}^{N}$ for $\mathcal{A}_0.$ Therefore, besides the above concern of $L^{2}-$deviation minimization, the market manager can also consider how to maximize its utility from such cash flows:$$\sum_{i=1}^{N} \mathbb{E}\left(e^{-\beta \tau_i^{*}}U(K)\right)$$where $U$ is the given utility function. Recall that $\beta>0$ is the discount factor introduced before. Considering $N\longrightarrow +\infty$, we study the cash flow utility in per capita terms. Combined with the minimization of the $L^{2}-$deviation, we reach the following more general optimization problem:\begin{equation}\label{mixed}\min_{K}\gamma_1\mathbb{E}(\tau-t_0)^{2}-\gamma_2\mathbb{E}e^{-\beta \tau}U(K)\end{equation}for $\gamma_1, \gamma_2>0.$ Here, the first term $\gamma_1\mathbb{E}(\tau-t_0)^{2}$ represents the concern about the future distribution of stopping times, while the second term $\gamma_2\mathbb{E}e^{-\beta \tau}U(K)$ concerns the future utility of the cash flows. Note that the higher the transaction fee $K$, the less incentive the individual agent has to stop its own business early, and hence the less present value is received by $\mathcal{A}_0.$ Therefore, the gain functional in \eqref{mixed} represents the trade-off between keeping the future stopping times as close to the given timing point as possible, and the utility from the future cash flow generated. A further calculation gives$$\mathbb{E}\left(e^{-\beta \tau}U(K)\right)=U(K)\mathbb{E}e^{-\beta \tau}=U(K)\left(\frac{x^{*}}{x}\right)^{-k_2}=\frac{U(K)}{K}\frac{k_2-1}{k_2}.$$The payoff functional becomes \begin{equation}\nonumber\begin{aligned} &\Gamma(K):=\gamma_1\mathbb{E}(\tau-t_0)^{2}-\gamma_2\mathbb{E}e^{-\beta \tau}U(K)\\ &=\frac{\gamma_1}{(k_2\sigma a')^{2}}\left(\ln \frac{Kk_2}{k_2-1}\right)^{2}+\gamma_1\left(\frac{2t_0}{k_2\sigma a'} -\frac{1}{k_2\sigma (a')^{3}}\right)\left(\ln \frac{Kk_2}{k_2-1}\right)-\frac{U(K)}{K}\frac{\gamma_2 (k_2-1)}{k_2}+\gamma_1t_0^{2}.
\end{aligned} \end{equation}The first order necessary condition becomes:$$\Gamma'(K)=-\frac{\gamma_2(k_2-1)}{k_2}\left(U'(K)-\frac{U(K)}{K}\right)+\gamma_1\left(\frac{2t_0}{k_2\sigma a'}-\frac{1}{k_2\sigma (a')^{3}}\right)+\frac{2\gamma_1}{(k_2\sigma a')^{2}}\ln \frac{Kk_2}{k_2-1}=0.$$
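The piecewise closed form obtained above for $\arg\min_{K}\mathbb{E}(\tau-t_0)^{2}$ can also be checked numerically; a minimal sketch with illustrative parameter values:
\begin{verbatim}
# Numerical minimisation of E[(tau - t0)^2] over K, compared with the
# piecewise closed form of the proposition. Parameter values are illustrative.
import numpy as np
from scipy.optimize import minimize_scalar

alpha, sigma, beta = 0.05, 0.2, 0.08
a_p = sigma / 2 - alpha / sigma
k2  = (a_p + np.sqrt(a_p**2 + 2 * beta)) / sigma
K_min = (k2 - 1) / k2                             # lower admissible fee

def l2_dev(K, t0):
    u = np.log(K * k2 / (k2 - 1))
    return ((u / (k2 * sigma * a_p))**2
            + (2*t0/(k2*sigma*a_p) - 1/(k2*sigma*a_p**3)) * u + t0**2)

for t0 in (0.25 / a_p**2, 1.0 / a_p**2):          # below / above 1/(2 a'^2)
    res = minimize_scalar(lambda K: l2_dev(K, t0),
                          bounds=(K_min * (1 + 1e-9), 50 * K_min),
                          method='bounded')
    if t0 > 1 / (2 * a_p**2):
        K_cf = K_min * np.exp(k2 * sigma / 2 * (1/a_p - 2*t0*a_p))
    else:
        K_cf = K_min
    print(t0, res.x, K_cf)                        # numerical vs closed form
\end{verbatim}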
\subsubsection{Linear function} One special case is $U(K)=K,$ and we have the same result in Proposition 5.3. \subsubsection{Power utility function} Another special case is $U(K)=\frac{K^{\rho}}{\rho}$ for $\rho<1,$ and we have$$\Gamma'(K)=\Theta(K)+\gamma_1\left(\frac{2t_0}{k_2\sigma a'}-\frac{1}{k_2\sigma (a')^{3}}\right)=0$$where\begin{equation}\nonumber \Theta(K)=-\gamma_2\left(1-\frac{1}{\rho}\right)\frac{k_2-1}{k_2}K^{\rho-1}+\frac{2\gamma_1}{(k_2\sigma a')^{2}}\ln \frac{Kk_2}{k_2-1} \end{equation}Note that\begin{equation}\nonumber\left\{ \begin{aligned} &\Gamma''(K)=\Theta'(K)=\frac{2\gamma_1}{(k_2\sigma a')^{2}}\frac{1}{K}-\gamma_2\left(\frac{k_2-1}{k_2}\right)\frac{(1-\rho)^{2}}{\rho}K^{\rho-2}\\ & \lim_{K\longrightarrow\frac{k_2-1}{k_2}}\Theta(K)=\gamma_2\left(\frac{1}{\rho}-1\right)\left(\frac{k_2-1}{k_2}\right)^{\rho}<+\infty,\\ &\lim_{K\longrightarrow +\infty}\Theta(K)=+\infty. \end{aligned}\right.\end{equation}Therefore, we have the following two cases. For simplification, we introduce the following notations:
\begin{equation}\nonumber\left\{ \begin{aligned} &\Delta_1=\left(\frac{2\gamma_1}{\gamma_2} \frac{1}{(k_2\sigma a')^{2}}\frac{\rho}{(1-\rho)^{2}}\right)^{\frac{1}{2-\rho}}\\ &\Delta_2=\gamma_1\left(\frac{2t_0}{k_2\sigma a'}-\frac{1}{k_2\sigma (a')^{3}}\right)+\gamma_2\left(\frac{1}{\rho}-1\right)\left(\frac{k_2-1}{k_2}\right)^{\rho}. \end{aligned}\right.\end{equation}
\textbf{Case 1}: if $\left(\frac{2\gamma_1}{\gamma_2} \frac{k_2}{k_2-1}\frac{1}{(k_2\sigma a')^{2}}\frac{\rho}{(1-\rho)^{2}}\right)^{\frac{1}{1-\rho}} < \frac{k_2-1}{k_2} \Longleftrightarrow \Delta_1<\left(\frac{k_2-1}{k_2}\right),$ then we have $\Gamma''(K)>0$ for $K \in (\frac{k_2-1}{k_2}, +\infty),$ thus it follows$$\arg\min_{K}\Gamma(K)=\begin{cases} \frac{k_2-1}{k_2}, \quad \text{if} \quad \Delta_2>0; \\ \text{unique solution of}\quad \Gamma'(K)=0, \quad \text{if} \quad \Delta_2<0.\end{cases}$$
\textbf{Case 2}: if $\left(\frac{2\gamma_1}{\gamma_2} \frac{k_2}{k_2-1}\frac{1}{(k_2\sigma a')^{2}}\frac{\rho}{(1-\rho)^{2}}\right)^{\frac{1}{1-\rho}} \geq \frac{k_2-1}{k_2} \Longleftrightarrow \Delta_1 \geq \left(\frac{k_2-1}{k_2}\right),$ then we have $\Gamma''(K)<0$ for $K \in \left(\frac{k_2-1}{k_2}, \Delta_1\right),$ and $\Gamma''(K)>0$ for $K \in \left(\Delta_1, +\infty\right)$, thus it follows$$\arg\min_{K}\Gamma(K)=\begin{cases} \frac{k_2-1}{k_2}, \quad \text{if} \quad \Delta_2 >0, \Theta(\Delta_1)>0; \\ \text{the larger root of}\quad \Gamma'(K)=0, \quad \text{if} \quad \Delta_2 >0, \Theta(\Delta_1)<0;\\ \text{the unique root of}\quad \Gamma'(K)=0, \quad \text{if} \quad \Delta_2 <0.\end{cases}$$
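For the power utility case, the minimizer of $\Gamma(K)$ can also be located numerically rather than through the above case analysis; a minimal sketch with hypothetical weights $\gamma_1, \gamma_2$, exponent $\rho$, and model parameters:
\begin{verbatim}
# Direct numerical minimisation of Gamma(K) for U(K) = K^rho / rho.
# All parameter values are hypothetical.
import numpy as np
from scipy.optimize import minimize_scalar

alpha, sigma, beta = 0.05, 0.2, 0.08
a_p = sigma / 2 - alpha / sigma
k2  = (a_p + np.sqrt(a_p**2 + 2 * beta)) / sigma
K_min = (k2 - 1) / k2

gamma1, gamma2, rho, t0 = 1.0, 5.0, 0.5, 40.0

def Gamma(K):
    u = np.log(K * k2 / (k2 - 1))
    l2dev = ((u / (k2 * sigma * a_p))**2
             + (2*t0/(k2*sigma*a_p) - 1/(k2*sigma*a_p**3)) * u + t0**2)
    cash = (K**rho / rho) / K * (k2 - 1) / k2     # E[e^{-beta tau}] U(K)
    return gamma1 * l2dev - gamma2 * cash

res = minimize_scalar(Gamma, bounds=(K_min * (1 + 1e-9), 100 * K_min),
                      method='bounded')
print(res.x, Gamma(res.x))                        # optimal fee and value
\end{verbatim}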
\subsection{Minimization of Kullback-Leibler (KL) divergence}
Now, we turn to the case where the market manager aims to have the empirical distribution $F_{N}$ of all stopping times, or its limit $F$, best fit some given benchmark probability distribution $\pi(t)$ with support on $(0, +\infty)$. Moreover, assume $\pi$ is absolutely continuous with respect to Lebesgue measure with density function $p(t)$. The economic meaning of such distribution tracking is as follows. Unlike \emph{Case A, B} (namely, \emph{target expectation/variance}, or $L^{2}-$\emph{minimization of timing point}), where the tracking is mainly of a static nature, the market manager sometimes hopes to perform the tracking in a more dynamic way. Indeed, it is often the case that $\mathcal{A}_0$ hopes the stopping times will best fit some distribution in a given industry to match some long-term planning. For example, for some sunset or declining industries, the local government hopes to upgrade them in a progressive and orderly manner. Therefore, it is hoped that the individual small firms in such an industry will close their businesses according to some prescribed distribution function $p(t)$, in order to coordinate the corresponding fiscal or employment policies. Therefore, the manager aims to $\min_{K \in (\frac{k_2-1}{k_2}, +\infty)} ||\frac{1}{N}\sum_{i=1}^{N}\delta_{\tau_i}(0, t)-\pi(t) ||$ where $\delta$ is the Dirac measure. Alternatively, $\min_{K \in (\frac{k_2-1}{k_2}, +\infty)} ||F-\pi||$ in the limiting case when $N\longrightarrow +\infty$. Note that we require $K \in (\frac{k_2-1}{k_2}, +\infty)$ to make the optimal stopping problem in Section III nontrivial.
For the sake of simplicity, we focus on the limiting case in terms of the density function. Given the target density function $p(t),$ we can measure the performance of the best fitting by the Kullback-Leibler (KL) divergence.\begin{definition}The Kullback-Leibler (KL) divergence of $q(t)$ from the targeted density function $p(t)$ is given by$$\mathcal{D}(p||q)=\int \ln \frac{p(t)}{q(t)}\cdot p(t)dt.$$\end{definition}It is remarkable that $\mathcal{D}(p||q) \geq 0$ due to Jensen's inequality and $\mathcal{D}(p||q)=0 \Longleftrightarrow p=q$ due to Gibbs' inequality. Moreover, it follows that in general $\mathcal{D}(p||q) \neq \mathcal{D}(q||p)$; thus the KL divergence is not a distance. We have the following dynamic optimal tracking problem for the large-population system.
\textbf{Minimization of KL-divergence.} To find $K \in (\frac{k_2-1}{k_2}, +\infty)$ satisfying \begin{equation}\nonumber\begin{aligned}
K \in \arg\min \mathcal{D}_{K}(p||q) \end{aligned} \end{equation}where $q(t)$ is the density function of Inverse Gaussian distribution $IG(\mu, \varrho)$ for\begin{equation} \begin{aligned} \mu=\left(-\frac{1}{a'k_2\sigma}\right)\ln \left(\frac{Kk_2}{k_2-1}\right), \quad \varrho=\left[\frac{1}{k_2\sigma}\ln \frac{Kk_2}{k_2-1}\right]^{2}. \end{aligned}\end{equation}Explicitly, the density function of inverse Gaussian distribution is given by\begin{equation}\nonumber\begin{aligned}q(t)&=\left[\frac{\varrho}{2\pi t^{3}}\right]^{\frac{1}{2}}\exp \left(\frac{-\varrho(t-\mu)^{2}}{2\mu^{2}t}\right)\\&=\frac{\ln\left(\frac{Kk_2}{k_2-1}\right)}{k_2\sigma(2\pi t^{3})^{\frac{1}{2}}}\cdot \exp\left[-\frac{(a')^{2}}{2}\frac{(t-\mu)^{2}}{t}\right]\\&=\frac{\ln\left(\frac{Kk_2}{k_2-1}\right)}{k_2\sigma(2\pi t^{3})^{\frac{1}{2}}}\cdot \exp\left[-\frac{(a')^{2}}{2}\frac{\left(t+\frac{1}{a'k_2\sigma}\ln (\frac{Kk_2}{k_2-1})\right)^{2}}{t}\right].\end{aligned}\end{equation} One algorithm based on the empirical risk minimization (ERM) (see \cite{Vapnik}) is as follows:\begin{equation}\nonumber\begin{aligned}
& \widehat{K}=\arg \min_{K \in (\frac{k_2-1}{k_2}, +\infty)}\mathcal{D}_{K}(p||q)=\arg \min \int \ln \frac{p(t)}{q(t)}\cdot p(t)dt\\ &=\arg \min \left(-\int \ln q(t)\cdot p(t)dt\right)\\ &\approx \arg \max \int \ln q(t)\left[\frac{1}{n}\sum_{i=1}^{n}\delta(t-t_i) \right]dt\\&=\arg \max \frac{1}{n}\sum_{i=1}^{n}\ln q(t_i)=\arg \max \frac{1}{n}\prod_{i=1}^{n}q(t_i) \end{aligned}\end{equation}which leads to the maximum likelihood estimator based on the sample points $\{t_i\}_{i=1}^{n}$ from distribution $p(t).$ Note that the definition of risk function (see \cite{Vapnik}): $$R(K)=-\int\ln q(t)p(t)dt$$ and we have the following identity\begin{equation}\nonumber\begin{aligned}
\mathcal{D}_{K}(p||q)&=\int \ln \frac{p(t)}{q(t)}\cdot p(t)dt\\ &=R(K)+\int \ln p(t)\, p(t)dt=R(K)-\text{Entropy of $p$}
\end{aligned}\end{equation}Recall that the benchmark density function $p$ does not depend on the selection of $K$; hence the minimization of the KL-divergence is equivalent to the minimization of the risk function. Moreover, the justification for applying the KL-divergence as our performance measure can also be found in the Bretagnolle-Huber inequality (see \cite{Vaart}):$$||p-q||_{1}=\int |p(t)-q(t)|dt \leq 2\sqrt{1-\exp(-\mathcal{D}_{K}(p||q))},$$together with Pinsker's inequality $||p-q||_{1}\leq \sqrt{2\mathcal{D}_{K}(p||q)}$. That is, a smaller value of the KL-divergence implies a smaller $L_{1}-$norm.
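A minimal sketch of the resulting maximum likelihood step, assuming an illustrative benchmark density $p$ (here a lognormal) and hypothetical model parameters:
\begin{verbatim}
# Empirical risk minimisation / maximum likelihood over K: fit the IG(mu(K), rho(K))
# density q to sample stopping times drawn from an (assumed) benchmark density p.
import numpy as np
from scipy import stats
from scipy.optimize import minimize_scalar

alpha, sigma, beta = 0.05, 0.2, 0.08
a_p = sigma / 2 - alpha / sigma
k2  = (a_p + np.sqrt(a_p**2 + 2 * beta)) / sigma
K_min = (k2 - 1) / k2

# hypothetical benchmark samples t_1, ..., t_n from p (a lognormal here)
t_samples = stats.lognorm(s=0.6, scale=8.0).rvs(size=5_000, random_state=1)

def neg_loglik(K):
    u   = np.log(K * k2 / (k2 - 1))
    mu  = -u / (a_p * k2 * sigma)
    rho = (u / (k2 * sigma))**2
    q   = stats.invgauss(mu / rho, scale=rho)     # IG(mu, rho) in scipy's form
    return -np.mean(q.logpdf(t_samples))

res = minimize_scalar(neg_loglik, bounds=(K_min * (1 + 1e-6), 100 * K_min),
                      method='bounded')
print(res.x)                                      # ERM / MLE transaction fee
\end{verbatim}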
\section{Conclusion} In this paper, we introduce and analyze two classes of large population optimal stopping problems by considering the relative performance. The consistency condition and $\epsilon-$Nash equilibrium are established. We also discuss the inverse mean-field optimal stopping problem, where the transaction fee $K$ is designed to meet some preferred statistical property of all stopping times generated. Our present paper suggests various future research directions. For example, it is possible to introduce and study a more general inverse mean-field optimal stopping problem: that is, to find a time-dependent function $\pi: [0, \infty)\longrightarrow \mathbb{R}$ such that the decentralized optimal stopping times $\{\tau^{*}_i\}_{i=1}^{N}$ for $$ \mathbb{E}\left[g\left(\tau_i, x_{i}(\tau_i), x^{N}(\tau_i)\right)+\pi(\tau_i)\right]$$will satisfy some given statistical property. Here, $x^{N}(\cdot)$ denotes some term characterizing the state-average, and $\pi(\cdot)$ is called the transfer function (see e.g. \cite{Kruse}). Another example is to introduce some dynamics for the market manager $\mathcal{A}_0$ and thus consider the optimal stopping problem with major-minor agents.
\end{document} | arXiv |
Centre of Mass and its Motion
Worked out Problems
Problem: A composite body is formed by joining a solid cylinder and a solid cone of the same radius. The length of the cylinder is $b$ and height of the cone is $h$. If the centre of mass of the composite body is located in the plane between the solid cone and the solid cylinder then the ratio $b/h$ is
(A) $1/\sqrt{6}$ (B) $1/2$ (C) $1/\sqrt{5}$ (D) $1/3$
Solution: Let the origin be located at the plane between the solid cone and the solid cylinder. Let the $x$ axis lie along the symmetry axis, pointing towards the right. The centre of mass of the solid cylinder is at a distance $x_1=-b/2$. The centre of mass of the solid cone is at a distance $x_2=h/4$ (at a distance $h/4$ from its base).
Let $\rho$ be the mass density and $r$ be the common radius of the cylinder and the cone. The mass of the cylinder is $m_1=\rho (\pi r^2 b)$ and the mass of the cone is $m_2=\rho (\frac{1}{3} \pi r^2 h)$. For the centre of mass of the composite body to be at the origin, \begin{align} x_{cm}=\frac{m_1x_1+m_2 x_2}{m_1+m_2}=\frac{\rho (\pi r^2 b) (-b/2) + \rho (\frac{1}{3} \pi r^2 h) (h/4) }{\rho (\pi r^2 b)+ \rho (\frac{1}{3} \pi r^2 h)}=0. \nonumber \end{align} Simplify to get $b/h=1/\sqrt{6}$.
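A quick symbolic check of this result (a sketch using Python's sympy; variable names mirror the solution above):

# Symbolic check (sympy) that the centre of mass condition gives b/h = 1/sqrt(6).
import sympy as sp

b, h, r, rho = sp.symbols('b h r rho', positive=True)
m1, x1 = rho * sp.pi * r**2 * b, -b / 2           # cylinder: mass and c.m. position
m2, x2 = rho * sp.pi * r**2 * h / 3, h / 4        # cone: mass and c.m. position
sol = sp.solve(sp.Eq(m1 * x1 + m2 * x2, 0), b)    # centre of mass at the joint plane
print(sp.simplify(sol[0] / h))                    # -> sqrt(6)/6, i.e. b/h = 1/sqrt(6)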
Problem (IIT JEE 2009): Look at the drawing given in the figure which has been drawn with ink of uniform line-thickness. The mass of ink used to draw each of the two inner circles, and each of the two line segments is $m$. The mass of the ink used to draw the outer circle is $6m$. The coordinates of the centres of the different parts are: outer circle $(0,0)$, left inner circle $(-a,a)$, right inner circle $(a,a)$, vertical line $(0,0)$ and horizontal line $(0,-a)$. The $y$-coordinate of the centre of mass of the ink in this drawing is,
$a/10$
$a/8$
Solution: Due to the uniform line thickness, the centre of mass of a circle lies at its centre and that of a line segment lies at its midpoint. The $y$ coordinate of the centre of mass of the ink in the drawing is given by \begin{align} y_\text{cm}=\frac{\sum m_iy_i}{\sum m_i}=\frac{6m(0)+m(a)+m(a)+m(0)+m(-a)}{6m+m+m+m+m}=\frac{a}{10}.\nonumber \end{align}
Problem (IIT JEE 2003): Two point masses $m_1$ and $m_2$ are connected by a spring of spring constant $k$ and natural length $l_0$. The spring is compressed such that the two point masses touch each other and then are fastened by a string. Then the system is moved with a velocity $v_0$ along positive $x$-axis. When the system reaches the origin the string breaks $(t=0)$. The position of the point mass $m_1$ is given by $x_1=v_0 t-A(1-\cos\omega t)$ where $A$ and $\omega$ are constants. Find the position of the second block as a function of time. Also find the relation between $A$ and $l_0$.
Solution: Consider $m_1$, $m_2$, and the spring together as a system. At $t=0$, the position and velocity of the centre of mass of the system are, \begin{align} \label{pqb:eqn:1} &x_\text{cm}=\frac{m_1x_1+m_2x_2}{m_1+m_2}=0,\\ \label{pqb:eqn:2} &v_\text{cm}=\frac{\mathrm{d}x_\text{cm}}{\mathrm{d}t}=\frac{m_1v_1+m_2v_2}{m_1+m_2}=v_0. \end{align} There is no external force on the system. Hence, $v_\text{cm}$ remains constant at $v_0$. Integrate the above equation with the initial condition $x_\text{cm}=0$ at $t=0$ to get, \begin{align} \label{pqb:eqn:3} x_\text{cm}=\frac{m_1x_1+m_2x_2}{m_1+m_2}=v_0 t. \end{align} The displacement of $m_1$ at time $t$ is given as, \begin{align} \label{pqb:eqn:4} x_1=v_0t-A(1-\cos\omega t). \end{align} Substitute $x_1$ from the fourth equation into the third equation to get the displacement of $m_2$ as, \begin{align} \label{pqb:eqn:5} x_2=v_0t+\frac{m_1}{m_2}A(1-\cos\omega t). \end{align} The forces on $m_1$ and $m_2$ become zero whenever the distance between the two particles is equal to the natural length of the spring i.e., \begin{align} \label{pqb:eqn:6} x_2-x_1=A(m_1/m_2+1)(1-\cos\omega t)=l_0. \end{align} Newton's second law gives the force on $m_1$ as, \begin{align} m_1\frac{\mathrm{d}^2x_1}{\mathrm{d}t^2}=-m_1A\omega^2\cos\omega t, \end{align} which becomes zero whenever $\cos\omega t=0$. Substitute $\cos\omega t=0$ into the equation for $x_2-x_1$ to get $l_0=\left(m_1/m_2+1\right)A$.
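The key relations of this solution can also be verified symbolically (a sketch using Python's sympy):

# Symbolic check (sympy): the centre of mass moves uniformly at v0, and
# x2 - x1 reduces to (m1/m2 + 1) A whenever cos(omega t) = 0.
import sympy as sp

t, v0, A, w, m1, m2 = sp.symbols('t v0 A omega m1 m2', positive=True)
x1 = v0*t - A*(1 - sp.cos(w*t))
x2 = v0*t + (m1/m2)*A*(1 - sp.cos(w*t))

x_cm = (m1*x1 + m2*x2) / (m1 + m2)
print(sp.simplify(x_cm - v0*t))                     # -> 0, so x_cm = v0 t
print(sp.simplify((x2 - x1).subs(sp.cos(w*t), 0)))  # -> A*(m1 + m2)/m2 = l0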
\begin{definition}[Definition:O Notation/Big-O Notation/Complex/Point]
Let $z_0 \in \C$.
Let $f$ and $g$ be complex functions defined on a punctured neighborhood of $z_0$.
The statement:
:$\map f z = \map \OO {\map g z}$ as $z \to z_0$
is equivalent to:
:$\exists c \in \R_{\ge 0}: \exists \delta \in \R_{>0}: \forall z \in \C : \paren {0 < \cmod {z - z_0} < \delta \implies \cmod {\map f z} \le c \cdot \cmod {\map g z} }$
That is:
:$\cmod {\map f z} \le c \cdot \cmod {\map g z}$
for all $z$ in a punctured neighborhood of $z_0$.
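For example, with $z_0 = 0$ one has $z^2 = \map \OO z$ as $z \to 0$: taking $c = 1$ and $\delta = 1$, it holds that $\cmod {z^2} \le \cmod z$ whenever $0 < \cmod z < 1$.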
Category:Definitions/Asymptotic Notation
Category:Definitions/Order Notation
Category:Definitions/Complex Analysis
\end{definition} | ProofWiki |
Martin Saar Publications
Prof. Dr. Martin O. Saar
Chair for Geothermal Energy and Geofluids
Email saarm(at)ethz.ch
My research interests are in geophysical fluid dynamics of subsurface multiscale, multiphase, multicomponent, reactive fluid (groundwater, hydrocarbon, CO2) and energy (heat, pressure) transport, such as water- and CO2-based geothermal energy utilization, geologic CO2 storage, grid-scale energy storage, enhanced oil recovery, and groundwater flow. Methods include computer simulations, laboratory experiments, and field analyses.
This professorship and the associated Geothermal Energy and Geofluids (GEG) Group is generously endowed by the Werner Siemens Foundation, which is hereby gratefully acknowledged.
REFEREED PUBLICATIONS IN JOURNALS
Li, Z., X. Ma, X.-Z. Kong, M.O. Saar, and D. Vogler, Permeability evolution during pressure-controlled shear slip in saw-cut and natural granite fractures, Rock Mechanics Bulletin, 2023. [Download PDF] [View Abstract]Fluid injection into rock masses is involved during various subsurface engineering applications. However, elevated fluid pressure, induced by injection, can trigger shear slip(s) of pre-existing natural fractures, resulting in changes of the rock mass permeability and thus injectivity. However, the mechanism of slip-induced permeability variation, particularly when subjected to multiple slips, is still not fully understood. In this study, we performed laboratory experiments to investigate the fracture permeability evolution induced by shear slip in both saw-cut and natural fractures with rough surfaces. Our experiments show that compared to saw-cut fractures, natural fractures show much small effective stress when the slips induced by triggering fluid pressures, likely due to the much rougher surface of the natural fractures. For natural fractures, we observed that a critical shear displacement value in the relationship between permeability and accumulative shear displacement: the permeability of natural fractures initially increases, followed by a permeability decrease after the accumulative shear displacement reaches a critical shear displacement value. For the saw-cut fractures, there is no consistent change in the measured permeability versus the accumulative shear displacement, but the first slip event often induces the largest shear displacement and associated permeability changes. The produced gouge material suggests that rock surface damage occurs during multiple slips, although, unfortunately, our experiments did not allow quantitatively continuous monitoring of fracture surface property changes. Thus, we attribute the slip-induced permeability evolution to the interplay between permeability reductions, due to damages of fracture asperities, and permeability enhancements, caused by shear dilation, depending on the scale of the shear displacement.
Ma, X., et al., M.O. Saar, and et al., Multi-disciplinary characterizations of the BedrettoLab – a new underground geoscience research facility, Solid Earth, 13, pp. 301-322, 2022. [Download PDF] [View Abstract]The increased interest in subsurface development (e.g., unconventional hydrocarbon, engineered geothermal systems (EGSs), waste disposal) and the associated (trig- gered or induced) seismicity calls for a better understand- ing of the hydro-seismo-mechanical coupling in fractured rock masses. Being able to bridge the knowledge gap be- tween laboratory and reservoir scales, controllable meso- scale in situ experiments are deemed indispensable. In an effort to access and instrument rock masses of hectometer size, the Bedretto Underground Laboratory for Geosciences and Geoenergies ("BedrettoLab") was established in 2018 in the existing Bedretto Tunnel (Ticino, Switzerland), with an average overburden of 1000 m. In this paper, we introduce the BedrettoLab, its general setting and current status. Com- bined geological, geomechanical and geophysical methods were employed in a hectometer-scale rock mass explored by several boreholes to characterize the in situ conditions and internal structures of the rock volume. The rock volume fea- tures three distinct units, with the middle fault zone sand- wiched by two relatively intact units. The middle fault zone unit appears to be a representative feature of the site, as sim- ilar structures repeat every several hundreds of meters along the tunnel. The lithological variations across the character- ization boreholes manifest the complexity and heterogene- ity of the rock volume and are accompanied by compart- mentalized hydrostructures and significant stress rotations. With this complexity, the characterized rock volume is con- sidered characteristic of the heterogeneity that is typically encountered in subsurface exploration and development. The BedrettoLab can adequately serve as a test-bed that allows for in-depth study of the hydro-seismo-mechanical response of fractured crystalline rock masses.
Javanmard, H., M. O. Saar, and D. Vogler, On the applicability of connectivity metrics to rough fractures under normal stress, Advances in Water Resources, 161/104122, 2022. [Download PDF] [View Abstract]Rough rock fractures have complex geometries which result in highly heterogeneous aperture fields. To accurately estimate the permeability of such fractures, heterogeneity of the aperture fields must be quantified. In this study heterogeneity of single rough rock fractures is for the first time parametrized by connectivity metrics, which quantify how connected the bounds of a heterogeneous field are. We use 3000 individual realizations of synthetic aperture fields with different statistical parameters and compute three connectivity metrics based on percolation theory for each realization. The sensitivity of the connectivity metrics with respect to the determining parameter, i.e the cutoff threshold, is studied and the correlation between permeability of the fractures and the computed connectivity metrics is presented. The results show that the $Theta$ connectivity metric predicts the permeability with higher accuracy. All three studied connectivity metrics provide better permeability estimations when a larger aperture value is chosen as the cutoff threshold. Overall, this study elucidates that using connectivity metrics provides a less expensive alternative to fluid flow simulations when an estimation of fracture permeability is desired.
Kong, X.-Z., M. Ahkami, I. Naets, and M.O. Saar, The role of high-permeability inclusion on solute transport in a 3D-printed fractured porous medium: An LIF-PIV integrated study, Transport in Porous Media, 2022. [Download PDF] [View Abstract]It is well-known that the presence of geometry heterogeneity in porous media enhances solute mass mixing due to fluid velocity heterogeneity. However, laboratory measurements are still sparse on characterization of the role of high-permeability inclusions on solute transport, in particularly concerning fractured porous media. In this study, the transport of solutes is quantified after a pulse-like injection of soluble fluorescent dye into a 3D-printed fractured porous medium with distinct high-permeability (H-k) inclusions. The solute concentration and the pore-scale fluid velocity are determined using laser-induced fluorescence and particle image velocimetry techniques. The migration of solute is delineated with its breakthrough curve (BC), temporal and spatial moments, and mixing metrics (including the scalar dissipation rate, the volumetric dilution index, and the flux-related dilution index) in different regions of the medium. With the same H-k inclusions, compared to a H-k matrix, the low-permeability (L-k) matrix displays a higher peak in its BC, less solute mass retention, a higher peak solute velocity, a smaller peak dispersion coefficient, a lower mixing rate, and a smaller pore volume being occupied by the solute. The flux-related dilution index clearly captures the striated solute plume tails following the streamlines along dead-end fractures and along the interface between the H-k and L-k matrices. We propose a normalization of the scalar dissipation rate and the volumetric dilution index with respect to the maximum regional total solute mass, which offers a generalized examination of solute mixing for an open region with a varying total solute mass. Our study presents insights into the interplay between the geometric features of the fractured porous medium and the solute transport behaviors at the pore scale.
Huang, P.-W., B. Flemisch, C.-Z. Qin, M.O. Saar, and A. Ebigbo, Relating Darcy-scale chemical reaction order to pore-scale spatial heterogeneity, Transport in Porous Media, 2022. [Download PDF] [View Abstract]Due to spatial scaling effects, there is a discrepancy in mineral dissolution rates measured at different spatial scales. Many reasons for this spatial scaling effect can be given. We investigate one such reason, i.e., how pore-scale spatial heterogeneity in porous media affects overall mineral dissolution rates. Using the bundle-of-tubes model as an analogy for porous media, we show that the Darcy-scale reaction order increases as the statistical similarity between the pore sizes and the effective-surface-area ratio of the porous sample decreases. The analytical results quantify mineral spatial heterogeneity using the Darcy-scale reaction order and give a mechanistic explanation to the usage of reaction order in Darcy-scale modeling. The relation is used as a constitutive relation of reactive transport at the Darcy scale. We test the constitutive relation by simulating flow-through experiments. The proposed constitutive relation is able to model the solute breakthrough curve of the simulations. Our results imply that we can infer mineral spatial heterogeneity of a porous media using measured solute concentration over time in a flow-through dissolution experiment.
Kyas, S., D. Volpatto, M.O. Saar, and A.M.M. Leal, Accelerated reactive transport simulations in heterogeneous porous media using Reaktoro and Firedrake, Computational Geosciences, 26, pp. 295-327, 2022. [Download PDF] [View Abstract]This work investigates the performance of the on-demand machine learning (ODML) algorithm introduced in Leal et al. (2020) when applied to different reactive transport problems in heterogeneous porous media. This approach was devised to accelerate the computationally expensive geochemical reaction calculations in reactive transport simulations. We demonstrate that even with strong heterogeneity present, the ODML algorithm speeds up these calculations by one to three orders of magnitude. Such acceleration, in turn, significantly advances the entire reactive transport simulation. The performed numerical experiments are enabled by the novel coupling of two open-source software packages: Reaktoro (Leal, 2015) and Firedrake (Rathgeber et al., 2016). The first library provides the most recent version of the ODML approach for the chemical equilibrium calculations, whereas, the second framework includes the newly implemented conservative Discontinuous Galerkin finite element scheme for the Darcy problem, i.e., the Stabilized Dual Hybrid Mixed (SDHM) method (Núñez et al., 2012).
Naets, I., M. Ahkami, P.-W. Huang, M. O. Saar, and X.-Z. Kong, Shear induced fluid flow path evolution in rough-wall fractures: A particle image velocimetry examination, Journal of Hydrology, 610/127793, 2022. [Download PDF] [View Abstract]Rough-walled fractures in rock masses, as preferential pathways, largely influence fluid flow, solute and energy transport. Previous studies indicate that fracture aperture fields could be significantly modified due to shear displacement along fractures. We report experimental observations and quantitative analyses of flow path evolution within a single fracture, induced by shear displacement. Particle image velocimetry and refractive index matching techniques were utilized to determine fluid velocity fields inside a transparent 3D-printed shear-able rough fracture. Our analysis indicate that aperture variability and correlation length increase with the increasing shear displacement, and they are the two key parameters, which govern the increases in velocity variability, velocity longitudinal correlation length, streamline tortuosity, and variability of streamline spacing. The increase in aperture heterogeneity significantly impacts fluid flow behaviors, whilst changes in aperture correlation length further refine these impacts. To our best knowledge, our study is the first direct measurements of fluid velocity fields and provides insights into the impact of fracture shear on flow behavior.
van Brummen, A.C., B.M. Adams, R. Wu, J.D. Ogland-Hand, and M.O. Saar, Using CO2-Plume Geothermal (CPG) Energy Technologies to Support Wind and Solar Power in Renewable-Heavy Electricity Systems , Renewable and Sustainable Energy Transition, (in press). [Download PDF] [View Abstract]CO2-Plume Geothermal (CPG) technologies are geothermal power systems that use geologically stored CO2 as the subsurface heat extraction fluid to generate renewable energy. CPG technologies can support variable wind and solar energy technologies by providing dispatchable power, while Flexible CPG (CPG- F) facilities can provide dispatchable power, energy storage, or both simultaneously. We present the first study investigating how CPG power plants and CPG-F facilities may operate as part of a renewable- heavy electricity system by integrating plant-level power plant models with systems-level optimization models. We use North Dakota, USA as a case study to demonstrate the potential of CPG to expand the geothermal resource base to locations not typically considered for geothermal power. We find that optimal system capacity for a solar-wind-CPG model can be up to 20 times greater than peak- demand. CPG-F facilities can reduce this modeled system capacity to just over 2 times peak demand by providing energy storage over both seasonal and short-term timescales. The operational flexibility of CPG-F facilities is further leveraged to bypass the ambient air temperature constraint of CPG power plants by storing energy at critical temperatures. Across all scenarios, a tax on CO2 emissions, on the order of hundreds of dollars per tonne, is required to financially justify using renewable energy over natural-gas power plants. Our findings suggest that CPG and CPG-F technologies may play a valuable role in future renewable-heavy electricity systems, and we propose a few recommendations to further study its integration potential.
Sakha, M., M. Nejati, A. Aminzadeh, S. Ghouli, M.O. Saar, and T. Driesner, On the validation of mixed-mode I/II crack growth theories for anisotropic rocks, International Journal of Solids and Structures, 241/111484, 2022. [Download PDF] [View Abstract]We evaluate the accuracy of three well-known fracture growth theories to predict crack trajectories in anisotropic rocks through comparison with new experimental data. The results of 99 fracture toughness tests on the metamorphic Grimsel Granite under four different ratios of mixed-mode I/II loadings are reported. For each ratio, the influence of the anisotropy orientation on the direction of fracture growth is also analyzed. Our results show that for certain loading configurations, the anisotropy offsets the loading influence in determining the direction of crack growth, whereas in other configurations, these factors reinforce each other. To evaluate the accuracy of the fracture growth theories, we compare the experiment results for the kink angle and the effective fracture toughness with the predictions of the maximum tangential stress (MTS), the maximum energy release rate (MERR), and the maximum strain energy density (MSED) criteria. The criteria are first assessed in their classical forms employed in the literature. It is demonstrated that the energy-based criteria in their classical formulation cannot yield good predictions. We then present modified forms of the ERR and SED functions in which the tensile and shear components are decomposed. These modified forms give significantly better predictions of fracture growth paths. The evaluation of the three criteria illustrates that the modified MSED criterion is the least accurate model even in the modified form, while the results predicted by MTS and modified MERR are well matched with the experimental results.
Ogland-Hand, J.D., S.M. Cohen, R.M. Kammer, K.M. Ellett, M.O. Saar, and J.A. Bennett, The Importance of Modeling Carbon Dioxide Transportation and Geologic Storage in Energy System Planning Tools, Frontiers, 10/855105, 2022. [Download PDF] [View Abstract]Energy system planning tools suggest that the cost and feasibility of climate-stabilizing energy transitions are sensitive to the cost of CO2 capture and storage processes (CCS), but the representation of CO2 transportation and geologic storage in these tools is often simple or non-existent. We develop the capability of producing dynamic-reservoir-simulation-based geologic CO2 storage supply curves with the Sequestration of CO2 Tool (SCO2T) and use it with the ReEDS electric sector planning model to investigate the effects of CO2 transportation and geologic storage representation on energy system planning tool results. We use a locational case study of the Electric Reliability Council of Texas (ERCOT) region. Our results suggest that the cost of geologic CO2 storage may be as low as $3/tCO2 and that site-level assumptions may affect this cost by several dollars per tonne. At the grid level, the cost of geologic CO2 storage has generally smaller effects compared to other assumptions (e.g., natural gas price), but small variations in this cost can change results (e.g., capacity deployment decisions) when policy renders CCS marginally competitive. The cost of CO2 transportation generally affects the location of geologic CO2 storage investment more than the quantity of CO2 captured or the location of electricity generation investment. We conclude with a few recommendations for future energy system researchers when modeling CCS. For example, assuming a cost for geologic CO2 storage (e.g., $5/tCO2) may be less consequential compared to assuming free storage by excluding it from the model.
Ge, S., and M.O. Saar, Review: Induced Seismicity during Geoenergy Development - a Hydromechanical Perspective, Journal of Geophysical Research: Solid Earth, 127/e2021JB02314, 2022. [Download PDF] [View Abstract]The basic triggering mechanism underlying induced seismicity traces back to the mid-1960s that relied on the process of pore-fluid pressure diffusion. The last decade has experienced a renaissance of induced seismicity research and data proliferation. An unprecedent opportunity is presented to us to synthesize the robust growth in knowledge. The objective of this paper is to provide a concise review of the triggering mechanisms of induced earthquakes with a focus on hydro-mechanical processes. Four mechanisms are reviewed: pore-fluid pressure diffusion, poroelastic stress, Coulomb static stress transfer, and aseismic slip. For each, an introduction of the concept is presented, followed by case studies. Diving into these mechanisms sheds light on several outstanding questions. For example, why did some earthquakes occur far from fluid injection or after injection stopped? Our review converges on the following conclusions: (1) Pore-fluid pressure diffusion remains a basic mechanism for initiating inducing seismicity in the near-field. (2) Poroelastic stresses and aseismic slip play an important role in inducing seismicity in regions beyond the influence of pore-fluid pressure diffusion. (3) Coulomb static stress transfer from earlier seismicity is shown to be a viable mechanism for increasing stresses on mainshock faults. (4) Multiple mechanisms have operated concurrently or consecutively at most induced seismicity sites. (5) Carbon dioxide injection is succeeding without inducing earthquakes and much can be learned from its success. Future research opportunities exist in deepening the understanding of physical and chemical processes in the nexus of geoenergy development and fluid motion in the Earth's crust.
Malek, A.E., B.M. Adams, E. Rossi, H.O. Schiegg, and M.O. Saar, Techno-economic analysis of Advanced Geothermal Systems (AGS), Renewable Energy, 2022. [Download PDF] [View Abstract]Advanced Geothermal Systems (AGS) generate electric power through a closed-loop circuit, after a working fluid extracts thermal energy from rocks at great depths via conductive heat transfer from the geologic formation to the working fluid through an impermeable wellbore wall. The slow conductive heat transfer rate present in AGS, compared to heat advection, has made AGS uneconomical to date. To investigate what would be required to render AGS economical, we numerically model an example AGS using the genGEO simulator to obtain its electric power generation and its specific capital cost. Our numerical results show that using CO2 as the working fluid benefits AGS performance. Additionally, we find that there exists a working fluid mass flowrate, a lateral well length, and a wellbore diameter which minimize AGS costs. However, our results also show that AGS remain uneconomical with current, standard drilling technologies. Therefore, significant advancements in drilling technologies, which have the potential to reduce drilling costs by over 50%, are required to enable cost-competitive AGS implementations. Despite these challenges, the economic viability and societal acceptance potential of AGS are significantly raised when considering that negative externalities and their costs, so common for most other power plants, are practically non-existent with AGS.
Ezzat, M., B. M. Adams, M.O. Saar, and D. Vogler, Numerical Modeling of the Effects of Pore Characteristics on the Electric Breakdown of Rock for Plasma Pulse Geo Drilling, Energies, 15/1, 2022. [Download PDF] [View Abstract]Drilling costs can be 80% of geothermal project investment, so decreasing these deep drilling costs substantially reduces overall project costs, contributing to less expensive geothermal electricity or heat generation. Plasma Pulse Geo Drilling (PPGD) is a contactless drilling technique that uses high-voltage pulses to fracture the rock without mechanical abrasion, which may reduce drilling costs by up to 90% of conventional mechanical rotary drilling costs. However, further development of PPGD requires a better understanding of the underlying fundamental physics, specifically the dielectric breakdown of rocks with pore fluids subjected to high-voltage pulses. This paper presents a numerical model to investigate the effects of the pore characteristics (i.e., pore fluid, shape, size, and pressure) on the occurrence of the local electric breakdown (i.e., plasma formation in the pore fluid) inside the granite pores and thus on PPGD efficiency. Investigated are: (i) two pore fluids, consisting of air (gas) or liquid water; (ii) three pore shapes, i.e., ellipses, circles, and squares; (iii) pore sizes ranging from 10 to 150 μm; (iv) pore pressures ranging from 0.1 to 2.5 MPa. The study shows how the investigated pore characteristics affect the local electric breakdown and, consequently, the PPGD process.
Fleming, M.R., B.M. Adams, J.D. Ogland-Hand, J.M. Bielicki, T.H. Kuehn, and M.O. Saar, Flexible CO2-Plume Geothermal (CPG-F): Using Geologically Stored CO2 to Provide Dispatchable Power and Energy Storage, Energy Conversion and Management, 253/115082, 2022. [Download PDF] [View Abstract]CO2-Plume Geothermal (CPG) power plants can use geologically stored CO2 to generate electricity. In this study, a Flexible CO2 Plume Geothermal (CPG-F) facility is introduced, which can use geologically stored CO2 to provide dispatchable power, energy storage, or both dispatchable power and energy storage simultaneously—providing baseload power with dispatchable storage for demand response. It is found that a CPG-F facility can deliver more power than a CPG power plant, but with less daily energy production. For example, the CPG-F facility produces 7.2 MWe for 8 hours (8h-16h duty cycle), which is 190% greater than power supplied from a CPG power plant, but the daily energy decreased by 61% from 60 MWe-h to 23 MWe-h. A CPG-F facility, designed for varying durations of energy storage, has a 70% higher capital cost than a CPG power plant, but costs 4% to 27% more than most CPG-F facilities, designed for a specific duration, while producing 90% to 310% more power than a CPG power plant. A CPG-F facility, designed to switch from providing 100% dispatchable power to 100% energy storage, only costs 3% more than a CPG-F facility, designed only for energy storage.
Ezekiel, J., B.M. Adams, M.O. Saar, and A. Ebigbo, Numerical analysis and optimization of the performance of CO2-Plume Geothermal (CPG) production wells and implications for electric power generation, Geothermics, 98/102270, 2022. [Download PDF] [View Abstract]CO2-Plume Geothermal (CPG) power plants can produce heat and/or electric power. One of the most important parameters for the design of a CPG system is the CO2 mass flowrate. Firstly, the flowrate determines the power generated. Secondly, the flowrate has a significant effect on the fluid pressure drawdown in the geologic reservoir at the production well inlet. This pressure drawdown is important because it can lead to water flow in the reservoir towards and into the borehole. Thirdly, the CO2 flowrate directly affects the two-phase (CO2 and water) flow regime within the production well. An annular flow regime, dominated by the flow of the CO2 phase in the well, is favorable to increase CPG efficiency. Thus, flowrate optimizations of CPG systems need to honor all of the above processes. We investigate the effects of various operational parameters (maximum flowrate, admissible reservoir-pressure drawdown, borehole diameter) and reservoir parameters (permeability anisotropy and relative permeability curves) on the CO2 and water flow regime in the production well and on the power generation of a CPG system. We use a numerical modeling approach that couples the reservoir processes with the well and power plant systems. Our results show that water accumulation in the CPG vertical production well can occur. However, with proper CPG system design, it is possible to prevent such water accumulation in the production well and to maximize CPG electric power output.
Ma, X., M. Hertrich, et al., F. Amann, V. Gischig, T. Driesner, S. Löw, H. Maurer, M.O. Saar, S. Wiemer, and D. Giardini, Multi-disciplinary characterizations of the Bedretto Lab - a unique underground geoscience research facility, Solid Earth, 2021. [Download PDF] [View Abstract]
Mindel, J.E., P. Alt-Eppig, A.A. Les Landes, S. Beernink, D.T. Birdsell, M. Bloemendal, V. Hamm, et al., M.O. Saar, D. Van den Heuvel, and T. Driesner, Benchmark study of simulators for thermo-hydraulic modelling of low enthalpy geothermal processes, Geothermics, 96/102130, 2021. [Download PDF] [View Abstract]In order to assess the thermo-hydraulic modelling capabilities of various geothermal simulators, a comparative test suite was created, consisting of a set of cases designed with conditions relevant to the low-enthalpy range of geothermal operations within the European HEATSTORE research project. In an effort to increase confidence in the usage of each simulator, the suite was used as a benchmark by a set of 10 simulators of diverse origin, formulation, and licensing characteristics: COMSOL, MARTHE, ComPASS, Nexus-CSMP++, MOOSE, SEAWATv4, CODE_BRIGHT, Tough3, PFLOTRAN, and Eclipse 100. The synthetic test cases (TCs) consist of a transient pressure test verification (TC1), a well-test comparison (TC2), a thermal transport experiment validation (TC3), and a convection onset comparison (TC4), chosen to represent well-defined subsets of the coupled physical processes acting in subsurface geothermal operations. The results from the four test cases were compared among the participants, to known analytical solutions, and to experimental measurements where applicable, to establish them as reference expectations for future studies. A basic description, problem specification, and corresponding results are presented and discussed. Most participating simulators were able to perform most tests reliably at a level of accuracy that is considered sufficient for application to modelling tasks in real geothermal projects. Significant relative deviations from the reference solutions occurred where strong, sudden (e.g. initial) gradients affected the accuracy of the numerical discretization, but also due to sub-optimal model setup caused by simulator limitations (e.g. providing an equation of state for water properties).
Ezekiel, J., D. Kumbhat, A. Ebigbo, B.M. Adams, and M.O. Saar, Sensitivity of Reservoir and Operational Parameters on the Energy Extraction Performance of Combined CO2-EGR–CPG Systems, Energies, 14/6122, 2021. [Download PDF] [View Abstract]There is a potential for synergy effects in utilizing CO2 for both enhanced gas recovery (EGR) and geothermal energy extraction (CO2-plume geothermal, CPG) from natural gas reservoirs. In this study, we carried out reservoir simulations using TOUGH2 to evaluate the sensitivity of natural gas recovery, pressure buildup, and geothermal power generation performance of the combined CO2-EGR–CPG system to key reservoir and operational parameters. The reservoir parameters included horizontal permeability, permeability anisotropy, reservoir temperature, and pore-size- distribution index; while the operational parameters included wellbore diameter and ambient surface temperature. Using an example of a natural gas reservoir model, we also investigated the effects of different strategies of transitioning from the CO2-EGR stage to the CPG stage on the energy-recovery performance metrics and on the two-phase fluid-flow regime in the production well. The simulation results showed that overlapping the CO2-EGR and CPG stages, and having a relatively brief period of CO2 injection, but no production (which we called the CO2-plume establishment stage) achieved the best overall energy (natural gas and geothermal) recovery performance. Permeability anisotropy and reservoir temperature were the parameters that the natural gas recovery performance of the combined system was most sensitive to. The geothermal power generation performance was most sensitive to the reservoir temperature and the production wellbore diameter. The results of this study pave the way for future CPG-based geothermal power-generation optimization studies. For a CO2-EGR–CPG project, the results can be a guide in terms of the required accuracy of the reservoir parameters during exploration and data acquisition.
Ezzat, M., D. Vogler, M.O. Saar, and B.M. Adams, Simulating Plasma Formation in Pores under Short Electric Pulses for Plasma Pulse Geo Drilling (PPGD), Energies, 14/16, 2021. [Download PDF] [View Abstract]Plasma Pulse Geo Drilling (PPGD) is a contact-less drilling technique, where an electric discharge across a rock sample causes the rock to fracture. Experimental results have shown PPGD drilling operations are successful if certain electrode spacings, pulse voltages, and pulse rise times are given. However, the underlying physics of the electric breakdown within the rock, which causes damage in the process, are still poorly understood. This study presents a novel methodology to numerically study plasma generation for electric pulses between 200 and 500 kV in rock pores with a width between 10 and 100 μm. We further investigate whether the pressure increase, induced by the plasma generation, is sufficient to cause rock fracturing, which is indicative of the onset of drilling success. We find that rock fracturing occurs in simulations with a 100 μm pore size and an imposed pulse voltage of approximately 400 kV. Furthermore, pulses with voltages lower than 400 kV induce damage near the electrodes, which expands from pulse to pulse, and eventually, rock fracturing occurs. Additionally, we find that the likelihood of fracturing increases with increasing pore voltage drop, which increases with pore size, electric pulse voltage, and rock effective relative permittivity, while being inversely proportional to the rock porosity and pulse rise time.
Birdsell, D. T., B. M. Adams, and M. O. Saar, Minimum Transmissivity and Optimal Well Spacing and Flow Rate for High-Temperature Aquifer Thermal Energy Storage, Applied Energy, 289/116658, pp. 1-14, 2021. [Download PDF] [View Abstract]Aquifer thermal energy storage (ATES) is a time-shifting thermal energy storage technology where waste heat is stored in an aquifer for weeks or months until it may be used at the surface. It can reduce carbon emissions and HVAC costs. Low-temperature (<25 °C) aquifer thermal energy storage (LT-ATES) is already widely-deployed in central and northern Europe, and there is renewed interest in high-temperature (>50 °C) aquifer thermal energy storage (HT-ATES). However, it is unclear if LT-ATES guidelines for well spacing, reservoir depth, and transmissivity will apply to HT-ATES. We develop a thermo-hydro-mechanical-economic (THM$) analytical framework to balance three reservoir-engineering and economic constraints for an HT-ATES doublet connected to a district heating network. We find the optimal well spacing and flow rate are defined by the "reservoir constraints" at shallow depth and low permeability and are defined by the "economic constraints" at great depth and high permeability. We find the optimal well spacing is 1.8 times the thermal radius. We find that the levelized cost of heat is minimized at an intermediate depth. The minimum economically-viable transmissivity (MEVT) is the transmissivity below which HT-ATES is sure to be economically unattractive. We find the MEVT is relatively insensitive to depth, reservoir thickness, and faulting regime. Therefore, it can be approximated as 5·10⁻¹³ m³. The MEVT is useful for HT-ATES pre-assessment and can facilitate global estimates of HT-ATES potential.
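The 1.8-times-thermal-radius spacing rule quoted above can be made concrete with a standard ATES thermal-radius estimate; the sketch below uses assumed heat capacities, storage volume, and aquifer thickness for illustration only (these are not parameters from the paper).

```python
import math

def thermal_radius(volume_injected_m3, c_water=4.2e6, c_aquifer=2.8e6, thickness_m=20.0):
    """Common ATES thermal-radius estimate (assumed textbook form, not taken from the paper):
    R_th = sqrt(c_w * V / (pi * c_aq * L)), with volumetric heat capacities in J/(m^3 K)."""
    return math.sqrt(c_water * volume_injected_m3 / (math.pi * c_aquifer * thickness_m))

# Hypothetical seasonal storage volume of 250,000 m^3 of hot water in a 20 m thick aquifer.
r_th = thermal_radius(250_000.0)
spacing = 1.8 * r_th  # spacing rule reported in the abstract
print(f"thermal radius ~ {r_th:.0f} m, suggested doublet spacing ~ {spacing:.0f} m")
```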
Samrock, F., A.V. Grayver, O. Bachmann, Ö. Karakas, and M.O. Saar, Integrated magnetotelluric and petrological analysis of felsic magma reservoirs: Insights from Ethiopian rift volcanoes , Earth and Planetary Science Letters, 559/116765, 2021. [Download PDF] [View Abstract]Geophysical and petrological probes are key to understanding the structure and the thermochemical state of active magmatic systems. Recent advances in laboratory analyses, field investigations and numerical methods have allowed increasingly complex data-constraint models with new insights into magma plumbing systems and melt evolution. However, there is still a need for methods to quantitatively link geophysical and petrological observables for a more consistent description of magmatic processes at both micro- and macro-scales. Whilst modern geophysical studies provide detailed 3-D subsurface images that help to characterize magma reservoirs by relating state variables with physical material properties, constraints from on-site petrological analyses and thermodynamic modelling of melt evolution are at best incorporated qualitatively. Here, we combine modelling of phase equilibria in cooling magma and laboratory measurements of electrical properties of melt to derive the evolution of electrical conductivity in a crystallizing silicic magmatic system. We apply this framework to 3-D electrical conductivity images from magnetotelluric studies of two volcanoes in the Ethiopian Rift. The presented approach enables us to constrain key variables such as melt content, temperature and magmatic volatile abundance at depth. Our study shows that accounting for magmatic volatiles as an independent phase is crucial for understanding electrical conductivity structures in magma reservoirs at an advanced state of crystallization. Furthermore, our results deepen the understanding of the mechanisms behind volcanic unrest and help assess the long-term potential of hydrothermal reservoirs for geothermal energy production.
Lima, M., H. Javanmard, D. Vogler, M.O. Saar, and X.-Z. Kong, Flow-through Drying during CO2 Injection into Brine-filled Natural Fractures: A Tale of Effective Normal Stress, International Journal of Greenhouse Gas Control, 109, pp. 103378, 2021. [Download PDF] [View Abstract]Injecting supercritical CO2 (scCO2) into brine-filled fracture-dominated reservoirs causes brine displacement and possibly evaporite precipitations that alter the fracture space. Here, we report on isothermal near-field experiments on scCO2-induced flow-through drying in a naturally fractured granodiorite specimen under effective normal stresses of 5-10 MPa, where two drying regimes are identified. A novel approach is developed to delineate the evolution of brine saturation and relative permeability from fluid production and differential pressure measurements. Under higher compressive stresses, the derived relative permeability curves indicate lower mobility of brine and higher mobility of the scCO2 phase. The derived fractional flow curves also suggest an increase in channelling and a decrease in brine displacement efficiencies under higher compressive stresses. Finally, lowering compressive stresses seems to hinder water evaporation. Our experimental results assist in understanding the behaviour of the injectivity of fractures and fracture networks during subsurface applications that involve scCO2 injection into saline formations.
Ma, J., M. Ahkami, M.O. Saar, and X.-Z. Kong, Quantification of mineral accessible surface area and flow-dependent fluid-mineral reactivity at the pore scale, Chemical Geology, 563, pp. 120042, 2021. [Download PDF] [View Abstract]Accessible surface areas (ASAs) of individual rock-forming minerals exert a fundamental control on the maximum mineral reactivity with formation fluids. Notably, ASA efficiency during fluid-rock reactions can vary by orders of magnitude, depending on the inflow fluid chemistry and the velocity field. Due to the lack of adequate quantification methods, determining the mineral-specific ASAs and their reaction efficiency still remains extremely difficult. Here, we first present a novel joint method that appropriately calculates ASAs of individual minerals in a multi-mineral sandstone. This joint method combines SEM-image processing results and Brunauer-Emmett-Teller (BET) surface area measurements by a Monte-Carlo algorithm to derive scaling factors and ASAs for individual minerals at the resolution of BET measurements. Using these atomic-scale ASAs, we then investigate the impact of flow rate on the ASA efficiency in mineral dissolution reactions during the injection of CO2-enriched brine. This is done by conducting a series of pore-scale reactive transport simulations, using a two-dimensional (2D) scanning electron microscopy (SEM) image of this sandstone. The ASA efficiency is determined employing a domain-averaged dissolution rate and the effective surface area of the most reactive phase in the sandstone (dolomite). As expected, the dolomite reactivity is found to increase with the flow rate, due to the on-average high fluid reactivity. The surface efficiency increases slightly with the fluid flow rate, and reaches a relatively stable value of about 1%. The domain-averaged method is then compared with the in-out averaged method (i.e., the "Black-box" approach), which is often used to analyze the experimental observations. The in-out averaged method yields a considerable overestimation of the fluid reactivity, a small underestimation of the dolomite reactivity, and a considerable underestimation of the ASA efficiency. The discrepancy between the two methods becomes smaller as the injection rate increases. Our comparison suggests that results obtained with the in-out averaged method should be interpreted with caution, in particular, when the flow rate is small. Nonetheless, our proposed ASA determination method should facilitate accurate calculations of fluid-mineral reactivity in large-scale reactive transport simulations, and we advise that an upscaling of the ASA efficiency needs to be carefully considered, due to the low surface efficiency.
Javanmard, H., A. Ebigbo, S.D.C. Walsh, M.O. Saar, and D. Vogler, No-Flow Fraction (NFF) permeability model for rough fractures under normal stress, Water Resources Research, 57/3, 2021. [Download PDF] [View Abstract]Flow through rock fractures is frequently represented using models that correct the cubic law to account for the effects of roughness and contact area. However, the scope of such models is often restricted to relatively smooth aperture fields under small confining stresses. This work studies the link between fracture permeability and fracture geometry under normal loads. Numerical experiments are performed to deform synthesized aperture fields of various correlation lengths and roughness values under normal stress. The results demonstrate that aperture roughness can more than triple for applied stresses up to 50 MPa – exceeding the valid range for roughness in most previously published models. Investigating the relationship between permeability and contact area indicates that the increase in flow obstructions due to the development of new contact points strongly depends on the correlation length of the unloaded aperture field. This study eliminates these dependencies by employing a parameter known as the No-Flow Fraction (NFF) to capture the effect of stagnation zones. With this concept, a new Cubic-law-based permeability model is proposed that significantly improves the accuracy of permeability estimations, compared to previous models. For cases, where the NFF is difficult to obtain, we introduce an empirical relationship to estimate the parameter from the aperture roughness. The new models yield permeability estimates accurate to within a factor of two of the simulated permeability in over three quarters of the 3000 deformed fractures studied. This compares with typical deviations of at least one order of magnitude for previously published permeability models.
Ogland-Hand, J., J. Bielicki, B. Adams, E. Nelson, T. Buscheck, M.O. Saar, and R. Sioshansi, The Value of CO2-Bulk Energy Storage with Wind in Transmission-Constrained Electricity Systems, Energy Conversion and Management, 2021. [Download PDF] [View Abstract]High-voltage direct current (HVDC) transmission infrastructure can transmit electricity from regions with high-quality variable wind and solar resources to those with high electricity demand. In these situations, bulk energy storage (BES) could beneficially increase the utilization of HVDC transmission capacity. Here, we investigate that benefit for an emerging BES approach that uses geologically stored CO2 and sedimentary basin geothermal resources to time-shift variable electricity production. For a realistic case study of a 1 GW wind farm in Eastern Wyoming selling electricity to Los Angeles, California (U.S.A.), our results suggest that a generic CO2-BES design can increase the utilization of the HVDC transmission capacity, thereby increasing total revenue across combinations of electricity prices, wind conditions, and geothermal heat depletion. The CO2-BES facility could extract geothermal heat, dispatch geothermally generated electricity, and time-shift wind-generated electricity. With CO2-BES, total revenue always increases and the optimal HVDC transmission capacity increases in some combinations. To be profitable, the facility needs a modest $7.78/tCO2 to $10.20/tCO2, because its cost exceeds the increase in revenue. This last result highlights the need for further research to understand how to design a CO2-BES facility that is tailored to the geologic setting and its intended role in the energy system.
Adams, B.M., D. Vogler, T.H. Kuehn, J.M. Bielicki, N. Garapati, and M.O. Saar, Heat Depletion in Sedimentary Basins and its Effect on the Design and Electric Power Output of CO2 Plume Geothermal (CPG) Systems, Renewable Energy, 172, pp. 1393-1403, 2021. [Download PDF] [View Abstract]CO2 Plume Geothermal (CPG) energy systems circulate geologically stored CO2 to extract geothermal heat from naturally permeable sedimentary basins. CPG systems can generate more electricity than brine systems in geologic reservoirs with moderate temperature and permeability. Here, we numerically simulate the temperature depletion of a sedimentary basin and find the corresponding CPG electricity generation variation over time. We find that for a given reservoir depth, temperature, thickness, permeability, and well configuration, an optimal well spacing provides the largest average electric generation over the reservoir lifetime. If wells are spaced closer than optimal, higher peak electricity is generated, but the reservoir heat depletes more quickly. If wells are spaced greater than optimal, reservoirs maintain heat longer but have higher resistance to flow and thus lower peak electricity is generated. Additionally, spacing the wells 10% greater than optimal affects electricity generation less than spacing wells 10% closer than optimal. Our simulations also show that for a 300 m thick reservoir, a 707 m well spacing provides consistent electricity over 50 years, whereas a 300 m well spacing yields large heat and electricity reductions over time. Finally, increasing injection or production well pipe diameters does not necessarily increase average electric generation.
Leal, A.M.M., S. Kyas, D. Kulik, and M.O. Saar, Accelerating Reactive Transport Modeling: On-Demand Machine Learning Algorithm for Chemical Equilibrium Calculations, Transport in Porous Media, 133, pp. 161-204, 2020. [Download PDF] [View Abstract]During reactive transport modeling, the computing cost associated with chemical equilibrium calculations can be 10 to 10,000 times higher than that of fluid flow, heat transfer, and species transport computations. These calculations are performed at least once per mesh cell and once per time step, amounting to billions of them throughout the simulation employing high-resolution meshes. To radically reduce the computing cost of chemical equilibrium calculations (each requiring an iterative solution of a system of nonlinear equations), we consider an on-demand machine learning algorithm that enables quick and accurate prediction of new chemical equilibrium states using the results of previously solved chemical equilibrium problems within the same reactive transport simulation. The training operations occur on-demand, rather than before the start of the simulation when it is not clear how many training points are needed to accurately and reliably predict all possible chemical conditions that may occur during the simulation. Each on-demand training operation consists of fully solving the equilibrium problem and storing some key information about the just computed chemical equilibrium state (which is used subsequently to rapidly predict similar states whenever possible). We study the performance of the on-demand learning algorithm, which is mass conservative by construction, by applying it to a reactive transport modeling example and achieve a speed-up of one or two orders of magnitude (depending on the activity model used). The implementation and numerical tests are carried out in Reaktoro (reaktoro.org), a unified open-source framework for modeling chemically reactive systems.
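The on-demand strategy described in this abstract (solve the equilibrium problem fully only when no previously stored state is close enough, otherwise predict from stored information) can be illustrated with a deliberately simplified sketch; the class name, the nearest-neighbour acceptance test, and the first-order update below are illustrative assumptions and do not reproduce the Reaktoro implementation.

```python
import numpy as np

class OnDemandEquilibrium:
    """Illustrative on-demand learning cache for equilibrium calculations: store
    (input, solution, sensitivity) triples from full solves and reuse them via a
    first-order prediction whenever a new input is close to a stored one."""

    def __init__(self, full_solver, tol=1e-3):
        self.full_solver = full_solver  # returns (solution_vector, sensitivity_matrix)
        self.tol = tol
        self.records = []

    def equilibrate(self, b):
        b = np.asarray(b, dtype=float)
        for b0, n0, dndb in self.records:
            if np.linalg.norm(b - b0) / (np.linalg.norm(b0) + 1e-30) < self.tol:
                return n0 + dndb @ (b - b0)      # fast prediction from a stored state
        n, dndb = self.full_solver(b)            # on-demand "training": full solve
        self.records.append((b.copy(), n, dndb))
        return n

# Toy stand-in for a full equilibrium solve (a made-up linear map, for illustration only).
M = np.array([[1.0, 0.2], [0.1, 0.9]])
def toy_solver(b):
    return M @ b, M

cache = OnDemandEquilibrium(toy_solver, tol=0.05)
print(cache.equilibrate([1.0, 2.0]))   # triggers a full solve
print(cache.equilibrate([1.01, 2.0]))  # answered by first-order prediction
```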
Garapati, N., B.M. Adams, M.R. Fleming, T.H. Kuehn, and M.O. Saar, Combining brine or CO2 geothermal preheating with low-temperature waste heat: A higher-efficiency hybrid geothermal power system, Journal of CO2 Utilization, 42, 2020. [Download PDF] [View Abstract]Hybrid geothermal power plants operate by using geothermal fluid to preheat the working fluid of a higher temperature power cycle for electricity generation. This has been shown to yield higher electricity generation than the combination of a stand-alone geothermal power plant and the higher-temperature power cycle. Here, we test both a direct CO2 hybrid geothermal system and an indirect brine hybrid geothermal system. The direct CO2 hybrid geothermal system is a CO2 Plume Geothermal (CPG) system, which uses CO2 as the subsurface working fluid, but with auxiliary heat addition to the geologically produced CO2 at the surface. The indirect brine geothermal system uses the hot geologically produced brine to preheat the secondary working fluid (CO2) within a secondary power cycle. We find that the direct CPG-hybrid system and the indirect brine-hybrid system both can generate 20 % more electric power than the summed power of individual geothermal and auxiliary systems in some cases. Each hybrid system has an optimum turbine inlet temperature which maximizes the electric power generated, and is typically between 100 ◦C and 200 ◦C in the systems examined. The optimum turbine inlet temperature tends to occur where the geothermal heat contribution is between 50 % and 70 % of the total heat addition to the hybrid system. Lastly, the CO2 direct system has lower wellhead temperatures than indirect brine and therefore can utilize lower temperature resources.
Lima, M., P. Schädle, C. Green, D. Vogler, M.O. Saar, and X.-Z. Kong, Permeability Impairment and Salt Precipitation Patterns during CO2 Injection into Single Natural Brine-filled Fractures, Water Resources Research, 56/8, pp. e2020WR027213, 2020. [Download PDF] [View Abstract]Formation dry-out in fracture-dominated geological reservoirs may alter the fracture space, impair rock absolute permeability and cause a significant decrease in well injectivity. In this study, we numerically model the dry-out processes occurring during supercritical CO2 (scCO2) injection into single brine-filled fractures and evaluate the potential for salt precipitation under increasing effective normal stresses in the evaporative regime. We use an open-source, parallel finite-element framework to numerically model two-phase flow through 2-Dimensional fracture planes with aperture fields taken from naturally fractured granite cores at the Grimsel Test Site in Switzerland. Our results reveal a displacement front and a subsequent dry-out front in all simulated scenarios, where higher effective stresses caused more flow channeling, higher rates of water evaporation and larger volumes of salt precipitates. However, despite the larger salt volumes, the permeability impairment was lower at higher effective normal stresses. We conclude that the spatial distribution of the salt, precipitated in fractures with heterogeneous aperture fields, strongly affects the absolute permeability impairment caused by formation dry-out. The numerical simulations assist in understanding the behavior of the injectivity in fractures and fracture networks during subsurface applications that involve scCO2 injection into brine.
Hefny, M., C.-Z. Qin, M.O. Saar, and A. Ebigbo, Synchrotron-based pore-network modeling of two-phase flow in Nubian Sandstone and implications for capillary trapping of carbon dioxide, International Journal of Greenhouse Gas Control, 103/1031642, 2020. [Download PDF] [View Abstract]Depleted oil fields in the Gulf of Suez (Egypt) can serve as geothermal reservoirs for power production using a CO2-Plume Geothermal (CPG) system, while geologically sequestering CO2. This entails the injection of a substantial amount of CO2 into the highly permeable brine-saturated Nubian Sandstone. Numerical models of two-phase flow processes are indispensable for predicting the CO2-plume migration at a representative geological scale. Such models require reliable constitutive relationships, including relative permeability and capillary pressure curves. In this study, quasi-static pore-network modeling has been used to simulate the equilibrium positions of fluid-fluid interfaces, and thus determine the capillary pressure and relative permeability curves. Three-dimensional images with a voxel size of 0.65 μm3 of a Nubian Sandstone rock sample have been obtained using Synchrotron Radiation X-ray Tomographic Microscopy. From the images, topological properties of pores/throats were constructed. Using a pore-network model, we performed a sequential primary drainage–main imbibition cycle of quasi-static invasion in order to quantify (1) the CO2 and brine relative permeability curves, (2) the effect of initial wetting-phase saturation (i.e. the saturation at the point of reversal from drainage to imbibition) on the residual–trapping potential, and (3) study the relative permeability–saturation hysteresis. The results illustrate the sensitivity of the pore-scale fluid-displacement and trapping processes on some key parameters (i.e. advancing contact angle, pore-body-to-throat aspect ratio, and initial wetting-phase saturation) and improve our understanding of the potential magnitude of capillary trapping in Nubian Sandstone.
Gischig, V.S., D. Giardini, F. Amann, et al., K.F. Evans, et al., A. Kittilä, X. Ma, et al., M.O. Saar, et al., Hydraulic stimulation and fluid circulation experiments in underground laboratories: Stepping up the scale towards engineered geothermal systems, Geomechanics for Energy and the Environment, 100175, 2020. [Download PDF] [View Abstract]The history of reservoir stimulation to extract geothermal energy from low permeability rock (i.e. so-called petrothermal or engineered geothermal systems, EGS) highlights the difficulty of creating fluid pathways between boreholes, while keeping induced seismicity at an acceptable level. The worldwide research community sees great value in addressing many of the unresolved problems in down-scaled in-situ hydraulic stimulation experiments. Here, we present the rationale, concepts and initial results of stimulation experiments in two underground laboratories in the crystalline rocks of the Swiss Alps. A first experiment series at the 10 m scale was completed in 2017 at the Grimsel Test Site, GTS. Observations of permeability enhancement and induced seismicity show great variability between stimulation experiments in a small rock mass body. Monitoring data give detailed insights into the complexity of fault stimulation induced by highly heterogeneous pressure propagation, the formation of new fractures and stress redistribution. Future experiments at the Bedretto Underground Laboratory for Geoenergies, BULG, are planned to be at the 100 m scale, closer to conditions of actual EGS projects, and a step closer towards combining fundamental process-oriented research with testing techniques proposed by industry partners. Thus, effective and safe hydraulic stimulation approaches can be developed and tested, which should ultimately lead to an improved acceptance of EGS.
Vogler, D., S.D.C. Walsh, and M.O. Saar, A Numerical Investigation into Key Factors Controlling Hard Rock Excavation via Electropulse Stimulation, Journal of Rock Mechanics and Geotechnical Engineering, 12/4, pp. 793-801, 2020. [Download PDF] [View Abstract]Electropulse stimulation provides an energy-efficient means of excavating hard rocks through repeated application of high voltage pulses to the rock surface. As such, it has the potential to confer significant advantages to mining and drilling operations for mineral and energy resources. Nevertheless, before these benefits can be realized, a better understanding of these processes is required to improve their deployment in the field. In this paper, we employ a recently developed model of the grain-scale processes involved in electropulse stimulation to examine excavation of hard rock under realistic operating conditions. To that end, we investigate maximum applied voltages in the range of 120 kV to 600 kV to observe the onset of rock fragmentation. We further study the effect of grain size on rock breakage, by comparing fine- and coarse-grained rocks modeled after granodiorite and granite, respectively. Lastly, the pore fluid salinity is investigated, since the electric conductivity of the pore fluid is shown to be a governing factor for the electrical conductivity of the system. This study demonstrates that all investigated factors are crucial to the efficiency of rock fragmentation by electropulsing.
Ma, X., M.O. Saar, and L.-S. Fan, Coulomb Criterion - Bounding Crustal Stress Limit and Intact Rock Failure: Perspectives, Powder Technology, 374, pp. 106-110, 2020. [Download PDF] [View Abstract]In this perspective article, we illustrate the importance and versatility of the Coulomb criterion that serves as a bridge between the fields of powder technology and rock mechanics/geomechanics. We first describe the essence of the Coulomb criterion and its physical meaning, revealing surprising similarities regarding its applications between both fields. We then discuss the rock mechanics applications and limitations at two extreme scales, the Earth's crust (tens of kilometers) and intact rocks (meters). We finally offer thoughts on bridging these scales. The context of the article is essential not only to the rock mechanics/geomechanics community but also to a broader powder technology community.
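For reference, the criterion discussed in this article is commonly written with a cohesion and a friction coefficient acting on the effective normal stress; the short check below uses that standard form with made-up stress values.

```python
def coulomb_slip(shear_stress, normal_stress, pore_pressure=0.0,
                 cohesion=0.0, friction_coefficient=0.6):
    """Standard Coulomb criterion: failure/slip when |tau| >= c + mu * (sigma_n - p)."""
    return abs(shear_stress) >= cohesion + friction_coefficient * (normal_stress - pore_pressure)

# Made-up stresses in MPa: 40 MPa shear, 80 MPa normal stress, 20 MPa pore pressure.
print(coulomb_slip(40.0, 80.0, pore_pressure=20.0))  # True, since 40 >= 0.6 * (80 - 20) = 36
```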
Vogler, D., S.D.C. Walsh, Ph. Rudolf von Rohr, and M.O. Saar, Simulation of rock failure modes in thermal spallation drilling, Acta Geotechnica, 15/8, pp. 2327-2340, 2020. [Download PDF] [View Abstract]Thermal spallation drilling is a contact-less means of borehole excavation that works by exposing a rock surface to a high-temperature jet flame. In this study, we investigate crucial factors for the success of such thermal drilling operations using numerical simulations of the thermomechanical processes leading to rock failure at the borehole surface. To that end, we integrate a model developed for spalling failure with our thermomechanical simulations. In particular, we consider the role of material heterogeneities, maximum jet-flame temperature and maximum jet-flame temperature rise time on the onset of inelastic deformation and subsequent damage. We further investigate differences in energy consumption for the studied system configurations. The simulations highlight the importance of material composition, as thermal spallation is favored in fine-grained material with strong material heterogeneity. The model is used to test the relationship between the jet-flame temperature and the onset of thermal spallation.
von Planta, C., D. Vogler, P. Zulian, M.O. Saar, and R. Krause, Contact between rough rock surfaces using a dual mortar method, International Journal of Rock Mechanics and Mining Sciences (IJRMMS), 133, pp. 104414, 2020. [Download PDF] [View Abstract]The mechanical behavior of fractures in rocks has strong implications for reservoir engineering applications. Deformations, and the corresponding change in contact area and aperture field, impact rock fracture stiffness and permeability, thus altering the reservoir properties significantly. Simulating contact between fractures is numerically difficult as the non-penetration constraints lead to a nonlinear problem and the surface meshes of the solid bodies on the opposing fracture sides may be non-matching. Furthermore, due to the complex geometry, the non-penetration constraints must be updated throughout the solution procedure. Here we present a novel implementation of a dual mortar method for contact. It uses a non-smooth sequential quadratic programming method as solver, and is suitable for parallel computing. We apply it to a two-body contact problem consisting of realistic rock fracture geometries from the Grimsel underground laboratory in Switzerland. The contributions of this article are: 1) a novel, parallel implementation of a dual mortar and non-smooth sequential quadratic programming method, 2) realistic rock geometries with rough surfaces, and 3) numerical examples, which prove that the dual mortar method is capable of replicating the nonlinear closure behavior of fractures observed in laboratory experiments.
Ma, J., L. Querci, B. Hattendorf, M.O. Saar, and X.-Z. Kong, The effect of mineral dissolution on the effective stress law for permeability in a tight sandstone, Geophysical Research Letters, 2020. [Download PDF] [View Abstract]We present flow-through experiments to delineate the processes involved in permeability changes driven by effective stress variations and mineral cement dissolution in porous rocks. CO2-enriched brine is injected continuously into a tight sandstone under in-situ reservoir conditions for 455 hours. Due to the dolomite cement dissolution, the bulk permeability of the sandstone specimen significantly increases, and two dissolution passages are identified near the fluid inlet by X-ray CT imaging. Pre- and post-reaction examinations of the effective stress law for permeability suggest that after reaction the bulk permeability is more sensitive to pore pressure changes and less sensitive to effective stress changes. These observations are corroborated by Scanning Electron Microscopy and X-ray CT observations. This study deepens our understanding of the effect of mineral dissolution on the effective stress law for permeability, with implications for characterizing subsurface mass and energy transport, particularly during fluid injection/production into/from geologic reservoirs.
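The effective stress law for permeability examined here is commonly parameterized with an effective-stress coefficient; the sketch below shows one generic way such a coefficient could be fitted from permeability measured at several confining and pore pressures (the data and the fitting choice are illustrative assumptions, not the procedure used in the paper).

```python
import numpy as np

def fit_effective_stress_coefficient(k, p_conf, p_pore, chis=np.linspace(0.0, 2.0, 201)):
    """Grid-search the coefficient chi for which ln(k) is best described as a linear
    function of the effective stress sigma_eff = p_conf - chi * p_pore."""
    k, p_conf, p_pore = map(np.asarray, (k, p_conf, p_pore))
    best_chi, best_res = None, np.inf
    for chi in chis:
        sigma_eff = p_conf - chi * p_pore
        coeffs = np.polyfit(sigma_eff, np.log(k), 1)
        residual = np.sum((np.log(k) - np.polyval(coeffs, sigma_eff)) ** 2)
        if residual < best_res:
            best_chi, best_res = chi, residual
    return best_chi

# Synthetic measurements generated with chi = 0.8 (pressures in MPa, permeability in m^2).
p_conf = np.array([20.0, 25.0, 30.0, 20.0, 25.0, 30.0])
p_pore = np.array([5.0, 5.0, 5.0, 10.0, 10.0, 10.0])
k = 1e-18 * np.exp(-0.05 * (p_conf - 0.8 * p_pore))
print(fit_effective_stress_coefficient(k, p_conf, p_pore))  # approximately 0.8
```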
Fleming, M.R., B.M. Adams, T.H. Kuehn, J.M. Bielicki, and M.O. Saar, Increased Power Generation due to Exothermic Water Exsolution in CO2 Plume Geothermal (CPG) Power Plants, Geothermics, 88/101865, 2020. [Download PDF] [View Abstract]A direct CO2-Plume Geothermal (CPG) system is a novel technology that uses captured and geologically stored CO2 as the subsurface working fluid in sedimentary basin reservoirs to extract geothermal energy. In such a CPG system, the CO2 that enters the production well is likely saturated with H2O from the geothermal reservoir. However, direct CPG models thus far have only considered energy production via pure (i.e., dry) CO2 in the production well and its direct conversion in power generation equipment. Therefore, we analyze here how the wellhead fluid pressure, temperature, liquid water fraction, and the resultant CPG turbine power output are impacted by the production of CO2 saturated with H2O for reservoir depths ranging from 2.5 km to 5.0 km and geothermal temperature gradients between 20 °C/km and 50 °C/km. We demonstrate that the H2O in solution is exothermically exsolved in the vertical well, increasing the fluid temperature relative to dry CO2, resulting in the production of liquid H2O at the wellhead. The increased wellhead fluid temperature increases the turbine power output on average by 15% to 25% and up to a maximum of 41%, when the water enthalpy of exsolution is considered and the water is (conservatively) removed before the turbine, which decreases the fluid mass flow rate through the turbine and thus power output. We show that the enthalpy of exsolution and the CO2-H2O solution density are fundamental components in the calculation of CPG power generation and thus should not be neglected or substituted with the properties of dry CO2.
Tutolo, B., A. Luhmann, X.-Z. Kong, B. Bagley, D. Alba-Venero, N. Mitchell, M.O. Saar, and W.E. Seyfried, Contributions of visible and invisible pores to reactive transport in dolomite, Geochemical Perspectives Letters, 14, pp. 42-46, 2020. [Download PDF] [View Abstract]Recent technical advances have demonstrated the importance of pore-scale geochemical processes for governing Earth's evolution. However, the contribution of pores at different scales to overall geochemical reactions remains poorly understood. Here, we integrate multiscale characterization and reactive transport modeling to study the contribution of pore-scale geochemical processes to the hydrogeochemical evolution of dolomite rock samples during CO2-driven dissolution experiments. Our results demonstrate that approximately half of the total pore volume is invisible at the scale of commonly used imaging techniques. Comparison of pre- and post-experiment analyses demonstrates that porosity-increasing, CO2-driven dissolution processes preferentially occur in pores 600 nm – 5 μm in size, but pores <600 nm in size show no change during experimental alteration. This latter observation, combined with the anomalously high rates of trace element release during the experiments, suggests that nanoscale pores are accessible to through-flowing fluids. A three-dimensional simulation performed directly on one of the samples shows that steady-state pore-scale trace element reaction rates must be ~10× faster than that of dolomite in order to match measured effluent concentrations, consistent with the large surface area-to-volume ratio in these pores. Together, these results yield a new conceptual model of pore-scale processes, and urge caution when interpreting the trace element concentrations of ancient carbonate rocks.
Kittilä, A., M.R. Jalali, M.O. Saar, and X.-Z. Kong, Solute tracer test quantification of the effects of hot water injection into hydraulically stimulated crystalline rock, Geothermal Energy, 8/17, 2020. [Download PDF] [View Abstract]When water is injected into a fracture-dominated reservoir that is cooler or hotter than the injected water, the reservoir permeability is expected to be altered by the injection-induced thermo-mechanical effects, resulting in the redistribution of fluid flow in the reservoir. These effects are important to be taken into account when evaluating the performance and lifetime particularly of Enhanced Geothermal Systems (EGS). In this paper, we compare the results from two dye tracer tests, conducted before (at ambient temperature of 13 °C) and during the injection of 45 °C hot water into a fractured crystalline rock at the Grimsel Test Site in Switzerland. Conducting a moment analysis on the recovered tracer residence time distribution (RTD) curves, we observe, after hot water injection, a significant decrease in the total tracer recovery. This recovery decrease strongly suggests that fluid flow was redistributed in the studied rock volume and that the majority of the injected water was lost to the far-field. Furthermore, by using temperature measurements, obtained from the same locations as the tracer RTD curves, we conceptualize an approach to estimate the fracture surface area contributing to the heat exchange between the host rock and the circulating fluid. Our moment analysis and simplified estimation of fracture surface area provide insights into the hydraulic properties of the hydraulically active fracture system and the changes in fluid flow. Such insights are important to assess the heat exchange performance of a geothermal formation during fluid circulation and to estimate the lifetime of the geothermal formation, particularly in EGS.
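The moment analysis applied to the recovered tracer residence time distribution (RTD) curves relies on standard temporal moments; the sketch below illustrates those definitions on a synthetic breakthrough curve (the flow rate, injected mass, and curve shape are made-up values, not data from the Grimsel experiments).

```python
import numpy as np

def tracer_moments(t, c, q, mass_injected):
    """Standard temporal moments of a tracer breakthrough curve: zeroth moment
    M0 = integral of c dt, recovery = Q*M0/mass, mean residence time = M1/M0,
    and a simple swept-volume estimate V = Q * t_mean."""
    m0 = np.trapz(c, t)
    m1 = np.trapz(t * c, t)
    recovery = q * m0 / mass_injected
    t_mean = m1 / m0
    return recovery, t_mean, q * t_mean

# Synthetic breakthrough curve (minutes, g/L) at a recovery flow rate of 1 L/min,
# after injecting 1 g of tracer; values are illustrative only.
t = np.linspace(1.0, 600.0, 600)
c = 2e-3 * np.exp(-0.5 * ((np.log(t) - np.log(120.0)) / 0.5) ** 2)
recovery, t_mean, v_swept = tracer_moments(t, c, q=1.0, mass_injected=1.0)
print(f"recovery = {recovery:.2f}, mean residence time = {t_mean:.0f} min, swept volume = {v_swept:.0f} L")
```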
Ezekiel, J., A. Ebigbo, B. M. Adams, and M. O. Saar, Combining natural gas recovery and CO2-based geothermal energy extraction for electric power generation, Applied Energy, 269/115012, 2020. [Download PDF] [View Abstract]We investigate the potential for extracting heat from produced natural gas and utilizing supercritical carbon dioxide (CO2) as a working fluid for the dual purpose of enhancing gas recovery (EGR) and extracting geothermal energy (CO2-Plume Geothermal – CPG) from deep natural gas reservoirs for electric power generation, while ultimately storing all of the subsurface-injected CO2. Thus, the approach constitutes a CO2 capture double-utilization and storage (CCUUS) system. The synergies achieved by the above combinations include shared infrastructure and subsurface working fluid. We integrate the reservoir processes with the wellbore and surface power-generation systems such that the combined system's power output can be optimized. Using the subsurface fluid flow and heat transport simulation code TOUGH2, coupled to a wellbore heat-transfer model, we set up an anticlinal natural gas reservoir model and assess the technical feasibility of the proposed system. The simulations show that the injection of CO2 for natural gas recovery and for the establishment of a CO2 plume (necessary for CPG) can be conveniently combined. During the CPG stage, following EGR, a CO2-circulation mass flowrate of 110 kg/s results in a maximum net power output of 2 MWe for this initial, conceptual, small system, which is scalable. After a decade, the net power decreases when thermal breakthrough occurs at the production wells. The results confirm that the combined system can improve the gas field's overall energy production, enable CO2 sequestration, and extend the useful lifetime of the gas field. Hence, deep (partially depleted) natural gas reservoirs appear to constitute ideal sites for the deployment of not only geologic CO2 storage but also CPG.
Rossi, E. , M.O. Saar, and Ph. Rudolf von Rohr, The influence of thermal treatment on rock-bit interaction: a study of a combined thermo-mechanical drilling (CTMD) concept, Geothermal Energy, 8/16, 2020. [Download PDF] [View Abstract]To improve the economics and viability of accessing deep georesources, we propose a combined thermo–mechanical drilling (CTMD) method, employing a heat source to facilitate the mechanical removal of rock, with the aim of increasing drilling performance and thereby reducing the overall costs, especially for deep wells in hard rocks. In this work, we employ a novel experiment setup to investigate the main parameters of interest during the interaction of a cutter with the rock material, and we test untreated and thermally treated sandstone and granite, to understand the underlying rock removal mechanism and the resulting drilling performance improvements achievable with the new approach. We find that the rock removal process can be divided into three main regimes: first, a wear-dominated regime, followed by a compression-based progression of the tool at large penetrations, and a final tool fall-back regime for increasing scratch distances. We calculate the compressive rock strengths from our tests to validate the above regime hypothesis, and they are in good agreement with literature data, explaining the strength reduction after treatment of the material by extensive induced thermal cracking of the rock. We evaluate the new method's drilling performance and confirm that thermal cracks in the rock can considerably enhance subsequent mechanical rock removal rates and related drilling performance by one order of magnitude in granite, while mainly reducing the wear rates of the cutting tools in sandstone.
Rossi, E., S. Jamali, V. Wittig, M.O. Saar, and Ph. Rudolf von Rohr, A combined thermo-mechanical drilling technology for deep geothermal and hard rock reservoirs, Geothermics, 85/101771, 2020. [Download PDF] [View Abstract]Combined thermo-mechanical drilling is a novel technology to enhance drilling performance in deep hard rock formations. In this work, we demonstrate this technology in the field by implementing the concept on a full-scale drilling rig, and we show its feasibility under realistic process conditions. We provide evidence that the novel drilling method can increase the removal performance in hard rocks by up to a factor of three, compared to conventional drilling methods. From the findings of this work, we conclude that integration of thermal assistance to conventional rotary drilling constitutes an interesting approach to facilitate the drilling process, and therefore increase the access viability to deep georesources in hard rocks.
Rossi, E., S. Jamali, M.O. Saar, and Ph. Rudolf von Rohr, Field test of a Combined Thermo-Mechanical Drilling technology. Mode I: Thermal spallation drilling, Journal of Petroleum Science and Engineering, 190/107005, 2020. [Download PDF] [View Abstract]Accessing hydrocarbons, geothermal energy and mineral resources requires more and more drilling to great depths and into hard rocks, as many shallow resources in soft rocks have been mined already. Drilling into hard rock to great depths, however, requires reducing the effort (i.e., energy), time (i.e., increasing the rate of penetration) and cost associated with such operations. Thus, a Combined Thermo-Mechanical Drilling (CTMD) technology is proposed, which employs a heat source (e.g., a flame jet) and includes two main drilling modes: (I) thermal spallation drilling, investigated here as a field test and (II) flame-assisted rotary drilling, investigated as a field test in the companion paper. The CTMD technology is expected to reduce drilling costs, especially in hard rocks, by enhancing the rock penetration rate and increasing the bit lifetime. Mode I of the CTMD technology (thermal spallation drilling) is investigated here by implementing the concept on a full-scale drilling rig to investigate its feasibility and performance under realistic field conditions. During the test, the successful thermal spallation process is monitored, employing a novel acoustic emission system. The effects of thermal spallation in the granite rock are analyzed to provide conclusions regarding the rock removal performance and the application potential of the technology. The field test shows that thermal spallation of the granitic rock can be successfully achieved even when a liquid (water) is used as the drilling fluid, as long as the heat source is appropriately shielded by compressed-air jets. Thermal damage of the surrounding rock is investigated after the spallation test, employing micro-computed tomography imaging and modeling the stability of the cracks generated by the spallation field test. This study shows that thermally induced damage is mainly confined within a narrow region close to the rock surface, suggesting that thermal spallation only marginally affects the overall mechanical stability of the borehole. Thus, this confirms that, as part of the Combined Thermo-Mechanical Drilling (CTMD) technology, thermal spallation drilling is a promising mode that has a high potential of facilitating the drilling of deep boreholes in hard rocks.
Rossi, E., S. Jamali, D. Schwarz, M.O. Saar, and Ph. Rudolf von Rohr, Field test of a Combined Thermo-Mechanical Drilling technology. Mode II: Flame-assisted rotary drilling, Journal of Petroleum Science and Engineering, 190/106880, 2020. [Download PDF] [View Abstract]To enhance the drilling performance in deep hard rocks and reduce overall drilling efforts, this work proposes a Combined Thermo-Mechanical Drilling (CTMD) technology. This technology employs a heat source (e.g., a flame jet) and includes two main drilling modes: (I) thermal spallation drilling, investigated in the companion paper and (II) flame-assisted rotary drilling, investigated here as a field test. The CTMD technology is expected to reduce drilling efforts, especially in hard rocks, enhancing the rock penetration rate and increasing the bit lifetime, all of which reduces the drilling costs. The present work investigates Mode II (flame-assisted rotary drilling) of the CTMD technology by implementing the concept in an existing drilling rig and testing the technology under relevant process conditions. This contribution studies the underlying rock removal mechanism of CTMD and demonstrates its drilling performance, compared to conventional rotary drilling methods. Acoustic emission monitoring and analysis of the collected drill cuttings provide multiple lines of evidence for thermal-cracking-enhanced rock removal during the flame-assisted rotary drilling. This removal mechanism appears to represent an optimal compromise to minimize rock fragmentation and cutting transport efforts during drilling, compared to a less efficient mechanical scraping of the hard granite rock, observed during the standalone-mechanical drill test. The drilling performance, in terms of removal and wear rates, is evaluated for the flame-assisted rotary drilling. This shows that the proposed drilling approach is capable of enhancing the removal process in hard granite rock, by a factor of 2.5, compared to standalone-mechanical drilling. The implementation of this drilling approach into a conventional drilling system shows that integration of thermal assistance to conventional rotary drilling requires marginal technical efforts. Additionally, this technology can profit from established knowledge in conventional mechanical drilling, facilitating its implementation to improve drilling performance in hard rocks. Hence, this study demonstrates that the Combined Thermo-Mechanical Drilling method is feasible and concludes that this technology constitutes a promising approach to improve the drilling process, thereby increasing the viability of accessing deep geo-resources in hard rocks.
Kittilä, A., M.R. Jalali, M. Somogyvári, K.F. Evans, M.O. Saar, and X.-Z. Kong, Characterization of the effects of hydraulic stimulation with tracer-based temporal moment analysis and tomographic inversion, Geothermics, 86/101820, 2020. [Download PDF] [View Abstract]Tracer tests were conducted as part of decameter-scale in-situ hydraulic stimulation experiments at the Grimsel Test Site to investigate the hydraulic properties of a stimulated crystalline rock volume and to study the stimulation-induced hydrodynamic changes. Temporal moment analysis yielded an increase in tracer swept pore volume with prominent flow channeling. Post-stimulation tomographic inversion of the hydraulic conductivity, K, distribution indicated an increase in the geometric mean of logK and a decrease in the Dykstra-Parsons heterogeneity index. These results indicate that new flow path connections were created by the stimulation programs, enabling the tracers to sweep larger volumes, while accessing flow paths with larger hydraulic conductivities.
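As a minimal illustration of the temporal moment analysis used in the entry above, the sketch below computes mass recovery, mean residence time, and swept pore volume from a tracer breakthrough curve. The function name, the constant-production-rate assumption, and the q·t_mean estimate of swept volume are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def btc_moments(t, c, q, m_injected):
    """Temporal moments of a tracer breakthrough curve c(t) sampled at times t,
    for a constant production rate q and injected tracer mass m_injected."""
    m0 = np.trapz(c, t)              # zeroth temporal moment of the concentration signal
    m1 = np.trapz(t * c, t)          # first temporal moment
    recovery = q * m0 / m_injected   # recovered tracer mass fraction
    t_mean = m1 / m0                 # mean residence time
    swept_volume = q * t_mean        # swept pore volume (simple q * t_mean estimate)
    return recovery, t_mean, swept_volume
```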
Nejati, M., A. Aminzadeh, T. Driesner, and M.O. Saar, On the directional dependency of Mode I fracture toughness in anisotropic rocks, Theoretical and Applied Fracture Mechanics, 107/102494, 2020. [Download PDF] [View Abstract]This paper presents a theoretical and experimental analysis of the directional variations of different measures of Mode I fracture toughness in anisotropic rocks and possibly other types of solids. We report the theoretical basis for the directional dependence of three measures of fracture toughness: the critical stress intensity factor, the critical energy release rate and the critical strain energy density. The equivalency of these three measures in anisotropic materials is discussed. We then provide a full set of experimental results on the fracture toughness variation in an anisotropic rock that exhibits transverse isotropy. The results give supporting evidence that the critical Mode I stress intensity factor in fact varies with direction based on a sinusoidal function. This indicates that there exist two principal values of the fracture toughness along with the principal material directions within the plane. Once these two principal values are determined, all three measures of the fracture toughness can be predicted in any direction, provided that the elastic constants of the material are known, and that the symmetry condition employed in this analysis is fulfilled.
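A hedged sketch of the sinusoidal directional dependence referred to above (the exact functional form and angle convention used by the authors may differ): with $K_{Ic}^{(1)}$ and $K_{Ic}^{(2)}$ the two principal Mode I toughness values and $\theta$ the crack orientation measured from the corresponding principal material direction,

$$K_{Ic}(\theta) \approx K_{Ic}^{(1)}\cos^2\theta + K_{Ic}^{(2)}\sin^2\theta,$$

so that measuring the two principal values suffices to predict the toughness in any in-plane direction, given the elastic constants.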
von Planta, C., D. Vogler, X. Chen, M.G.C. Nestola, M.O. Saar, and R. Krause, Modelling of hydro-mechanical processes in heterogeneous fracture intersections using a fictitious domain method with variational transfer operators, Computational Geosciences, 2020. [Download PDF] [View Abstract]Fluid flow in rough fractures and the coupling with the mechanical behavior of the fractures pose great difficulties for numerical modeling approaches, due to complex fracture surface topographies, the non-linearity of hydromechanical processes and their tightly coupled nature. To this end, we have adapted a fictitious domain method to enable the simulation of hydromechanical processes in fracture-intersections. The main characteristic of the method is the immersion of the fracture domain, modelled as a linear elastic solid, in the surrounding fluid, modelled with the incompressible Navier Stokes equations. The fluid and the solid problems are coupled with variational transfer operators. Variational transfer operators are also used to solve contact within the fracture using a mortar approach and to generate problem specific fluid grids. With respect to our applications, the key features of the method are the usage of different finite element discretizations for the solid and the fluid problem and the automatically generated representation of the fluid-solid boundary. We demonstrate that the presented methodology resolves small-scale roughness on the fracture surface, while capturing fluid flow field changes during mechanical loading. Starting with 2D/3D benchmark simulations of intersected fractures, we end with an intersected fracture composed of complex fracture surface topographies, which are in contact under increasing loads. The contributions of this article are: (1) the application of the fictitious domain method to study flow in fractures with intersections, (2) a mortar based contact solver for the solid problem, (3) generation of problem specific grids using the geometry information from the variational transfer operators.
Ahkami, M., A. Parmigiani, P.R. Di Palma, M.O. Saar, and X.-Z. Kong, A lattice-Boltzmann study of permeability-porosity relationships and mineral precipitation patterns in fractured porous media, Computational Geosciences, 2020. [Download PDF] [View Abstract]Mineral precipitation can drastically alter a reservoir's ability to transmit mass and energy during various engineering/natural subsurface processes, such as geothermal energy extraction and geological carbon dioxide sequestration. However, it is still challenging to explain the relationships among permeability, porosity, and precipitation patterns in reservoirs, particularly in fracture-dominated reservoirs. Here, we investigate the pore-scale behavior of single-species mineral precipitation reactions in a fractured porous medium, using a phase field lattice-Boltzmann method. Parallel to the main flow direction, the medium is divided into two halves, one with a low-permeability matrix and one with a high-permeability matrix. Each matrix contains one flow-through and one dead-end fracture. A wide range of species diffusivity and reaction rates is explored to cover regimes from advection- to diffusion-dominated, and from transport- to reaction-limited. By employing the ratio of the Damköhler (Da) and the Peclet (Pe) number, four distinct precipitation patterns can be identified, namely (1) no precipitation (Da/Pe < 1), (2) near-inlet clogging (Da/Pe > 100), (3) fracture isolation (1 < Da/Pe < 100 and Pe > 1), and (4) diffusive precipitation (1 < Da/Pe < 100 and Pe < 0.1). Using moment analyses, we discuss in detail the development of the species (i.e., reactant) concentration and mineral precipitation fields for various species transport regimes. Finally, we establish a general relationship among mineral precipitation pattern, porosity, and permeability. Our study provides insights into the feedback loop of fluid flow, species transport, mineral precipitation, pore space geometry changes, and permeability in fractured porous media.
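The four precipitation regimes listed above translate directly into a small classification rule. The sketch below only encodes the Da/Pe and Pe thresholds quoted in the abstract; the handling of the intermediate Peclet range (0.1 ≤ Pe ≤ 1) is an assumption.

```python
def precipitation_pattern(Da, Pe):
    """Classify the mineral precipitation regime from the Damkoehler (Da)
    and Peclet (Pe) numbers, using the thresholds quoted above."""
    ratio = Da / Pe
    if ratio < 1:
        return "no precipitation"
    if ratio > 100:
        return "near-inlet clogging"
    if Pe > 1:
        return "fracture isolation"
    if Pe < 0.1:
        return "diffusive precipitation"
    return "transitional (between the diffusive and fracture-isolation regimes)"
```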
Nejati, M., M.L.T. Dambly, and M.O. Saar, A methodology to determine the elastic properties of anisotropic rocks from a single uniaxial compression test, Journal of Rock Mechanics and Geotechnical Engineering, 11/6, pp. 1166-1183, 2019. [Download PDF] [View Abstract]This paper introduces a new methodology to measure the elastic constants of transversely isotropic rocks from a single uniaxial compression test. We first give the mathematical proof that a uniaxial compression test provides only four independent strain equations. As a result, the exact determination of all five independent elastic constants from only one test is not possible. An approximate determination of the Young's moduli and the Poisson's ratios is however practical and efficient when adding the Saint–Venant relation as the fifth equation. Explicit formulae are then developed to calculate both secant and tangent definitions of the five elastic constants from a minimum of four strain measurements. The results of this new methodology applied on three granitic samples demonstrate a significant stress-induced nonlinear behavior, where the tangent moduli increase by a factor of three to four when the rock is loaded up to 20 MPa. The static elastic constants obtained from the uniaxial compression test are also found to be significantly smaller than the dynamic ones obtained from the ultrasonic measurements.
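The fifth equation mentioned above is commonly taken to be the Saint-Venant approximation for transversely isotropic media; as a hedged illustration (the authors' exact formulation may differ), with $E$ and $E'$ the Young's moduli parallel and normal to the isotropy plane, $\nu'$ the out-of-plane Poisson's ratio, and $G'$ the out-of-plane shear modulus,

$$\frac{1}{G'} \approx \frac{1}{E} + \frac{1}{E'} + \frac{2\nu'}{E'},$$

which closes the system of four independent strain equations provided by a single uniaxial compression test.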
Ahkami, M., T. Roesgen, M.O. Saar, and X.-Z. Kong, High-resolution temporo-ensemble PIV to resolve pore-scale flow in 3D-printed fractured porous media, Transport in Porous Media, 129/2, pp. 467-483, 2019. [Download PDF] [View Abstract]Fractures are conduits that can enable fast advective transfer of (fluid, solute, reactant, particle, etc.) mass and energy. Such fast transfer can significantly affect pore-scale physico-chemical processes, which can in turn affect macroscopic mass and energy transport characteristics. Here, flooding experiments are conducted in a well-characterized fractured porous medium, manufactured by 3D printing. Given steady-state flow conditions, the micro-structure of the two-dimensional (2D) pore fluid flow field is delineated to resolve fluid velocities on the order of a sub-millimeter per second. We demonstrate the capabilities of a new temporo-ensemble Particle Image Velocimetry (PIV) method by maximizing its spatial resolution, employing in-line illumination. This method is advantageous as it is capable of minimizing the number of pixels, required for velocity determinations, down to one pixel, thereby enabling resolving high spatial resolutions of velocity vectors in a large field of view (FOV). While the main goal of this study is to introduce a novel experimental and velocimetry framework, this new method is then applied to specifically improve the understanding of fluid flow through fractured porous media. Histograms of measured velocities indicate log-normal and Gaussian-type distributions of longitudinal and lateral velocities in fractures, respectively. The magnitudes of fluid velocities in fractures and the flow interactions between fractures and matrices are shown to be influenced by the permeability of the background matrix and the orientation of the fractures.
von Planta, C., D. Vogler, X. Chen, M.G.C. Nestola, M.O. Saar, and R. Krause, Simulation of hydro-mechanically coupled processes in rough rock fractures using an immersed boundary method and variational transfer operators, Computational Geosciences, 23/5, pp. 1125-1140, 2019. [Download PDF] [View Abstract]Hydro-mechanical processes in rough fractures are highly non-linear and govern productivity and associated risks in a wide range of reservoir engineering problems. To enable high-resolution simulations of hydro-mechanical processes in fractures, we present an adaptation of an immersed boundary method to compute fluid flow between rough fracture surfaces. The solid domain is immersed into the fluid domain and both domains are coupled by means of variational volumetric transfer operators. The transfer operators implicitly resolve the boundary between the solid and the fluid, which simplifies the setup of fracture simulations with complex surfaces. It is possible to choose different formulations and discretization schemes for each subproblem and it is not necessary to remesh the fluid grid. We use benchmark problems and real fracture geometries to demonstrate the following capabilities of the presented approach: (1) resolving the boundary of the rough fracture surface in the fluid; (2) capturing fluid flow field changes in a fracture which closes under increasing normal load; and (3) simulating the opening of a fracture due to increased fluid pressure.
Ma, J., L. Querci, B. Hattendorf, M.O. Saar, and X.-Z. Kong, Toward a Spatiotemporal Understanding of Dolomite Dissolution in Sandstone by CO2-Enriched Brine Circulation, Environmental Science & Technology, 2019. [Download PDF] [View Abstract]In this study, we introduce a stochastic method to delineate the mineral effective surface area (ESA) evolution during a re-cycling reactive flow-through transport experiment on a sandstone under geologic reservoir conditions, with a focus on the dissolution of its dolomite cement, Ca1.05Mg0.75Fe0.2(CO3)2. CO2-enriched brine was circulated through this sandstone specimen for 137 cycles (~270 hours) to examine the evolution of in-situ hydraulic properties and CO2-enriched brine-dolomite geochemical reactions. The bulk permeability of the sandstone specimen decreased from 356 mD before the reaction to 139 mD after the reaction, while porosity increased from 21.9% to 23.2% due to a solid volume loss of 0.25 ml. Chemical analyses on experimental effluents during the first cycle yielded a dolomite reactivity of ~2.45 mmol m^-3 s^-1, a corresponding sample-averaged ESA of ~8.86×10^-4 m^2/g, and an ESA coefficient of 1.36×10^-2, indicating limited participation of the physically exposed mineral surface area. As the dissolution reaction progressed, the ESA is observed to first increase, then decrease. This change in ESA can be qualitatively reproduced employing SEM-image-based stochastic analyses on dolomite dissolution. These results provide a new approach to analyze and upscale the ESA during geochemical reactions, which are involved in a wide range of geo-engineering operations.
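One common way to back out an effective surface area (ESA) from effluent chemistry, given here only as a hedged sketch (the normalization and rate law actually used by the authors may differ): if $R_{\mathrm{dol}}$ is the measured dolomite dissolution rate and $k_{\mathrm{dol}}$ the far-from-equilibrium rate constant per unit surface area, then

$$A_{\mathrm{eff}} \approx \frac{R_{\mathrm{dol}}}{k_{\mathrm{dol}}}, \qquad \mathrm{ESA} = \frac{A_{\mathrm{eff}}}{m_{\mathrm{dol}}}, \qquad \text{ESA coefficient} = \frac{\mathrm{ESA}}{A_{\mathrm{specific}}},$$

where $m_{\mathrm{dol}}$ is the dolomite mass and $A_{\mathrm{specific}}$ a physically measured specific surface area; a coefficient well below one indicates limited participation of the exposed surface.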
Lima, M.M., D. Vogler, L. Querci, C. Madonna, B. Hattendorf, M.O. Saar, and X.-Z. Kong, Thermally driven fracture aperture variation in naturally fractured granites, Geothermal Energy Journal, 7/1, pp. 1-23, 2019. [Download PDF] [View Abstract]Temperature variations often trigger coupled thermal, hydrological, mechanical, and chemical (THMC) processes that can significantly alter the permeability/impedance of fracture-dominated deep geological reservoirs. It is thus necessary to quantitatively explore the associated phenomena during fracture opening and closure as a result of temperature change. In this work, we report near-field experimental results of the effect of temperature on the hydraulic properties of natural fractures under stressed conditions (effective normal stresses of 5-25 MPa). Two specimens of naturally fractured granodiorite cores from the Grimsel Test Site in Switzerland were subjected to flow-through experiments with a temperature variation of 25-140 °C to characterize the evolution of fracture aperture/permeability. The fracture surfaces of the studied specimens were morphologically characterized using photogrammetry scanning. Periodic measurements of the efflux of dissolved minerals yield the net removal mass, which is correlated to the observed rates of fracture closure. Changes measured in hydraulic aperture are significant, exhibiting reductions of 20-75 % over the heating/cooling cycles. Under higher confining stresses, the effects in fracture permeability are irreversible and notably time-dependent. Thermally driven fracture aperture variation was more pronounced in the specimen with the largest mean aperture width and spatial correlation length. Gradual fracture compaction is likely controlled by thermal dilation, mechanical grinding, and pressure dissolution due to increased thermal stresses exerted over the contacting asperities, as confirmed by the analyses of hydraulic properties and efflux mass.
Myre, J.M., I. Lascu, E.A. Lima, J.M. Feinberg, M.O. Saar, and B.P. Weiss, Using TNT-NN to Unlock the Fast Full Spatial Inversion of Large Magnetic Microscopy Datasets, Earth Planets and Space, 71/14, 2019. [Download PDF] [View Abstract]Modern magnetic microscopy (MM) provides high-resolution, ultra-high sensitivity moment magnetometry, with the ability to measure at spatial resolutions better than 10^−4 m and to detect magnetic moments weaker than 10^−15 Am^2. These characteristics make modern MM devices capable of particularly high resolution analysis of the magnetic properties of materials, but generate extremely large data sets. Many studies utilizing MM attempt to solve an inverse problem to determine the magnitude of the magnetic moments that produce the measured component of the magnetic field. Fast Fourier techniques in the frequency domain and non-negative least-squares (NNLS) methods in the spatial domain are the two most frequently used methods to solve this inverse problem. Although extremely fast, Fourier techniques can produce solutions that violate the non-negativity of moments constraint. Inversions in the spatial domain do not violate non-negativity constraints, but the execution times of standard NNLS solvers (the Lawson and Hanson method and Matlab's lsqlin) prohibit spatial domain inversions from operating at the full spatial resolution of an MM. In this paper we present the applicability of the TNT-NN algorithm, a newly developed NNLS active set method, as a means to directly address the NNLS routine hindering existing spatial domain inversion methods. The TNT-NN algorithm enhances the performance of spatial domain inversions by accelerating the core NNLS routine. Using a conventional computing system, we show that the TNT-NN algorithm produces solutions with residuals comparable to conventional methods while reducing execution time of spatial domain inversions from months to hours or less. Using isothermal remanent magnetization measurements of multiple synthetic and natural samples, we show that the capabilities of the TNT-NN algorithm allow scans with sizes that made them previously inaccessible to NNLS techniques to be inverted. Ultimately, the TNT-NN algorithm enables spatial domain inversions of MM data on an accelerated timescale that renders spatial domain analyses for modern MM studies practical. In particular, this new technique enables MM experiments that would have required an impractical amount of inversion time such as high-resolution stepwise magnetization and demagnetization and 3-dimensional inversions.
Schädle, P., P. Zulian, D. Vogler, S. Bhopalam R., M.G.C. Nestola, A. Ebigbo, R. Krause, and M.O. Saar, 3D non-conforming mesh model for flow in fractured porous media using Lagrange multipliers, Computers & Geosciences, 132, pp. 42-55, 2019. [Download PDF] [View Abstract]This work presents a modeling approach for single-phase flow in 3D fractured porous media with non-conforming meshes. To this end, a Lagrange multiplier method is combined with a parallel variational transfer approach. This Lagrange multiplier method enables the use of non-conforming meshes and depicts the variable coupling between fracture and matrix domain. The variational transfer allows general, accurate, and parallel projection of variables between non-conforming meshes (i.e. between fracture and matrix domain). Comparisons of simulations with 2D benchmarks show good agreement, and the applied finite element Lagrange multiplier spaces show good performance. The method is further evaluated on 3D fracture networks by comparing it to results from conforming mesh simulations which were used as a reference. Application to realistic fracture networks with hundreds of fractures is demonstrated. Mesh size and mesh convergence are investigated for benchmark cases and 3D fracture network applications. Results demonstrate that the Lagrange multiplier method, in combination with the variational transfer approach, is capable of modeling single-phase flow through realistic 3D fracture networks.
Dambly, M.L.T., M. Nejati, D. Vogler, and M.O. Saar, On the direct measurement of shear moduli in transversely isotropic rocks using the uniaxial compression test, International Journal of Rock Mechanics and Mining Sciences (IJRMMS), 113, pp. 220-240, 2019. [Download PDF] [View Abstract]This paper introduces a methodology for the direct determination of the shear moduli in transversely isotropic rocks, using a single test, where a cylindrical specimen is subjected to uniaxial compression. A method is also developed to determine the orientation of the isotropy plane as well as the dynamic elastic constants using ultrasonic measurements on a single cylindrical specimen. Explicit formulae are developed to calculate the shear moduli from strain gauge measurements at different polar angles. The calculation of shear moduli from these formulae requires no knowledge about Young's moduli or Poisson's ratios and depends only on the orientation of the isotropy plane. Several strain gauge setups are designed to obtain the shear moduli from different numbers and arrangements of strain gauges. We demonstrate that the shear moduli can be determined accurately and efficiently with only three strain gauge measurements. The orientation of the isotropy plane is measured with different methods, including ultrasonic measurements. The results show that the isotropy plane of the tested granitic samples slightly deviates from the foliation plane. However, the foliation plane still provides a good approximation of the orientation of the isotropy plane.
Ogland-Hand, J.D., J.M. Bielicki, Y. Wang, B.M. Adams, T.A. Buscheck, and M.O. Saar, The value of bulk energy storage for reducing CO2 emissions and water requirements from regional electricity systems., Energy Conversion and Management, 181, pp. 674-685, 2019. [Download PDF] [View Abstract]The implementation of bulk energy storage (BES) technologies can help to achieve higher penetration and utilization of variable renewable energy technologies (e.g., wind and solar), but it can also alter the dispatch order in regional electricity systems in other ways. These changes to the dispatch order affect the total amount of carbon dioxide (CO2) that is emitted to the atmosphere and the amount of total water that is required by the electricity generating facilities. In a case study of the Electricity Reliability Council of Texas system, we separately investigated the value that three BES technologies (CO2- Geothermal Bulk Energy Storage, Compressed Air Energy Storage, Pumped Hydro Energy Storage) could have for reducing system-wide CO2 emissions and water requirements. In addition to increasing the utilization of wind power capacity, the dispatch of BES also led to an increase in the utilization of natural gas power capacity and of coal power capacity, and a decrease in the utilization of nuclear power capacity, depending on the character of the net load, the CO2 price, the water price, and the BES technology. These changes to the dispatch order provided positive value (e.g., increase in natural gas generally reduced CO2 emissions; decrease in nuclear utilization always decreased water requirements) or negative value (e.g., increase in coal generally increased CO2 emissions; increase in natural gas sometimes increased water requirements) to the regional electricity system. We also found that these values to the system can be greater than the cost of operating the BES facility. At present, there are mechanisms to compensate BES facilities for ancillary grid services, and our results suggest that similar mechanisms could be enacted to compensate BES facilities for their contribution to the environmental sustainability of the system.
Kittilä, A., M.R. Jalali, K.F. Evans, M. Willmann, M.O. Saar, and X.-Z. Kong, Field Comparison of DNA-Labeled Nanoparticle and Solute Tracer Transport in a Fractured Crystalline Rock, Water Resources Research, 2019. [Download PDF]
Kong, X.-Z., C. Deuber, A. Kittilä, M. Somogyvari, G. Mikutis, P. Bayer, W.J. Stark, and M.O. Saar, Tomographic reservoir imaging with DNA-labeled silica nanotracers: The first field validation, Environmental Science &Technology, 52/23, pp. 13681-13689, 2018. [Download PDF] [View Abstract]This study presents the first field validation of using DNA-labeled silica nanoparticles as tracers to image subsurface reservoirs by travel time based tomography. During a field campaign in Switzerland, we performed short-pulse tracer tests under a forced hydraulic head gradient to conduct a multisource−multireceiver tracer test and tomographic inversion, determining the two-dimensional hydraulic conductivity field between two vertical wells. Together with three traditional solute dye tracers, we injected spherical silica nanotracers, encoded with synthetic DNA molecules, which are protected by a silica layer against damage due to chemicals, microorganisms, and enzymes. Temporal moment analyses of the recorded tracer concentration breakthrough curves (BTCs) indicate higher mass recovery, less mean residence time, and smaller dispersion of the DNA-labeled nanotracers, compared to solute dye tracers. Importantly, travel time based tomography, using nanotracer BTCs, yields a satisfactory hydraulic conductivity tomogram, validated by the dye tracer results and previous field investigations. These advantages of DNA-labeled nanotracers, in comparison to traditional solute dye tracers, make them well-suited for tomographic reservoir characterizations in fields such as hydrogeology, petroleum engineering, and geothermal energy, particularly with respect to resolving preferential flow paths or the heterogeneity of contact surfaces or by enabling source zone characterizations of dense nonaqueous phase liquids.
Amann, F., V. Gischig, K.F. Evans, et al., A. Kittilä, S. Wiemer, M.O. Saar, S. Löw, Th. Driesner, H. Maurer, and D. Giardini, The seismo-hydro-mechanical behaviour during deep geothermal reservoir stimulations: open questions tackled in a decameter-scale in-situ stimulation experiment, Solid Earth, 9, pp. 115-137, 2018. [Download PDF] [View Abstract]In this contribution, we present a review of scientific research results that address seismo-hydromechanically coupled processes relevant for the development of a sustainable heat exchanger in low-permeability crystalline rock and introduce the design of the In situ Stimulation and Circulation (ISC) experiment at the Grimsel Test Site dedicated to studying such processes under controlled conditions. The review shows that research on reservoir stimulation for deep geothermal energy exploitation has been largely based on laboratory observations, large-scale projects and numerical models. Observations of full-scale reservoir stimulations have yielded important results. However, the limited access to the reservoir and limitations in the control on the experimental conditions during deep reservoir stimulations are insufficient to resolve the details of the hydromechanical processes that would enhance process understanding in a way that aids future stimulation design. Small-scale laboratory experiments provide fundamental insights into various processes relevant for enhanced geothermal energy, but suffer from (1) difficulties and uncertainties in upscaling the results to the field scale and (2) relatively homogeneous material and stress conditions that lead to an oversimplistic fracture flow and/or hydraulic fracture propagation behavior that is not representative of a heterogeneous reservoir. Thus, there is a need for intermediate-scale hydraulic stimulation experiments with high experimental control that bridge the various scales and for which access to the target rock mass with a comprehensive monitoring system is possible. The ISC experiment is designed to address open research questions in a naturally fractured and faulted crystalline rock mass at the Grimsel Test Site (Switzerland). Two hydraulic injection phases were executed to enhance the permeability of the rock mass. During the injection phases the rock mass deformation across fractures and within intact rock, the pore pressure distribution and propagation, and the microseismic response were monitored at a high spatial and temporal resolution.
Mikutis, G., C.A. Deuber, L. Schmid, A. Kittilä, N. Lobsiger, M. Puddu, D.O. Asgeirsson, R.N. Grass, M.O. Saar, and W.J. Stark, Silica-encapsulated DNA-based tracers for aquifer characterization, Environmental Science & Technology, 52, pp. 12142-12152, 2018. [Download PDF] [View Abstract]Environmental tracing is a direct way to characterize aquifers, evaluate the solute transfer parameter in underground reservoirs, and track contamination. By performing multitracer tests, and translating the tracer breakthrough times into tomographic maps, key parameters such as a reservoir's effective porosity and permeability field may be obtained. DNA, with its modular design, allows the generation of a virtually unlimited number of distinguishable tracers. To overcome the insufficient DNA stability due to microbial activity, heat, and chemical stress, we present a method to encapsulate DNA into silica with control over the particle size. The reliability of DNA quantification is improved by the sample preservation with NaN3 and particle redispersion strategies. In both sand column and unconsolidated aquifer experiments, DNA-based particle tracers exhibited slightly earlier and sharper breakthrough than the traditional solute tracer uranine. The reason behind this observation is the size exclusion effect, whereby larger tracer particles are excluded from small pores, and are therefore transported with higher average velocity, which is pore size-dependent. Identical surface properties, and thus flow behavior, make the new material an attractive tracer to characterize sandy groundwater reservoirs or to track multiple sources of contaminants with high spatial resolution.
Hobé, A., D. Vogler, M.P. Seybold, A. Ebigbo, R.R. Settgast, and M.O. Saar, Estimating Fluid Flow Rates through Fracture Networks using Combinatorial Optimization, Advances in Water Resources, 122, pp. 85-97, 2018. [Download PDF] [View Abstract]To enable fast uncertainty quantification of fluid flow in a discrete fracture network (DFN), we present two approaches to quickly compute fluid flow in DFNs using combinatorial optimization algorithms. Specifically, the presented Hanan Shortest Path Maxflow (HSPM) and Intersection Shortest Path Maxflow (ISPM) methods translate DFN geometries and properties to a graph on which a max flow algorithm computes a combinatorial flow, from which an overall fluid flow rate is estimated using a shortest path decomposition of this flow. The two approaches are assessed by comparing their predictions with results from explicit numerical simulations of simple test cases as well as stochastic DFN realizations covering a range of fracture densities. Both methods have a high accuracy and very low computational cost, which can facilitate much-needed in-depth analyses of the propagation of uncertainty in fracture and fracture-network properties to fluid flow rates.
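A toy version of the graph abstraction behind the HSPM/ISPM estimates above, given as a hedged sketch: the node names, edge capacities, and the use of networkx are illustrative assumptions, and the shortest-path decomposition that converts the combinatorial flow into a flow-rate estimate is omitted.

```python
import networkx as nx

# Nodes stand for fracture intersections plus inlet and outlet boundaries;
# edge capacities stand in for per-fracture transmissivities (illustrative values).
G = nx.DiGraph()
for u, v, cap in [("inlet", "a", 3.0), ("inlet", "b", 1.5),
                  ("a", "b", 1.0), ("a", "outlet", 2.0), ("b", "outlet", 2.5)]:
    G.add_edge(u, v, capacity=cap)

# Combinatorial max flow through the fracture-network graph.
flow_value, flow_dict = nx.maximum_flow(G, "inlet", "outlet")
print(flow_value)  # would subsequently be decomposed along shortest paths
```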
Rossi, E., M.A. Kant, C. Madonna, M.O. Saar, and Ph. Rudolf von Rohr, The Effects of High Heating Rate and High Temperature on the Rock Strength: Feasibility Study of a Thermally Assisted Drilling Method, Rock Mechanics and Rock Engineering, 51/9, pp. 2957-2964, 2018. [Download PDF] [View Abstract]In this paper, the feasibility of a thermally assisted drilling method is investigated. The working principle of this method is based on the weakening effect of a flame-jet to enhance the drilling performance of conventional, mechanical drilling. To investigate its effectiveness, we study rock weakening after rapid, localized flame-jet heating of Rorschach sandstone and Central Aare granite. We perform experiments on rock strength after flame treatments in comparison to oven heating, for temperatures up to 650 °C and heating rates from 0.17 to 20 °C/s. The material hardening, commonly observed at moderate temperatures after oven treatments, can be suppressed by flame heating the material at high heating rates. Our study highlights the influence of the heating rate on the mechanism of thermal microcracking. High-heating-rate flame treatments appear to mostly induce cracks at the grain boundaries, as opposed to slow oven treatments, where a considerable number of intragranular cracks are also found. We thus postulate that at low heating rates, thermal expansion stresses cause the observed thermal cracking. In contrast, at higher heating rates, thermal cracking is dominated by the stress concentrations caused by high thermal gradients.
Samrock, F., A.V. Grayver, H. Eysteinsson, and M.O. Saar, Magnetotelluric image of transcrustal magmatic system beneath the Tulu Moye geothermal prospect in the Ethiopian Rift, Geophysical Research Letters, 2018. [Download PDF] [View Abstract]Continental rifting is initiated by a dynamic interplay between tectonic stretching and mantle upwelling. Decompression melting assists continental break-up through lithospheric weakening and enforces upflow of melt to the Earth's surface. However, the details about melt transport through the brittle crust and storage under narrow rift-aligned magmatic segments remain largely unclear. Here we present a crustal scale electrical conductivity model for a magmatic segment in the Ethiopian Rift, derived from 3-D phase tensor inversion of magnetotelluric data. Our subsurface model shows that melt migrates along pre-existing weak structures and is stored in different concentrations on two major interconnected levels, facilitating the formation of a convective hydrothermal system. The obtained model of a transcrustal magmatic system offers new insights into rifting mechanisms, evolution of magma ascent, and prospective geothermal reservoirs.
Kant, M.A., E. Rossi, J. Duss, F. Amman, M.O. Saar, and P. Rudolf von Rohr, Demonstration of thermal borehole enlargement to facilitate controlled reservoir engineering for deep geothermal, oil or gas systems, Applied Energy, 212, pp. 1501-1509, 2018. [Download PDF] [View Abstract]The creation of deep reservoirs for geothermal energy or oil and gas extraction is impeded by insufficient stimulation. Direction and extension of the created fractures are complex to control and, therefore, large stimulated and interconnected fracture networks are difficult to create. This poses an inherent risk of uneconomic reservoirs, due to insufficient heat-sweep surfaces or hydraulic shortcuts. Therefore, we present a new technique, which locally increases the cross section of a borehole by utilizing a thermal spallation process on the sidewalls of the borehole. By controlled and local enlargement of the well bore diameter, initial fracture sources are created, potentially reducing the injection pressure during stimulation, initiating fracture growth, optimizing fracture propagation and increasing the number of accessible preexisting fractures. Consequently, local thermal borehole enlargement reduces project failure risks by providing better control on subsequent stimulation processes. In order to show the applicability of the suggested technique, we conducted a shallow field test in an underground rock laboratory. Two types of borehole enlargements were created in a 14.5 m deep borehole, confirming that the technology is applicable, with implications for improving the productivity of geothermal, oil and gas reservoirs.
Walsh, S.D.C., N. Garapati, A.M.M. Leal, and M.O. Saar, Calculating thermophysical fluid properties during geothermal energy production with NESS and Reaktoro, Geothermics, 70, pp. 146-154, 2017. [Download PDF] [View Abstract]We investigate how subsurface fluids of different compositions affect the electricity generation of geothermal power plants. First, we outline a numerical model capable of accounting for the thermophysical properties of geothermal fluids of arbitrary composition within simulations of geothermal power production. The behavior of brines with varying compositions from geothermal sites around the globe are then examined using the model. The effect of each brine on an idealized binary geothermal power plant is simulated, and their performances compared by calculating the amount of heat exchanged from the fluid to the plant's secondary cycle. Our simulations combine (1) a newly developed Non-linear Equation System Solver (NESS), for simulating individual geothermal power plant components, (2) the advanced geochemical speciation solver, Reaktoro, used for calculation of thermodynamic fluid properties, and (3) compositional models for the calculation of fluid-dynamical properties (e.g., viscosity as a function of temperature and brine composition). The accuracy of the model is verified by comparing its predictions with experimental data from single-salt, binary-salt, and multiple-salt solutions. The geothermal power plant simulations show that the brines considered in this study can be divided into three main categories: (1) those of largely meteoric origin with low salinity for which the effect of salt concentration is negligible; (2) moderate-depth brines with high concentrations of Na+ and K+ ions, whose performance is well approximated by pure NaCl solutions of equivalent salinity; and (3) deeper, high-salinity brines that require a more detailed consideration of their composition for accurate simulation of plant operations.
Leal, A.M.M., D.A. Kulik, W.R. Smith, and M.O. Saar, An overview of computational methods for chemical equilibrium and kinetic calculations for geochemical and reactive transport modeling, Pure and Applied Chemistry, 89/5, pp. 597-643, 2017. [Download PDF] [View Abstract]We present an overview of novel numerical methods for chemical equilibrium and kinetic calculations for complex non-ideal multiphase systems. The methods we present for equilibrium calculations are based either on Gibbs energy minimization (GEM) calculations or on solving the system of extended law of mass-action (xLMA) equations. In both methods, no a posteriori phase stability tests, and thus no tentative addition or removal of phases during or at the end of the calculations, are necessary. All potentially stable phases are considered from the beginning of the calculation, and stability indices are immediately available at the end of the computation to determine which phases are actually stable at equilibrium. Both GEM and xLMA equilibrium methods are tailored for computationally demanding applications that require many rapid local equilibrium calculations, such as reactive transport modeling. The numerical method for chemical kinetic calculations we present supports both closed and open systems, and it considers a partial equilibrium simplification for fast reactions. The method employs an implicit integration scheme that improves stability and speed when solving the often stiff differential equations in kinetic calculations. As such, it requires compositional derivatives of the reaction rates to assemble the Jacobian matrix of the resultant implicit algebraic equations that are solved at every time step. We present a detailed procedure to calculate these derivatives, and we show how the partial equilibrium assumption affects their computation. These numerical methods have been implemented in Reaktoro (reaktoro.org), an open-source software for modeling chemically reactive systems. We finish with a discussion on the comparison of these methods with others in the literature.
Myre, J.M., E. Frahm, D.J. Lilja, and M.O. Saar, TNT-NN: A Fast Active Set Method for Solving Large Non-Negative Least Squares Problems, Procedia Computer Science, 108C, pp. 755-764, 2017. [Download PDF] [View Abstract]In 1974 Lawson and Hanson produced a seminal active set strategy to solve least-squares problems with non-negativity constraints that remains popular today. In this paper we present TNT-NN, a new active set method for solving non-negative least squares (NNLS) problems. TNT-NN uses a different strategy not only for the construction of the active set but also for the solution of the unconstrained least squares sub-problem. This results in dramatically improved performance over traditional active set NNLS solvers, including the Lawson and Hanson NNLS algorithm and the Fast NNLS (FNNLS) algorithm, allowing for computational investigations of new types of scientific and engineering problems. For the small systems tested (5000 × 5000 or smaller), it is shown that TNT-NN is up to 95× faster than FNNLS. Recent studies in rock magnetism have revealed a need for fast NNLS algorithms to address large problems (on the order of 10^5 × 10^5 or larger). We apply the TNT-NN algorithm to a representative rock magnetism inversion problem where it is 60× faster than FNNLS. We also show that TNT-NN is capable of solving large (45000 × 45000) problems more than 150× faster than FNNLS. These large test problems were previously considered to be unsolvable, due to the excessive execution time required by traditional methods.
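For orientation, the problem class that TNT-NN accelerates can be stated with SciPy's reference (Lawson-Hanson-type) solver; TNT-NN itself is not part of SciPy, so this is only a hedged illustration of the NNLS problem on synthetic data.

```python
import numpy as np
from scipy.optimize import nnls  # Lawson-Hanson-type reference NNLS solver

# Synthetic non-negative least-squares problem: min ||A x - b||_2 subject to x >= 0.
rng = np.random.default_rng(0)
A = rng.random((200, 50))
x_true = np.maximum(rng.normal(size=50), 0.0)  # non-negative ground truth
b = A @ x_true
x_fit, residual_norm = nnls(A, b)
print(residual_norm)  # near zero for this noise-free example
```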
Luhmann, A.J., B.M. Tutolo, C. Tan, B.M. Moskowitz, M.O. Saar, and W.E. Seyfried, Jr., Whole rock basalt alteration from CO2-rich brine during flow-through experiments at 150°C and 150 bar, Chemical Geology, 453, pp. 92-110, 2017. [Download PDF] [View Abstract]Four flow-through experiments at 150 °C were conducted on intact cores of basalt to assess alteration and mass transfer during reaction with CO2-rich fluid. Two experiments used a flow rate of 0.1 ml/min, and two used a flow rate of 0.01 ml/min. Permeability increased for both experiments at the higher flow rate, but decreased for the lower flow rate experiments. The experimental fluid (initial pH of 3.3) became enriched in Si, Mg, and Fe upon passing through the cores, primarily from olivine and titanomagnetite dissolution and possibly pyroxene dissolution. Secondary minerals enriched in Al and Si were present on post-experimental cores, and an Fe2O3-rich phase was identified on the downstream ends of the cores from the experiments at the lower flow rate. While we could not specifically identify if siderite (FeCO3) was present in the post-experimental basalt cores, siderite was generally saturated or supersaturated in outlet fluid samples, suggesting a thermodynamic drive for Fe carbonation from basalt-H2O-CO2 reaction. Reaction path models that employ dissolution kinetics of olivine, labradorite, and enstatite also suggest siderite formation at low pH. Furthermore, fluid-rock interaction caused a relatively high mobility of the alkali metals; up to 29% and 99% of the K and Cs present in the core, respectively, were preferentially dissolved from the cores, likely due to fractional crystallization effects that made alkali metals highly accessible. Together, these datasets illustrate changes in chemical parameters that arise due to fluid-basalt interaction in relatively low pH environments with elevated CO2.
Luhmann, A.J., B.M. Tutolo, B.C. Bagley, D.F.R. Mildner, W.E. Seyfried Jr., and M.O. Saar, Permeability, porosity, and mineral surface area changes in basalt cores induced by reactive transport of CO2-rich brine, Water Resources Research, 53, pp. 1-20, 2017. [Download PDF] [View Abstract]Four reactive flow-through laboratory experiments (two each at 0.1 mL/min and 0.01 mL/min flow rates) at 150°C and 150 bar (15 MPa) are conducted on intact basalt cores to assess changes in porosity, permeability, and surface area caused by CO2-rich fluid-rock interaction. Permeability decreases slightly during the lower flow rate experiments and increases during the higher flow rate experiments. At the higher flow rate, core permeability increases by more than one order of magnitude in one experiment and less than a factor of two in the other due to differences in preexisting flow path structure. X-ray computed tomography (XRCT) scans of pre- and post-experiment cores identify both mineral dissolution and secondary mineralization, with a net decrease in XRCT porosity of ∼0.7%–0.8% for the larger pores in all four cores. (Ultra) small-angle neutron scattering ((U)SANS) data sets indicate an increase in both (U)SANS porosity and specific surface area (SSA) over the ∼1 nm to 10 µm scale range in post-experiment basalt samples, with differences due to flow rate and reaction time. Net porosity increases from summing porosity changes from XRCT and (U)SANS analyses are consistent with core mass decreases. (U)SANS data suggest an overall preservation of the pore structure with no change in mineral surface roughness from reaction, and the pore structure is unique in comparison to previously published basalt analyses. Together, these data sets illustrate changes in physical parameters that arise due to fluid-basalt interaction in relatively low pH environments with elevated CO2 concentration, with significant implications for flow, transport, and reaction through geologic formations.
Buscheck, T.A., J.M. Bielicki, T.A. Edmunds, Y. Hao, Y. Sun, J.B. Randolph, and M.O. Saar, Multifluid geo-energy systems: Using geologic CO2 storage for geothermal energy production and grid-scale energy storage in sedimentary basins, Geosphere, 12/3, pp. 1-19, 2016. [Download PDF] [View Abstract]We present an approach that uses the huge fluid and thermal storage capacity of the subsurface, together with geologic carbon dioxide (CO2) storage, to harvest, store, and dispatch energy from subsurface (geothermal) and surface (solar, nuclear, fossil) thermal resources, as well as excess energy on electric grids. Captured CO2 is injected into saline aquifers to store pressure, generate artesian flow of brine, and provide a supplemental working fluid for efficient heat extraction and power conversion. Concentric rings of injection and production wells create a hydraulic mound to store pressure, CO2, and thermal energy. This energy storage can take excess power from the grid and excess and/or waste thermal energy and dispatch that energy when it is demanded, and thus enable higher penetration of variable renewable energy technologies (e.g., wind and solar). CO2 stored in the subsurface functions as a cushion gas to provide enormous pressure storage capacity and displace large quantities of brine, some of which can be treated for a variety of beneficial uses. Geothermal power and energy-storage applications may generate enough revenues to compensate for CO2 capture costs. While our approach can use nitrogen (N2), in addition to CO2, as a supplemental fluid, and store thermal energy, this study focuses on using CO2 for geothermal energy production and grid-scale energy storage. We conduct a techno-economic assessment to determine the levelized cost of electricity using this approach to generate geothermal power. We present a reservoir pressure management strategy that diverts a small portion of the produced brine for beneficial consumptive use to reduce the pumping cost of fluid recirculation, while reducing the risk of seismicity, caprock fracture, and CO2 leakage.
Leal, A.M.M., D.A. Kulik, and M.O. Saar, Enabling Gibbs energy minimization algorithms to use equilibrium constants of reactions in multiphase equilibrium calculations, Chemical Geology, 437, pp. 170-181, 2016. [Download PDF] [View Abstract]The geochemical literature provides numerous thermodynamic databases compiled from equilibrium constants of reactions. These databases are typically used in speciation calculations based on the law of mass action (LMA) approach. Unfortunately, such LMA databases cannot be directly used in equilibrium speciation methods based on the Gibbs energy minimization (GEM) approach because of their lack of standard chemical potentials of species. Therefore, we present in this work a simple conversion approach that calculates apparent standard chemical potentials of species from equilibrium constants of reactions. We assess the consistency and accuracy of the use of apparent standard chemical potentials in GEM algorithms by benchmarking equilibrium speciation calculations using GEM and LMA methods with the same LMA database. In all cases, we use PHREEQC to perform the LMA calculations, and we use its LMA databases to calculate the equilibrium constants of reactions. GEM calculations are performed using a Gibbs energy minimization method of Reaktoro — a unified open-source framework for numerical modeling of chemically reactive systems. By comparing the GEM and LMA results, we show that the use of apparent standard chemical potentials in GEM methods produces consistent and accurate equilibrium speciation results, thus validating our new, practical conversion technique that enables GEM algorithms to take advantage of many existing LMA databases, consequently extending and diversifying their range of applicability.
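The conversion idea described above can be sketched as follows (a hedged outline; the reference-species convention and any activity corrections used in the paper may differ): for a reaction with stoichiometric coefficients $\nu_i$ and equilibrium constant $K(T,P)$, the identity $\sum_i \nu_i \mu_i^{\circ} = -RT\ln K$ lets one assign an apparent standard chemical potential to the one species $j$ whose value is unknown,

$$\mu_j^{\circ,\mathrm{app}} = \frac{1}{\nu_j}\left(-RT\ln K - \sum_{i\neq j}\nu_i\,\mu_i^{\circ}\right),$$

which is what allows a GEM solver to consume an LMA database built from equilibrium constants of reactions.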
Tutolo, B.M., D.F. Mildner, C.V. Gagnon, M.O. Saar, and W.E. Seyfried, Nanoscale constraints on porosity generation and fluid flow during serpentinization, Geology, 44/2, pp. 103-106, 2016. [Download PDF] [View Abstract]Field samples of olivine-rich rocks are nearly always serpentinized—commonly to completion—but, paradoxically, their intrinsic porosity and permeability are diminishingly low. Serpentinization reactions occur through a coupled process of fluid infiltration, volumetric expansion, and reaction-driven fracturing. Pores and reactive surface area generated during this process are the primary pathways for fluid infiltration into and reaction with serpentinizing rocks, but the size and distribution of these pores and surface area have not yet been described. Here, we utilize neutron scattering techniques to present the first measurements of the evolution of pore size and specific surface area distribution in partially serpentinized rocks. Samples were obtained from the ca. 2 Ma Atlantis Massif oceanic core complex located off-axis of the Mid-Atlantic Ridge and an olivine-rich outcrop of the ca. 1.1 Ga Duluth Complex of the North American Mid-Continent Rift. Our measurements and analyses demonstrate that serpentine and accessory phases form with their own, inherent porosity, which accommodates the bulk of diffusive fluid flow during serpentinization and thereby permits continued serpentinization after voluminous serpentine minerals fill reaction-generated porosity.
Leal, A.M.M., D. Kulik, G. Kosakowski, and M.O. Saar, Computational methods for reactive transport modeling: An extended law of mass-action, xLMA, method for multiphase equilibrium calculations, Advances in Water Resources, 96, pp. 405-422, 2016. [Download PDF] [View Abstract]We present a numerical method for multiphase chemical equilibrium calculations based on a Gibbs energy minimization approach. The method can accurately and efficiently determine the stable phase assemblage at equilibrium independently of the type of phases and species that constitute the chemical system. We have successfully applied our chemical equilibrium algorithm in reactive transport simulations to demonstrate its effective use in computationally intensive applications. We used FEniCS to solve the governing partial differential equations of mass transport in porous media using finite element methods in unstructured meshes. Our equilibrium calculations were benchmarked with GEMS3K, the numerical kernel of the geochemical package GEMS. This allowed us to compare our results with a well-established Gibbs energy minimization algorithm, as well as their performance on every mesh node, at every time step of the transport simulation. The benchmark shows that our novel chemical equilibrium algorithm is accurate, robust, and efficient for reactive transport applications, and it is an improvement over the Gibbs energy minimization algorithm used in GEMS3K. The proposed chemical equilibrium method has been implemented in Reaktoro, a unified framework for modeling chemically reactive systems, which is now used as an alternative numerical kernel of GEMS.
Luhmann, A.J., M. Covington, J. Myre, M. Perne, S.W. Jones, C.E. Alexander Jr., and M.O. Saar, Thermal damping and retardation in karst conduits, Hydrology and Earth System Sciences, 19/1, pp. 137-157, 2015. [Download PDF] [View Abstract]Water temperature is a non-conservative tracer in the environment. Variations in recharge temperature are damped and retarded as water moves through an aquifer due to heat exchange between water and rock. However, within karst aquifers, seasonal and short-term fluctuations in recharge temperature are often transmitted over long distances before they are fully damped. Using analytical solutions and numerical simulations, we develop relationships that describe the effect of flow path properties, flow-through time, recharge characteristics, and water and rock physical properties on the damping and retardation of thermal peaks/troughs in karst conduits. Using these relationships, one can estimate the thermal retardation and damping that would occur under given conditions with a given conduit geometry. Ultimately, these relationships can be used with thermal damping and retardation field data to estimate parameters such as conduit diameter. We also examine sets of numerical simulations where we relax some of the assumptions used to develop these relationships, testing the effects of variable diameter, variable velocity, open channels, and recharge shape on thermal damping and retardation to provide some constraints on uncertainty. Finally, we discuss a multitracer experiment that provides some field confirmation of our relationships. High temporal resolution water temperature data are required to obtain sufficient constraints on the magnitude and timing of thermal peaks and troughs in order to take full advantage of water temperature as a tracer.
Adams, B.M., T.H. Kuehn, J.M. Bielicki, J.B. Randolph, and M.O. Saar, A comparison of electric power output of CO2 Plume Geothermal (CPG) and brine geothermal systems for varying reservoir conditions, Applied Energy, 140, pp. 365-377, 2015. [Download PDF] [View Abstract]In contrast to conventional hydrothermal systems or enhanced geothermal systems, CO2 Plume Geothermal (CPG) systems generate electricity by using CO2 that has been geothermally heated due to sequestration in a sedimentary basin. Four CPG and two brine-based geothermal systems are modeled to estimate their power production for sedimentary basin reservoir depths between 1 and 5 km, geothermal temperature gradients from 20 to 50 °C km^-1, reservoir permeabilities from 1×10^-15 to 1×10^-12 m^2 and well casing inner diameters from 0.14 m to 0.41 m. Results show that CPG direct-type systems produce more electricity than brine-based geothermal systems at depths between 2 and 3 km, and at permeabilities between 10^-14 and 10^-13 m^2, often by a factor of two. This better performance of CPG is due to the low kinematic viscosity of CO2, relative to brine at those depths, and the strong thermosiphon effect generated by CO2. When CO2 is used instead of R245fa as the secondary working fluid in an organic Rankine cycle (ORC), the power production of both the CPG and the brine-reservoir system increases substantially; for example, by 22% and 20% for subsurface brine and CO2 systems, respectively, with a 35 °C km^-1 thermal gradient, 0.27 m production and 0.41 m injection well diameters, and 5×10^-14 m^2 reservoir permeability.
Tutolo, B.M., A.T. Schaen, M.O. Saar, and W.E. Seyfried Jr., Implications of the redissociation phenomenon for mineral-buffered fluids and aqueous species transport at elevated temperatures and pressures, Applied Geochemistry, 55, pp. 119-127, 2015. [Download PDF] [View Abstract]Aqueous species equilibrium constants and activity models form the foundation of the complex speciation codes used to model the geochemistry of geothermal energy production, extremophilic ecosystems, ore deposition, and a variety of other processes. Researchers have shown that a simple three species model (i.e., Na+, Cl-, and NaCl(aq)) can accurately describe conductivity measurements of concentrated NaCl and KCl solutions at elevated temperatures and pressures (Sharygin et al., 2002). In this model, activity coefficients of the charged species (e.g., Na+, K+, Cl-) become sufficiently low that the complexes must redissociate with increasing salt concentration in order to meet equilibrium constant constraints. Redissociation decreases the proportion of the elements bound up as neutral complexes, and thereby increases the true ionic strength of the solution. In this contribution, we explore the consequences of the redissociation phenomenon in albite–paragonite–quartz (APQ) buffered systems. We focus on the implications of the redissociation phenomenon for mineral solubilities, particularly the observation that, at certain temperatures and pressures, calculated activities of charged ions in solution remain practically constant even as element concentrations increase from <1 molal to 4.5 molal. Finally, we note that redissociation has a similar effect on pH, and therefore aqueous speciation, in APQ-hosted systems. The calculations and discussion presented here are not limited to APQ-hosted systems, but additionally apply to many others in which the dominant cations and anions can form neutral complexes.
Garapati, N., J.B. Randolph, and M.O. Saar, Brine displacement by CO2, energy extraction rates, and lifespan of a CO2-limited CO2-Plume Geothermal (CPG) system with a horizontal production well, Geothermics, 55, pp. 182-194, 2015. [Download PDF] [View Abstract]Several studies suggest that CO2-based geothermal energy systems may be operated economically when added to ongoing geologic CO2 sequestration. Alternatively, we demonstrate here that CO2-Plume Geothermal (CPG) systems may be operated long-term with a finite amount of CO2. We analyze the performance of such CO2-limited CPG systems as a function of various geologic and operational parameters. We find that the amount of CO2 required increases with reservoir depth, permeability, and well spacing and decreases with larger geothermal gradients. Furthermore, the onset of reservoir heat depletion decreases for increasing geothermal gradients and for both particularly shallow and deep reservoirs.
Tutolo, B.M., A.J. Luhmann, X.-Z. Kong, M.O. Saar, and W.E. Seyfried Jr., CO2 sequestration in feldspar-rich sandstone: Coupled evolution of fluid chemistry, mineral reaction rates, and hydrogeochemical properties, Geochimica et Cosmochimica Acta, 160, pp. 132-154, 2015. [Download PDF] [View Abstract]To investigate CO2 Capture, Utilization, and Storage (CCUS) in sandstones, we performed three 150 °C flow-through experiments on K-feldspar-rich cores from the Eau Claire formation. By characterizing fluid and solid samples from these experiments using a suite of analytical techniques, we explored the coupled evolution of fluid chemistry, mineral reaction rates, and hydrogeochemical properties during CO2 sequestration in feldspar-rich sandstone. Overall, our results confirm predictions that the heightened acidity resulting from supercritical CO2 injection into feldspar-rich sandstone will dissolve primary feldspars and precipitate secondary aluminum minerals. A core through which CO2-rich deionized water was recycled for 52 days decreased in bulk permeability, exhibited generally low porosity associated with high surface area in post-experiment core sub-samples, and produced an Al hydroxide secondary mineral, such as boehmite. However, two samples subjected to ~3 day single-pass experiments run with CO2-rich, 0.94 mol/kg NaCl brines decreased in bulk permeability, showed generally elevated porosity associated with elevated surface area in post-experiment core sub-samples, and produced a phase with kaolinite-like stoichiometry. CO2-induced metal mobilization during the experiments was relatively minor and likely related to Ca mineral dissolution. Based on the relatively rapid approach to equilibrium, the relatively slow near-equilibrium reaction rates, and the minor magnitudes of permeability changes in these experiments, we conclude that CCUS systems with projected lifetimes of several decades are geochemically feasible in the feldspar-rich sandstone end-member examined here. Additionally, the observation that K-feldspar dissolution rates calculated from our whole-rock experiments are in good agreement with literature parameterizations suggests that the latter can be utilized to model CCUS in K-feldspar-rich sandstone. Finally, by performing a number of reactive transport modeling experiments to explore processes occurring during the flow-through experiments, we have found that the overall progress of feldspar hydrolysis is negligibly affected by quartz dissolution, but significantly impacted by the rates of secondary mineral precipitation and their effect on feldspar saturation state. The observations produced here are critical to the development of models of CCUS operations, yet more work, particularly in the quantification of coupled dissolution and precipitation processes, will be required in order to produce models that can accurately predict the behavior of these systems.
Tutolo, B.M., X.-Z. Kong, W.E. Seyfried Jr., and M.O. Saar, High performance reactive transport simulations examining the effects of thermal, hydraulic, and chemical (THC) gradients on fluid injectivity at carbonate CCUS reservoir scales, International Journal of Greenhouse Gas Control, 39, pp. 285-301, 2015. [Download PDF] [View Abstract]Carbonate minerals and CO2 are both considerably more soluble at low temperatures than they are at elevated temperatures. This inverse solubility has led a number of researchers to hypothesize that injecting low-temperature (i.e., less than the background reservoir temperature) CO2 into deep, saline reservoirs for CO2 Capture, Utilization, and Storage (CCUS) will dissolve CO2 and carbonate minerals near the injection well and subsequently exsolve and re-precipitate these phases as the fluids flow into the geothermally warm portion of the reservoir. In this study, we utilize high performance computing to examine the coupled effects of cool CO2 injection and background hydraulic head gradients on reservoir-scale mineral volume changes. We employ the fully coupled reactive transport simulator PFLOTRAN with calculations distributed over up to 800 processors to test 21 scenarios designed to represent a range of reservoir depths, hydraulic head gradients, and CO2 injection rates and temperatures. In the default simulations, 50 °C CO2 is injected at a rate of 50 kg/s into a 200 bar, 100 °C calcite or dolomite reservoir. By comparing these simulations with others run at varying conditions, we show that the effect of cool CO2 injection on reservoir-scale mineral volume changes tends to be relatively minor. We conclude that the low heat capacity of CO2 effectively prevents low-temperature CO2 injection from decreasing the temperature across large portions of the simulated carbonate reservoirs. This small thermal perturbation, combined with the low relative permeability of brine within the supercritical CO2 plume, yields limited dissolution and precipitation effects directly attributable to cool CO2 injection. Finally, we calculate that relatively high water-to-rock ratios, which may occur over much longer CCUS reservoir lifetimes or in materials with sufficiently high brine relative permeability within the supercritical CO2 plume, would be required to substantially affect injectivity through thermally-induced mineral dissolution and precipitation. Importantly, this study shows the utility of reservoir scale-reactive transport simulators for testing hypotheses and placing laboratory-scale observations into a CCUS reservoir-scale context.
Buscheck, T.A., J.M. Bielicki, M. Chen, Y. Sun, Y. Hao, T.A. Edmunds, M.O. Saar, and J.B. Randolph, Integrating CO2 Storage with Geothermal Resources for Dispatchable Renewable Electricity, Energy procedia, 63, pp. 7619-7630, 2014. [Download PDF] [View Abstract]We present an approach that uses the huge fluid and thermal storage capacity of the subsurface, together with geologic CO2 storage, to harvest, store, and dispatch energy from subsurface (geothermal) and surface (solar, nuclear, fossil) thermal resources, as well as energy from electrical grids. Captured CO2 is injected into saline aquifers to store pressure, generate artesian flow of brine, and provide an additional working fluid for efficient heat extraction and power conversion. Concentric rings of injection and production wells are used to create a hydraulic divide to store pressure, CO2, and thermal energy. Such storage can take excess power from the grid and excess/waste thermal energy, and dispatch that energy when it is demanded, enabling increased penetration of variable renewables. Stored CO2 functions as a cushion gas to provide enormous pressure-storage capacity and displaces large quantities of brine, which can be desalinated and/or treated for a variety of beneficial uses. Geothermal power and energy-storage applications may generate enough revenues to justify CO2 capture costs.
Garapati, N., J.B. Randolph, J.L. Valencia Jr., and M.O. Saar, CO2-Plume Geothermal (CPG) Heat Extraction in Multi-layered Geologic Reservoirs, Energy Procedia, 63, pp. 7631-7643, 2014. [Download PDF] [View Abstract]CO2-Plume Geothermal (CPG) technology involves injecting CO2 into natural, highly permeable geologic units to extract energy. The subsurface CO2 absorbs heat from the reservoir, buoyantly rises to the surface, and drives a power generation system. The CO2 is then cooled and reinjected underground. Here, we analyze the effects of multi-layered geologic reservoirs on CPG system performance by examining the CO2 mass fraction in the produced fluid, pore-fluid pressure buildup during operation, and heat energy extraction rates. The produced CO2 mass fraction depends on the stratigraphic positions of highly permeable layers which also affect the pore-fluid pressure drop across the reservoir.
Adams, B.M., T.H. Kuehn, J.M. Bielicki, J.B. Randolph, and M.O. Saar, On the importance of the thermosiphon effect in CPG (CO2-Plume geothermal) power systems, Energy, 69, pp. 409-418, 2014. [Download PDF] [View Abstract]CPG (CO2 Plume Geothermal) energy systems use CO2 to extract thermal energy from naturally permeable geologic formations at depth. CO2 has advantages over brine: high mobility, low solubility of amorphous silica, and higher density sensitivity to temperature. The density of CO2 changes substantially between geothermal reservoir and surface plant, resulting in a buoyancy-driven convective current – a thermosiphon – that reduces or eliminates pumping requirements. We estimated and compared the strength of this thermosiphon for CO2 and for 20 weight percent NaCl brine for reservoir depths up to 5 km and geothermal gradients of 20, 35, and 50 °C/km. We found that through the reservoir, CO2 has a pressure drop approximately 3–12 times less than brine at the same mass flowrate, making the CO2 thermosiphon sufficient to produce power using reservoirs as shallow as 0.5 km. At 2.5 km depth with a 35 °C/km gradient – the approximate western U.S. continental mean – the CO2 thermosiphon converted approximately 10% of the energy extracted from the reservoir to fluid circulation, compared to less than 1% with brine, where additional mechanical pumping is necessary. We found CO2 is a particularly advantageous working fluid at depths between 0.5 km and 3 km.
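The thermosiphon strength compared in the paper above can be pictured with a simple hydrostatic argument: the driving pressure is set by the density contrast between the cold downgoing column and the hot upcoming column, integrated over the well depth. The expression below is a generic textbook sketch of that idea, not a formula quoted from the paper:

$$\Delta P_{\mathrm{thermosiphon}} \approx g \int_0^{z_{\mathrm{reservoir}}} \left[\rho_{\mathrm{injection\ well}}(z) - \rho_{\mathrm{production\ well}}(z)\right]\, dz$$

Because the density of CO2 responds much more strongly to heating than that of brine, this integral, and therefore the self-driven circulation, is far larger for CO2, which is the effect the paper quantifies.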
Tutolo, B.M., X.-Z. Kong, W.E. Seyfried Jr., and M.O. Saar, Internal consistency in aqueous geochemical data revisited: Applications to the aluminum system, Geochimica et Cosmochimica Acta, 133, pp. 216-234, 2014. [Download PDF] [View Abstract]Internal consistency of thermodynamic data has long been considered vital for confident calculations of aqueous geochemical processes. However, an internally consistent mineral thermodynamic data set is not necessarily consistent with calculations of aqueous species thermodynamic properties due, potentially, to improper or inconsistent constraints used in the derivation process. In this study, we attempt to accommodate the need for a mineral thermodynamic data set that is internally consistent with respect to aqueous species thermodynamic properties by adapting the least squares optimization methods of Powell and Holland (1985). This adapted method allows for both the derivation of mineral thermodynamic properties from fluid chemistry measurements of solutions in equilibrium with mineral assemblages, as well as estimates of the uncertainty on the derived results. Using a large number of phase equilibria, solubility, and calorimetric measurements, we have developed a thermodynamic data set of 12 key aluminum-bearing mineral phases. These data are derived to be consistent with Na+ and K+ speciation data presented by Shock and Helgeson (1988), H4SiO4(aq) data presented by Stefánsson (2001), and the Al speciation data set presented by Tagirov and Schott (2001). Many of the constraining phase equilibrium measurements are exactly the same as those used to develop other thermodynamic data, yet our derived values tend to be quite different than some of the others' due to our choices of reference data. The differing values of mineral thermodynamic properties have implications for calculations of Al mineral solubilities; specifically, kaolinite solubilities calculated with the developed data set are as much as 6.75 times lower and 73% greater than those calculated with Helgeson et al. (1978) and Holland and Powell (2011) data, respectively. Where possible, calculations and experimental data are compared at low T, and the disagreement between the two sources reiterates the common assertion that low-T measurements of phase equilibria and mineral solubilities in the aluminum system rarely represent equilibrium between water and well-crystallized, aluminum-bearing minerals. As an ancillary benefit of the derived data, we show that it may be combined with high precision measurements of aqueous complex association constants to derive neutral species activity coefficients in supercritical fluids. Although this contribution is specific to the aluminum system, the methods and concepts developed here can help to improve the calculation of water–rock interactions in a broad range of earth systems.
Luhmann, A.J., X.-Z. Kong, B.M. Tutolo, N. Garapati, B.C. Bagley, M.O. Saar, and W.E. Seyfried Jr., Experimental dissolution of dolomite by CO2-charged brine at 100oC and 150 bar: Evolution of porosity, permeability, and reactive surface area, Chemical Geology, 380, pp. 145-160, 2014. [Download PDF] [View Abstract]Hydrothermal flow experiments of single-pass injection of CO2-charged brine were conducted on nine dolomite cores to examine fluid–rock reactions in dolomite reservoirs under geologic carbon sequestration conditions. Post-experimental X-ray computed tomography (XRCT) analysis illustrates a range of dissolution patterns, and significant increases in core bulk permeability were measured as the dolomite dissolved. Outflow fluids were below dolomite saturation, and cation concentrations decreased with time due to reductions in reactive surface area with reaction progress. To determine changes in reactive surface area, we employ a power-law relationship between reactive surface area and porosity (Luquot and Gouze, 2009). The exponent in this relationship is interpreted to be a geometrical parameter that controls the degree of surface area change per change in core porosity. Combined with XRCT reconstructions of dissolution patterns, we demonstrate that this exponent is inversely related to both the flow path diameter and tortuosity of the dissolution channel. Even though XRCT reconstructions illustrate dissolution at selected regions within each core, relatively high Ba and Mn recoveries in fluid samples suggest that dissolution occurred along the core's entire length and width. Analysis of porosity–permeability data indicates an increase in the rate of permeability enhancement per increase in porosity with reaction progress as dissolution channels lengthen along the core. Finally, we incorporate the surface area–porosity model of Luquot and Gouze (2009) with our experimentally fit parameters into TOUGHREACT to simulate experimental observations.
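The surface area-porosity relationship referenced above (after Luquot and Gouze, 2009) is commonly written as a normalized power law of the general form below; the symbols are generic placeholders rather than the exact notation of either paper:

$$\frac{S_r}{S_{r,0}} = \left(\frac{\phi}{\phi_0}\right)^{w}$$

where $S_{r,0}$ and $\phi_0$ are the initial reactive surface area and porosity, and the exponent $w$ is the geometrical parameter that the abstract links to the diameter and tortuosity of the dissolution channel.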
Tutolo, B.M., A.J. Luhmann, X.-Z. Kong, M.O. Saar, and W.E. Seyfried Jr., Experimental observation of permeability changes in dolomite at CO2 sequestration conditions, Environmental Science and Technology, 48/4, pp. 2445-2452, 2014. [Download PDF] [View Abstract]Injection of cool CO2 into geothermally warm carbonate reservoirs for storage or geothermal energy production may lower near-well temperature and lead to mass transfer along flow paths leading away from the well. To investigate this process, a dolomite core was subjected to a 650 h, high pressure, CO2 saturated, flow-through experiment. Permeability increased from 10^-15.9 to 10^-15.2 m² over the initial 216 h at 21 °C, decreased to 10^-16.2 m² over 289 h at 50 °C, largely due to thermally driven CO2 exsolution, and reached a final value of 10^-16.4 m² after 145 h at 100 °C due to continued exsolution and the onset of dolomite precipitation. Theoretical calculations show that CO2 exsolution results in a maximum pore space CO2 saturation of 0.5, and steady state relative permeabilities of CO2 and water on the order of 0.0065 and 0.1, respectively. Post-experiment imagery reveals matrix dissolution at low temperatures, and subsequent filling-in of flow passages at elevated temperature. Geochemical calculations indicate that reservoir fluids subjected to a thermal gradient may exsolve and precipitate up to 200 cm³ CO2 and 1.5 cm³ dolomite per kg of water, respectively, resulting in substantial porosity and permeability redistribution.
Kong, X.-Z., and M.O. Saar, Numerical study of the effects of permeability heterogeneity on density-driven convective mixing during CO2 dissolution storage, Int. J. Greenhouse Gas Control, 19, pp. 160-173, 2013. [Download PDF] [View Abstract]Permanence and security of carbon dioxide (CO2) in geologic formations requires dissolution of CO2 into brine, which slightly increases the brine density. Previous studies have shown that this small increase in brine density induces convective currents, which greatly enhances the mixing efficiency and thus CO2 storage capacity and rate in the brine. Density-driven convection, in turn, is known to be largely dominated by permeability heterogeneity. This study explores the relationship between the process of density-driven convection and the permeability heterogeneity of an aquifer during CO2 dissolution storage, using high-resolution numerical simulations. While the porosity is kept constant, the heterogeneity of the aquifer is introduced through a spatially varying permeability field, characterized by the Dykstra-Parsons coefficient and the correlation length. Depending on the concentration profile of dissolved CO2, we classify the convective finger patterns as dispersive, preferential, and unbiased fingering. Our results indicate that the transition between unbiased and both preferential and dispersive fingering is mainly governed by the Dykstra-Parsons coefficient, whereas the transition between preferential and dispersive fingering is controlled by the permeability correlation length. Furthermore, we find that the CO2 dissolution flux at the top boundary will reach a time-independent steady state. Although this flux strongly correlates with permeability distribution, it generally increases with the permeability heterogeneity when the correlation length is less than the system size.
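For context, the vigor of the density-driven convection studied above is conventionally characterized, for a homogeneous porous layer, by a solutal Rayleigh number of the form below. This is a standard definition from the convective-dissolution literature, not a quantity defined in the paper, which instead works with heterogeneous permeability fields:

$$Ra = \frac{k\, \Delta\rho\, g\, H}{\phi\, \mu\, D}$$

with permeability $k$, density increase of CO2-saturated brine $\Delta\rho$, layer thickness $H$, porosity $\phi$, brine viscosity $\mu$, and diffusion coefficient $D$. The paper effectively replaces the single $k$ with a field described by the Dykstra-Parsons coefficient and a correlation length.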
Kong, X.-Z., and M.O. Saar, DBCreate: A SUPCRT92-based program for producing EQ3/6, TOUGHREACT, and GWB thermodynamic databases at user-defined T and P, Computers and Geosciences, 51, pp. 415-417, 2013. [Download PDF] [View Abstract]SUPCRT92 is a widely used software package for calculating the standard thermodynamic properties of minerals, gases, aqueous species, and reactions. However, it is labor-intensive and error-prone to use it directly to produce databases for geochemical modeling programs such as EQ3/6, the Geochemist's Workbench, and TOUGHREACT. DBCreate is a SUPCRT92-based software program written in FORTRAN90/95 and was developed in order to produce the required databases for these programs in a rapid and convenient way. This paper describes the overall structure of the program and provides detailed usage instructions.
Walsh, S.D.C., and M.O. Saar, Developing extensible lattice-Boltzmann simulators for general-purpose graphics-processing units, Communications in Computational Physics, 13/3, pp. 867-879, 2013. [Download PDF] [View Abstract]Lattice-Boltzmann methods are versatile numerical modeling techniques capable of reproducing a wide variety of fluid-mechanical behavior. These methods are well suited to parallel implementation, particularly on the single-instruction multiple data (SIMD) parallel processing environments found in computer graphics processing units (GPUs). Although recent programming tools dramatically improve the ease with which GPU-based applications can be written, the programming environment still lacks the flexibility available to more traditional CPU programs. In particular, it may be difficult to develop modular and extensible programs that require variable on-device functionality with current GPU architectures. This paper describes a process of automatic code generation that overcomes these difficulties for lattice-Boltzmann simulations. It details the development of GPU-based modules for an extensible lattice-Boltzmann simulation package – LBHydra. The performance of the automatically generated code is compared to equivalent purpose-written codes for both single-phase, multiphase, and multicomponent flows. The flexibility of the new method is demonstrated by simulating a rising, dissolving droplet moving through a porous medium with user-generated lattice-Boltzmann models and subroutines.
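For readers who have not worked with lattice-Boltzmann codes, the sketch below shows the basic single-phase BGK collide-and-stream update that GPU modules of this kind ultimately implement. It is a minimal NumPy illustration of the general method, not code from LBHydra; the grid size, relaxation time, and (empty) obstacle mask are arbitrary placeholders.

```python
import numpy as np

# D2Q9 lattice: discrete velocities, weights, and opposite directions
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)
opposite = [0, 3, 4, 1, 2, 7, 8, 5, 6]

nx, ny, tau = 64, 64, 0.8           # placeholder grid size and relaxation time
solid = np.zeros((nx, ny), bool)    # placeholder obstacle mask (no solids here)

def equilibrium(rho, ux, uy):
    """Second-order BGK equilibrium distributions for all nine directions."""
    cu = c[:, 0, None, None] * ux + c[:, 1, None, None] * uy
    usq = ux**2 + uy**2
    return w[:, None, None] * rho * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

# start at rest with unit density everywhere
f = equilibrium(np.ones((nx, ny)), np.zeros((nx, ny)), np.zeros((nx, ny)))

for step in range(1000):
    rho = f.sum(axis=0)                                   # macroscopic density
    ux = (f * c[:, 0, None, None]).sum(axis=0) / rho      # macroscopic velocity
    uy = (f * c[:, 1, None, None]).sum(axis=0) / rho

    f += -(f - equilibrium(rho, ux, uy)) / tau            # BGK collision

    f[:, solid] = f[opposite][:, solid]                   # bounce-back at solid nodes

    for i in range(9):                                    # streaming (periodic)
        f[i] = np.roll(np.roll(f[i], c[i, 0], axis=0), c[i, 1], axis=1)

print("mass conserved:", np.isclose(f.sum(), nx * ny))
```

Forcing terms, inflow/outflow boundaries, and additional distribution sets for solute, heat, or extra fluid components are what the extensible, auto-generated GPU modules described in the paper add on top of this skeleton.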
Gottardi, R., P.-H. Kao, M.O. Saar, and Ch. Teyssier, Effects of permeability fields on fluid, heat, and oxygen isotope transport in extensional detachment systems, Geochemistry, Geophysics, Geosystems, 14/5, pp. 1493-1522, 2013. [Download PDF] [View Abstract]Field studies of Cordilleran metamorphic core complexes indicate that meteoric fluids permeated the upper crust down to the detachment shear zone and interacted with highly deformed and recrystallized (mylonitic) rocks. The presence of fluids in the brittle/ductile transition zone is recorded in the oxygen and hydrogen stable isotope compositions of the mylonites and may play an important role in the thermomechanical evolution of the detachment shear zone. Geochemical data show that fluid flow in the brittle upper crust is primarily controlled by the large-scale fault-zone architecture. We conduct continuum-scale (i.e., large-scale, partial bounce-back) lattice-Boltzmann fluid, heat, and oxygen isotope transport simulations of an idealized cross section of a metamorphic core complex. The simulations investigate the effects of crust and fault permeability fields as well as buoyancy-driven flow on two-way coupled fluid and heat transfer and resultant exchange of oxygen isotopes between meteoric fluid and rock. Results show that fluid migration to middle to lower crustal levels is fault controlled and depends primarily on the permeability contrast between the fault zone and the crustal rocks. High fault/crust permeability ratios lead to channelized flow in the fault and shear zones, while lower ratios allow leakage of the fluids from the fault into the crust. Buoyancy affects mainly flow patterns (more upward directed) and, to a lesser extent, temperature distributions (disturbance of the geothermal field by ~25°C). Channelized fluid flow in the shear zone leads to strong vertical and horizontal thermal gradients, comparable to field observations. The oxygen isotope results show δ¹⁸O depletion concentrated along the fault and shear zones, similar to field data.
Randolph, J.B., M.O. Saar, and J.M. Bielicki, Geothermal energy production at geologic CO2 sequestration sites: Impact of thermal drawdown on reservoir pressure, Energy Procedia, 37, pp. 6625-6635, 2013. [Download PDF] [View Abstract]Recent geotechnical research shows that geothermal heat can be efficiently mined by circulating carbon dioxide through naturally permeable rock formations -- a method called CO2 Plume Geothermal -- the same geologic reservoirs that are suitable for deep saline aquifer CO2 sequestration or enhanced oil recovery. This paper describes the effect of thermal drawdown on reservoir pressure buildup during sequestration operations, revealing that geothermal heat mining can decrease overpressurization by 10% or more.
Luhmann, A.J., X.-Z. Kong, B.M. Tutolo, K. Ding, M.O. Saar, and W.E. Seyfried Jr., Permeability reduction produced by grain reorganization and accumulation of exsolved CO2 during geologic carbon sequestration: A new CO2 trapping mechanism, Environmental Science and Technology, 47/1, pp. 242-251, 2013. [Download PDF] [View Abstract]Carbon sequestration experiments were conducted on uncemented sediment and lithified rock from the Eau Claire Formation, which consisted primarily of K-feldspar and quartz. Cores were heated to accentuate reactivity between fluid and mineral grains and to force CO2 exsolution. Measured permeability of one sediment core ultimately reduced by 4 orders of magnitude as it was incrementally heated from 21 to 150 °C. Water-rock interaction produced some alteration, yielding sub-µm clay precipitation on K-feldspar grains in the core's upstream end. Experimental results also revealed abundant newly formed pore space in regions of the core, and in some cases pores that were several times larger than the average grain size of the sediment. These large pores likely formed from elevated localized pressure caused by rapid CO2 exsolution within the core and/or an accumulating CO2 phase capable of pushing out surrounding sediment. CO2 filled the pores and blocked flow pathways. Comparison with a similar experiment using a solid arkose core indicates that CO2 accumulation and grain reorganization mainly contributed to permeability reduction during the heated sediment core experiment. This suggests that CO2 injection into sediments may store more CO2 and cause additional permeability reduction than is possible in lithified rock due to grain reorganization.
Alexander, S.C., and M.O. Saar, Improved characterization of small u for Jacob pumping test analysis methods, Ground Water, 50/2, pp. 256-265, 2012. [Download PDF] [View Abstract]Numerous refinements have been proposed to traditional pumping test analyses, yet many hydrogeologists continue to use the Jacob method due to its simplicity. Recent research favors hydraulic tomography and inverse numerical modeling of pumping test data. However, at sites with few wells, or relatively short screens, the data requirements of these methods may be impractical within physical and fiscal constraints. Alternatively, an improved understanding of the assumptions and limitations of Theis and, due to their widespread usage, Jacob analyses, leads to improved interpretations in data-poor environments. A fundamental requirement of Jacob is a "small" value of u = f(r2/t), with radial distance, r, and pumping time, t. However, selection of a too stringent (i.e., too low) maximum permissible u-value, umax, results in rejection of usable data from wells beyond a maximum radius, rmax. Conversely, data from small radii, less than rmin, where turbulent- and vertical-flow components arise, can result in acceptance of inappropriate data. Usage of drawdown data from wells too close to the pumping well, and exclusion of data from wells deemed too far, can cause unrealistic aquifer transmissivity, permeability, and storativity determinations. Here, data from an extensive well field in a glacial-outwash aquifer in north-central Minnesota, USA, are used to develop a new estimate for umax. Traditionally quoted values for umax range from 0.01 to 0.05. Our proposed value for Jacob distance-drawdown analyses is significantly higher with umax up to 0.2, resulting in larger allowable rmax-values and a higher likelihood of inclusion of additional wells in such pumping test analyses.
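For reference, the dimensionless variable u discussed above and the Cooper-Jacob straight-line approximation it governs take the standard textbook forms below; these are general well-hydraulics relations, not results of the paper, which instead re-evaluates how small u must actually be:

$$u = \frac{r^2 S}{4 T t}, \qquad s \approx \frac{2.303\, Q}{4 \pi T} \log_{10}\!\left(\frac{2.25\, T t}{r^2 S}\right)$$

where $s$ is drawdown, $Q$ the pumping rate, $T$ transmissivity, $S$ storativity, $r$ the radial distance to the observation well, and $t$ the elapsed pumping time. The proposed limit of umax up to 0.2 for distance-drawdown analyses then translates directly into a larger admissible rmax for a given pumping time.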
Covington, M.D., A.J. Luhmann, C. Wicks, and M.O. Saar, Process length scales and longitudinal damping in karst conduits, Journal of Geophysical Research - Earth Surface, 117, F01025, 2012. [Download PDF] [View Abstract][1] Simple mathematical models often allow an intuitive grasp of the function of physical systems. We develop a mathematical framework to investigate reactive or dissipative transport processes within karst conduits. Specifically, we note that for processes that occur within a characteristic timescale, advection along the conduit produces a characteristic process length scale. We calculate characteristic length scales for the propagation of thermal and electrical conductivity signals along karst conduits. These process lengths provide a quantitative connection between karst conduit geometry and the signals observed at a karst spring. We show that water input from the porous/fractured matrix is also characterized by a length scale and derive an approximation that accounts for the influence of matrix flow on the transmission of signals through the aquifer. The single conduit model is then extended to account for conduits with changing geometries and conduit flow networks, demonstrating how these concepts can be applied in more realistic conduit geometries. We introduce a recharge density function, ϕR, which determines the capability of an aquifer to damp a given signal, and cast previous explanations of spring variability within this framework. Process lengths are a general feature of karst conduits and surface streams, and we conclude with a discussion of other potential applications of this conceptual and mathematical framework.
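The central scaling idea of the abstract can be stated compactly: a process with characteristic timescale $\tau$ acting on water that advects along the conduit at velocity $V$ is damped over a characteristic length of roughly (notation chosen here for illustration):

$$L_{\mathrm{process}} \sim V\, \tau$$

Conduits much longer than this length largely erase the corresponding signal before it reaches the spring, while much shorter conduits transmit it nearly unchanged, which is what allows spring records to constrain conduit geometry.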
Covington, M., A.F. Banwell, J. Gulley, and M.O. Saar, Quantifying the effects of glacier conduit geometry and recharge on proglacial hydrograph form, Journal of Hydrology, 414-415, pp. 59-71, 2012. [Download PDF] [View Abstract]The configuration of glacier hydrological systems is often inferred from proxy data, such as hydrographs, that are collected in proglacial streams. Seasonal changes in the peakedness of hydrographs are thought to reflect changes in the configuration of the subglacial drainage system. However, the amount of information that proglacial hydrographs contain about drainage system configurations depends critically on the degree to which the drainage systems modify recharge hydrographs. If the drainage system does not modify recharge hydrographs, then proglacial hydrographs primarily reflect the recharge conditions produced by supraglacial inputs. Here, we develop a theoretical framework to determine the circumstances under which glacier drainage systems can modify recharge hydrographs and the circumstances under which recharge pulses pass through glaciers unchanged. We address the capability of single conduits, simple arborescent conduit networks, and linked cavity systems to modify diurnal recharge pulses. Simulations of discharge through large sets of such systems demonstrate that, unless large reservoirs or significant constrictions are present, the discharge hydrographs of simple glacial conduit systems are nearly identical to their recharge hydrographs. Conduit systems tend not to modify hydrographs because the changes in storage within englacial and subglacial conduit networks on short time scales are typically small compared to their ability to transmit water. This finding suggests that proglacial hydrographs reflect a variety of factors, including surface melt rate, surface water transfer, and subglacial water transfer. In many cases the influence of subglacial processes may be relatively minor. As a result, the evolution of proglacial hydrographs cannot be used unambiguously to infer changes in the structure or efficiency of englacial or subglacial hydrological systems, without accurate knowledge of the nature of the recharge hydrograph driving the flow.
Saar, M.O., Review: Geothermal heat as a tracer of large-scale groundwater flow and as a means to determine permeability fields, special theme issue on Environmental Tracers and Groundwater Flow, editor-invited peer-reviewed contribution, Hydrogeology Journal, 19, pp. 31-52, 2011. [Download PDF] [View Abstract]A review of coupled groundwater and heat transfer theory is followed by an introduction to geothermal measurement techniques. Thereafter, temperature-depth profiles (geotherms) and heat discharge at springs to infer hydraulic parameters and processes are discussed. Several studies included in this review state that minimum permeabilities of approximately 5 × 10−17 < kmin <10−15 m2 are required to observe advective heat transfer and resultant geotherm perturbations. Permeabilities below kmin tend to cause heat-conduction-dominated systems, precluding inversion of temperature fields for groundwater flow patterns and constraint of permeabilities other than being < k < 10−7 m2. Therefore, a wide range of permeabilities can be investigated by analyzing subsurface temperatures or heat discharge at springs. Furthermore, temperature is easy and economical to measure and because thermal material properties vary far less than hydraulic properties, temperature measurements tend to provide better-constrained groundwater flow and permeability estimates. Aside from hydrogeologic insights, constraint of advective/conductive heat transfer can also provide information on magmatic intrusions, metamorphism, ore deposits, climate variability, and geothermal energy.
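A classic example of the type of analysis reviewed above is the one-dimensional, steady-state temperature profile for combined vertical conduction and advection, commonly attributed to Bredehoeft and Papadopulos (1965). In the generic form below (symbols chosen here for illustration), the thermal Peclet number Pe measures how strongly vertical groundwater flow bends an otherwise linear conductive geotherm:

$$\frac{T(z) - T_0}{T_L - T_0} = \frac{e^{Pe\, z/L} - 1}{e^{Pe} - 1}, \qquad Pe = \frac{\rho_w c_w q_z L}{\lambda}$$

where $T_0$ and $T_L$ are the temperatures at the top and bottom of a layer of thickness $L$, $q_z$ is the vertical specific discharge, $\rho_w c_w$ the volumetric heat capacity of water, and $\lambda$ the thermal conductivity of the saturated medium. Fitting measured geotherms with such curves is one way subsurface temperatures constrain groundwater flux and, indirectly, permeability.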
Randolph, J.B., and M.O. Saar, Coupling carbon dioxide sequestration with geothermal energy capture in naturally permeable, porous geologic formations: Implications for CO2 sequestration, Energy Procedia, 4, pp. 2206-2213, 2011. [Download PDF] [View Abstract]Carbon dioxide (CO2) sequestration in deep saline aquifers and exhausted oil and natural gas fields has been widely considered as a means for reducing CO2 emissions to the atmosphere as a counter-measure to global warming. However, rather than treating CO2 merely as a waste fluid in need of permanent disposal, we propose that it could also be used as a working fluid in geothermal energy capture, as its thermodynamic and fluid mechanical properties suggest it transfers geothermal heat more efficiently than water. Energy production and sales in conjunction with sequestration would improve the economic viability of CO2 sequestration, a critical challenge for large-scale implementation of the technology. In addition, using CO2 as the working fluid in geothermal power systems may permit utilization of lower temperature geologic formations than those that are currently deemed economically viable, leading to more widespread utilization of geothermal energy. Here, we present the results of early-stage calculations demonstrating the geothermal energy capture potential of CO2-based geothermal systems and implications of such energy capture for the economic viability of geologic CO2 sequestration.
Randolph, J.B., and M.O. Saar, Combining geothermal energy capture with geologic carbon dioxide sequestration, Geophysical Research Letters, 38, L10401, 2011. [Download PDF] [View Abstract][1] Geothermal energy offers clean, renewable, reliable electric power with no need for grid-scale energy storage, yet its use has been constrained to the few locations worldwide with naturally high geothermal heat resources and groundwater availability. We present a novel approach with the potential to permit expansion of geothermal energy utilization: heat extraction from naturally porous, permeable formations with CO2 as the injected subsurface working fluid. Fluid-mechanical simulations reveal that the significantly higher mobility of CO2, compared to water, at the temperature/pressure conditions of interest makes CO2 an attractive heat exchange fluid. We show numerically that, compared to conventional water-based and engineered geothermal systems, the proposed approach provides up to factors of 2.9 and 5.0, respectively, higher geothermal heat energy extraction rates. Consequently, more regions worldwide could be economically used for geothermal electricity production. Furthermore, as the injected CO2 is eventually geologically sequestered, such power plants would have negative carbon footprints.
Randolph, J.B., and M.O. Saar, Impact of reservoir permeability on the choice of subsurface geothermal heat exchange fluid: CO2 versus water and native brine, Geothermal Resources Council (GRC) Transactions, 35, pp. 521-526, 2011. [View Abstract]Geothermal energy systems utilizing carbon dioxide (CO2) as the subsurface heat exchange fluid in naturally porous, permeable geologic formations have been shown to provide improved geothermal heat energy extraction, even at low resource temperatures. Such systems, termed CO2 Plume Geothermal (CPG) systems, have the potential to permit expansion of geothermal energy utilization while supporting rapid implementation through the use of existing technologies. Here, we explore CPG heat extraction as a function of reservoir permeability and in comparison to water and brine geothermal heat extraction. We show that for reservoir permeabilities below 2×10⁻¹⁴ m², CO2-based geothermal provides better electric power production efficiency than both water- and brine-based systems.
Covington, M.D., A.J. Luhmann, F. Gabrovsek, M.O. Saar, I. Willis, and C.M. Wicks, Mechanisms of heat exchange between water and rock in karst conduits, Water Resources Research, 47, W10514/10, 2011. [Download PDF] [View Abstract][1] Previous studies, motivated by understanding water quality, have explored the mechanisms for heat transport and heat exchange in surface streams. In karst aquifers, temperature signals play an additional important role since they carry information about internal aquifer structures. Models for heat transport in karst conduits have previously been developed; however, these models make different, sometimes contradictory, assumptions. Additionally, previous models of heat transport in karst conduits have not been validated using field data from conduits with known geometries. Here we use analytical solutions of heat transfer to examine the relative importance of heat exchange mechanisms and the validity of the assumptions made by previous models. The relative importance of convection, conduction, and radiation is a function of time. Using a characteristic timescale, we show that models neglecting rock conduction produce spurious results in realistic cases. In contrast to the behavior of surface streams, where conduction is often negligible, conduction through the rock surrounding a conduit determines heat flux at timescales of weeks and longer. In open channel conduits, radiative heat flux can be significant. In contrast, convective heat exchange through the conduit air is often negligible. Using the rules derived from our analytical analysis, we develop a numerical model for heat transport in a karst conduit. Our model compares favorably to thermal responses observed in two different karst settings: a cave stream fed via autogenic recharge during a snowmelt event, and an allogenically recharged cave stream that experiences continuous temperature fluctuations on many timescales.
Davis, M.A., S.D.C. Walsh, and M.O. Saar, Statistically reconstructing continuous isotropic and anisotropic two-phase media while preserving macroscopic material properties, Physical Review E, 83, 026706, 2011. [Download PDF] [View Abstract]We propose a method to generate statistically similar reconstructions of two-phase media. As with previous work, we initially characterize the microstructure of the material using two-point correlation functions (a subset of spatial correlation functions) and then generate numerical reconstructions using a simulated annealing method that preserves the geometric relationships of the material's phase of interest. However, in contrast to earlier contributions that consider reconstructions composed of discrete arrays of pixels or voxels alone, we generate reconstructions based on assemblies of continuous, three-dimensional, interpenetrating objects. The result is a continuum description of the material microstructure (as opposed to a discretized or pixelated description), capable of efficiently representing large disparities in scale. Different reconstruction methods are considered based on distinct combinations of two-point correlation functions of varying degrees of complexity. The quality of the reconstruction methods are evaluated by comparing the total pore fraction, specific surface area of the percolating cluster, pore fraction of the percolating cluster, tortuosity, and permeability of the reconstructions to those of a set of reference assemblies. Elsewhere it has been proposed that two-phase media could be statistically reproduced with only two spatial correlation functions: the two-point probability function (the probability that two points lie within the same phase) and the lineal path function (the probability that a line between two points lies entirely within the same phase). We find that methods employing the two-point probability function and lineal path function are improved if the percolating cluster volume is also considered in the reconstruction. However, to reproduce more complicated geometric assemblies, we find it necessary to employ the two-point probability, two-point cluster, and lineal path function in addition to the percolating cluster volume to produce a generally accurate statistical reconstruction.
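As a small illustration of the descriptors used above, the snippet below estimates the two-point probability function $S_2(r)$ of a periodic binary (two-phase) image with an FFT-based autocorrelation. It illustrates the descriptor itself and is not the reconstruction code of the paper; the function name and the smoothed random test image are placeholders:

```python
import numpy as np

def two_point_probability(phase, max_lag):
    """S2(r): probability that two points a lag r apart (along an axis)
    both lie in the phase of interest, assuming periodic boundaries."""
    ind = phase.astype(float)                    # indicator function of the phase
    F = np.fft.fftn(ind)
    corr = np.fft.ifftn(F * np.conj(F)).real / ind.size   # circular autocorrelation
    # average the x- and y-direction lags for a simple axis-averaged estimate
    return np.array([(corr[r, 0] + corr[0, r]) / 2 for r in range(max_lag)])

# placeholder two-phase medium: random noise crudely smoothed into blobs
rng = np.random.default_rng(0)
field = rng.random((256, 256))
for _ in range(5):
    field = (field + np.roll(field, 1, axis=0) + np.roll(field, 1, axis=1)) / 3
medium = field > np.median(field)                # binary image, ~50% phase fraction

s2 = two_point_probability(medium, max_lag=30)
print("S2(0), i.e. the phase fraction:", round(float(s2[0]), 4))
print("S2(r) for r = 1..5:", np.round(s2[1:6], 4))
```

In a simulated-annealing reconstruction of the kind described in the paper, functions like this one (together with the two-point cluster function, lineal path function, and percolating cluster volume) supply the target statistics that candidate object assemblies are iteratively adjusted to match.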
Randolph, J.B., and M.O. Saar, Coupling geothermal energy capture with carbon dioxide sequestration in naturally permeable, porous geologic formations: A comparison with enhanced geothermal systems, Geothermal Resources Council (GRC) Transactions, 34, pp. 433-438, 2010. [Download PDF] [View Abstract]Geothermal energy offers clean, consistent, reliable electric power with no need for grid-scale energy storage, unlike wind and solar renewable power alternatives. However, geothermal energy is often underrepresented in renewable energy discussions and has considerable room for growth. New technology and methods will be critical for future investment, and rapid implementation of new techniques will be critical in ensuring geothermal energy plays a significant role in the future energy landscape worldwide. Here, we discuss a novel approach with the potential to permit expansion of geothermal energy utilization while supporting rapid implementation through the use of existing technologies: geothermal heat use in naturally porous, permeable geologic formations with carbon dioxide as the working heat exchange fluid.
Dasgupta, S., M.O. Saar, R.L. Edwards, C.-C. Shen, H. Cheng, and C.E. Alexander Jr., Three thousand years of extreme rainfall events recorded in stalagmites from Spring Valley Caverns, Minnesota, Earth and Planetary Science Letters, 300, pp. 46-54, 2010. [Download PDF] [View Abstract]Annual layer analysis in two stalagmites collected from Spring Valley Caverns, southeastern Minnesota, reveals hydrological response of the cave to extreme rainfall events in the Midwest, USA. Cave-flooding events are identified within the two samples by the presence of detrital layers composed of clay sized particles. Comparison with instrumental records of precipitation demonstrates a strong correlation between these cave-flood events and extreme rainfall observed in the Upper Mississippi Valley. A simple model is developed to assess the nature of rainfall capable of flooding the cave. The model is first calibrated to the last 50-yr (1950–1998 A.D.) instrumental record of daily precipitation data for the town of Spring Valley and verified with the first 50 yr of record from 1900 to 1949 A.D. Frequency analysis shows that these extreme flood events have increased from the last half of the nineteenth century. Comparison with other paleohydrological records shows increased occurrence of extreme rain events during periods of higher moisture availability. Our study implies that increased moisture availability in the Midwestern region, due to rise in temperature from global warming could lead to an increase in the occurrence of extreme rainfall events.
Walsh, S.D.C., and M.O. Saar, Macroscale lattice-Boltzmann methods for low-Peclet-number solute and heat transport in heterogeneous porous media, Water Resources Research, 46, W07517, 2010. [Download PDF] [View Abstract][1] This paper introduces new methods for simulating subsurface solute and heat transport in heterogeneous media using large-scale lattice-Boltzmann models capable of representing both macroscopically averaged porous media and open channel flows. Previous examples of macroscopically averaged lattice-Boltzmann models for solute and heat transport are only applicable to homogeneous media. Here, we extend these models to properly account for heterogeneous pore-space distributions. For simplicity, in the majority of this paper we assume low Peclet number flows with an isotropic dispersion tensor. Nevertheless, this approach may also be extended to include anisotropic-dispersion by using multiple relaxation time lattice-Boltzmann methods. We describe two methods for introducing heterogeneity into macroscopically averaged lattice-Boltzmann models. The first model delivers the desired behavior by introducing an additional time-derivative term to the collision rule; the second model by separately weighting symmetric and anti-symmetric components of the fluid packet densities. Chapman-Enskog expansions are conducted on the governing equations of the two models, demonstrating that the correct constitutive behavior is obtained in both cases. In addition, methods for improving model stability at low porosities are also discussed: (1) an implicit formulation of the model; and (2) a local transformation that normalizes the lattice-Boltzmann model by the local porosity. The model performances are evaluated through comparisons of simulated results with analytical solutions for one- and two-dimensional flows, and by comparing model predictions to finite element simulations of advection isotropic-dispersion in heterogeneous porous media. We conclude by presenting an example application, demonstrating the ability of the new models to couple with simulations of reactive flow and changing flow geometry: a simulation of groundwater flow through a carbonate system.
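The "correct constitutive behavior" verified by the Chapman-Enskog expansions mentioned above is, for the solute case, the macroscopically averaged advection-dispersion equation. In a commonly used form consistent with the paper's isotropic, low-Peclet-number assumption it reads:

$$\frac{\partial (\phi\, C)}{\partial t} + \nabla \cdot (\mathbf{q}\, C) = \nabla \cdot \left(\phi\, D\, \nabla C\right)$$

where $\phi$ is porosity, $C$ concentration, $\mathbf{q}$ the Darcy flux, and $D$ the isotropic dispersion-diffusion coefficient; the heat-transport analogue replaces $C$ and $D$ with temperature and an effective thermal diffusivity.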
Walsh, S.D.C., and M.O. Saar, Interpolated lattice-Boltzmann boundary conditions for surface reaction kinetics, Physical Review E, 82, 066703, 2010. [Download PDF] [View Abstract]This paper describes a method for implementing surface reaction kinetics in lattice Boltzmann simulations. The interpolated boundary conditions are capable of simulating surface reactions and dissolution at both stationary and moving solid-fluid and fluid-fluid interfaces. Results obtained with the boundary conditions are compared to analytical solutions for first-order and constant-flux kinetic surface reactions in a one-dimensional half space, as well as to the analytical solution for evaporation from the surface of a cylinder. Excellent agreement between analytical and simulated results is obtained for a wide range of diffusivities, lattice velocities, and surface reaction rates. The boundary model's ability to represent dissolution in binary fluid mixtures is demonstrated by modeling diffusion from a rising bubble and dissolution of a droplet near a flat plate.
Myre, J., S.D.C. Walsh, D.J. Lilja, and M.O. Saar, Performance analysis of single-phase, multiphase, and multicomponent lattice-Boltzmann fluid flow simulations on GPU clusters, Concurrency and Computation: Practice and Experience, 23, pp. 332-350, 2010. [Download PDF] [View Abstract]The lattice-Boltzmann method is well suited for implementation in single-instruction multiple-data (SIMD) environments provided by general purpose graphics processing units (GPGPUs). This paper discusses the integration of these GPGPU programs with OpenMP to create lattice-Boltzmann applications for multi-GPU clusters. In addition to the standard single-phase single-component lattice-Boltzmann method, the performances of more complex multiphase, multicomponent models are also examined. The contributions of various GPU lattice-Boltzmann parameters to the performance are examined and quantified with a statistical model of the performance using Analysis of Variance (ANOVA). By examining single- and multi-GPU lattice-Boltzmann simulations with ANOVA, we show that all the lattice-Boltzmann simulations primarily depend on effects corresponding to simulation geometry and decomposition, and not on the architectural aspects of the GPU. Additionally, using ANOVA we confirm that the metrics of Efficiency and Utilization are not suitable for memory-bandwidth-dependent codes.
Covington, M., C.M. Wicks, and M.O. Saar, A dimensionless number describing the effects of recharge and geometry on discharge from simple karst aquifers, Water Resources Research, 45, W11410, 2009. [Download PDF] [View Abstract][1] The responses of karstic aquifers to storms are often used to obtain information about aquifer geometry. In general, spring hydrographs are a function of both system geometry and recharge. However, the majority of prior work on storm pulses through karst has not studied the effect of recharge on spring hydrographs. To examine the relative importance of geometry and recharge, we break karstic aquifers into elements according to the manner of their response to transient flow and demonstrate that each element has a characteristic response timescale. These fundamental elements are full pipes, open channels, reservoir/constrictions, and the porous matrix. Taking the ratio of the element timescale with the recharge timescale produces a dimensionless number, γ, that is used to characterize aquifer response to a storm event. Using sets of simulations run with randomly selected element parameters, we demonstrate that each element type has a critical value of γ below which the shape of the spring hydrograph is dominated by the shape of the recharge hydrograph and above which the spring hydrograph is significantly modified by the system geometry. This allows separation of particular element/storm pairs into recharge-dominated and geometry-dominated regimes. While most real karstic aquifers are complex combinations of these elements, we draw examples from several karst systems that can be represented by single elements. These examples demonstrate that for real karstic aquifers full pipe and open channel elements are generally in the recharge-dominated regime, whereas reservoir/constriction elements can fall in either the recharge- or geometry-dominated regimes.
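Written out, the dimensionless number introduced above is the ratio of an element's characteristic response timescale to the recharge timescale (generic notation; the paper defines the element timescale separately for full pipes, open channels, reservoir/constrictions, and the matrix):

$$\gamma = \frac{\tau_{\mathrm{element}}}{\tau_{\mathrm{recharge}}}$$

so that values of $\gamma$ below the element-specific critical value indicate a recharge-dominated spring hydrograph and values above it indicate a geometry-dominated one.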
Walsh, S.D.C., H. Burwinkle, and M.O. Saar, A new partial-bounceback lattice-Boltzmann method for fluid flow through heterogeneous media, Computers and Geosciences, 35/6, pp. 1186-1193, 2009. [Download PDF] [View Abstract]Partial-bounceback lattice-Boltzmann methods employ a probabilistic meso-scale model that varies individual lattice node properties to reflect a material's local permeability. These types of models have great potential in a range of geofluid, and other science and engineering, simulations of complex fluid flow. However, there are several different possible approaches for formulating partial-bounceback algorithms. This paper introduces a new partial-bounceback algorithm and compares it to two pre-existing partial-bounceback models. Unlike the two other partial-bounceback methods, the new approach conserves mass in heterogeneous media and shows improvements in simulating buoyancy-driven flow as well as diffusive processes. Further, the new model is better-suited for parallel processing implementations, resulting in faster simulations. Finally, we derive an analytical expression for calculating the permeability in all three models; a critical component for accurately matching simulation parameters to physical permeabilities.
Walsh, S.D.C., M.O. Saar, P. Bailey, and D.J. Lilja, Accelerating geoscience and engineering system simulations on graphics hardware, Computers and Geosciences, 35/12, pp. 2353-2364, 2009. [Download PDF] [View Abstract]Many complex natural systems studied in the geosciences are characterized by simple local-scale interactions that result in complex emergent behavior. Simulations of these systems, often implemented in parallel using standard central processing unit (CPU) clusters, may be better suited to parallel processing environments with large numbers of simple processors. Such an environment is found in graphics processing units (GPUs) on graphics cards. This paper discusses GPU implementations of three example applications from computational fluid dynamics, seismic wave propagation, and rock magnetism. These candidate applications involve important numerical modeling techniques, widely employed in physical system simulations, that are themselves examples of distinct computing classes identified as fundamental to scientific and engineering computing. The presented numerical methods (and respective computing classes they belong to) are: (1) a lattice-Boltzmann code for geofluid dynamics (structured grid class); (2) a spectral-finite-element code for seismic wave propagation simulations (sparse linear algebra class); and (3) a least-squares minimization code for interpreting magnetic force microscopy data (dense linear algebra class). Significant performance increases (between 10× and 30× in most cases) are seen in all three applications, demonstrating the power of GPU implementations for these types of simulations and, more generally, their associated computing classes.
Walsh, S.D.C., and M.O. Saar, Numerical Models of Stiffness and Yield Stress Growth in Crystal-Melt Suspensions, Earth and Planetary Science Letters, 267/1-2, pp. 32-44, 2008. [Download PDF] [View Abstract]Magmas and other suspensions that develop sample-spanning crystal networks undergo a change in rheology from Newtonian to Bingham flow due to the onset of a finite yield stress in the crystal network. Although percolation theory provides a prediction of the crystal volume fraction at which this transition occurs, the manner in which yield stress grows with increasing crystal number densities is less-well understood. This paper discusses a simple numerical approach that models yield stress in magmatic crystalline assemblies. In this approach, the crystal network is represented by an assembly of soft-core interpenetrating cuboid (rectangular prism) particles, whose mechanical properties are simulated in a network model. The model is used to investigate the influence of particle shape and alignment anisotropy on the yield stress of crystal networks with particle volume fractions above the percolation threshold. In keeping with previous studies, the simulation predicts a local minimum in the onset of yield stress for assemblies of cubic particles, compared to those with more anisotropic shapes. The new model also predicts the growth of yield stress above (and close to) the percolation threshold. The predictions of the model are compared with results obtained from a critical path analysis. Good agreement is found between a characteristic stiffness obtained from critical path analysis, the growth in assembly stiffness predicted by the model (both of which have approximately cubic power-law exponents) and, to a lesser extent, the growth in yield stress (with a power-law exponent of 3.5). The effect of preferred particle alignment on yield stress is also investigated and found to obey similar power-law behavior.
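The scaling results quoted above can be summarized, in generic percolation notation, as power laws in crystal volume fraction $\phi$ above the percolation threshold $\phi_c$ (exponents as reported in the abstract; prefactors are not specified there):

$$K \propto (\phi - \phi_c)^{\approx 3}, \qquad \tau_y \propto (\phi - \phi_c)^{3.5} \qquad (\phi > \phi_c)$$

with $K$ the stiffness of the crystal assembly and $\tau_y$ its yield stress.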
Walsh, S.D.C., and M.O. Saar, Magma yield stress and permeability: Insights from multiphase percolation theory, Journal of Volcanology and Geothermal Research, 177, pp. 1011-1019, 2008. [Download PDF] [View Abstract]Magmas often contain multiple interacting phases of embedded solid and gas inclusions. Multiphase percolation theory provides a means of modeling assemblies of these different classes of magmatic inclusions in a simple, yet powerful way. Like its single phase counterpart, multiphase percolation theory describes the connectivity of discrete inclusion assemblies as a function of phase topology. In addition, multiphase percolation employs basic laws to distinguish separate classes of objects and is characterized by its dependency on the order in which the different phases appear. This paper examines two applications of multiphase percolation theory: the first considers how the presence of bubble inclusions influences yield stress onset and growth in a magma's crystal network; the second examines the effect of bi-modal bubble-size distributions on magma permeability. We find that the presence of bubbles induces crystal clustering, thereby 1) reducing the percolation threshold, or critical crystal volume fraction, φc, at which the crystals form a space-spanning network providing a minimum yield stress, and 2) resulting in a larger yield stress for a given crystal volume fraction above φc. This increase in the yield stress of the crystal network may also occur when crystal clusters are formed due to processes other than bubble formation, such as heterogeneous crystallization, synneusis, and heterogeneity due to deformation or flow. Further, we find that bimodal bubble size distributions can significantly affect the permeability of the system beyond the percolation threshold. This study thus demonstrates that larger-scale structures and topologies, as well as the order in which different phases appear, can have significant effects on macroscopic properties in multiphase materials.
Edwards, R.A., B. Rodriguez-Brito, L. Wegley, M. Haynes, M. Breitbart, D.M. Petersen, M.O. Saar, S.C. Alexander, E.C. Alexander Jr., and F. Rohwer, Using pyrosequencing to shed light on deep mine microbial ecology, BMC Genomics, 2006. [Download PDF] [View Abstract] Background Contrasting biological, chemical and hydrogeological analyses highlights the fundamental processes that shape different environments. Generating and interpreting the biological sequence data was a costly and time-consuming process in defining an environment. Here we have used pyrosequencing, a rapid and relatively inexpensive sequencing technology, to generate environmental genome sequences from two sites in the Soudan Mine, Minnesota, USA. These sites were adjacent to each other, but differed significantly in chemistry and hydrogeology. Results Comparisons of the microbes and the subsystems identified in the two samples highlighted important differences in metabolic potential in each environment. The microbes were performing distinct biochemistry on the available substrates, and subsystems such as carbon utilization, iron acquisition mechanisms, nitrogen assimilation, and respiratory pathways separated the two communities. Although the correlation between much of the microbial metabolism occurring and the geochemical conditions from which the samples were isolated could be explained, the reason for the presence of many pathways in these environments remains to be determined. Despite being physically close, these two communities were markedly different from each other. In addition, the communities were also completely different from other microbial communities sequenced to date. Conclusion We anticipate that pyrosequencing will be widely used to sequence environmental samples because of the speed, cost, and technical advantages. Furthermore, subsystem comparisons rapidly identify the important metabolisms employed by the microbes in different environments.
Saar, M.O., M.C. Castro, C.M. Hall, M. Manga, and T.P. Rose, Quantifying magmatic, crustal, and atmospheric helium contributions to volcanic aquifers using all stable noble gases: Implications for magmatism and groundwater flow, Geochemistry Geophysics Geosystems, 6/3, 2005. [Download PDF] [View Abstract][1] We measure all stable noble gases (He, Ne, Ar, Kr, Xe) in spring waters in the Oregon Cascades volcanic arc and in eastern Oregon, USA. We show that in order to estimate magmatic helium (He) contributions it is critical to simultaneously consider He isotopic ratios, He concentrations, and mixing of He components. Our component mixing analysis requires consideration of all measured noble gases but no other elements and is particularly insightful when strong dilution by air-saturated water has occurred. In addition, this approach can allow distinction between crustal and magmatic He components and facilitates their identification in deep groundwaters that have been diluted by near-surface water. Using this approach, we show that some cold springs on the eastern flanks of the Oregon Cascades exhibit He isotopic ratios that indicate significant magmatic He contributions comparable to those observed in thermal springs on the western flanks. Furthermore, while these magmatic He contributions are largest in deep groundwaters near the Cascades crest, greater magmatic excess He fractions than may be inferred from He isotopic ratios alone are present in all (deep) groundwaters including those at larger distances (>70 km) from the volcanic arc. We also suggest that excess He and heat discharge without dilution by air-saturated water may be restricted to spring discharge along faults.
Christiansen, L.B., S. Hurwitz, M.O. Saar, S.E. Ingebritsen, and P.A. Hsieh, Seasonal seismicity at western United States volcanic centers, Earth and Planetary Science Letters, 240, pp. 307-321, 2005. [Download PDF] [View Abstract]We examine 20-yr data sets of seismic activity from 10 volcanic areas in the western United States for annual periodic signals (seasonality), focusing on large calderas (Long Valley caldera and Yellowstone) and stratovolcanoes (Cascade Range). We apply several statistical methods to test for seasonality in the seismic catalogs. In 4 of the 10 regions, statistically significant seasonal modulation of seismicity (> 90% probability) occurs, such that there is an increase in the monthly seismicity during a given portion of the year. In five regions, seasonal seismicity is significant in the upper 3 km of the crust. Peak seismicity occurs in the summer and autumn in Mt. St. Helens, Hebgen Lake/Madison Valley, Yellowstone Lake, and Mammoth Mountain. In the eastern south moat of Long Valley caldera (LVC) peak seismicity occurs in the winter and spring. We quantify the possible external forcing mechanisms that could modulate seasonal seismicity. Both snow unloading and groundwater recharge can generate large stress changes of > 5 kPa at seismogenic depths and may thus contribute to seasonality.
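To put the quoted stress changes of > 5 kPa in context, a one-line estimate (added editorially; the snow-water-equivalent value is an assumption, not a number from the paper) shows the surface load exerted by a modest seasonal snowpack:

```python
rho_water = 1000.0   # kg/m^3
g = 9.81             # m/s^2
swe = 0.5            # m of snow water equivalent (assumed, site-dependent)

stress_kpa = rho_water * g * swe / 1e3   # vertical surface load
print(f"{stress_kpa:.1f} kPa")           # ~4.9 kPa, i.e., the order of the >5 kPa changes cited above
```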
Jellinek, M.A., M. Manga, and M.O. Saar, Did melting glaciers cause volcanic eruptions in eastern California? Probing the mechanics of dike formation, Journal of Geophysical Research, 109/B9, B09206, 2004. [Download PDF] [View Abstract][1] A comparison of time series of basaltic and silicic eruptions in eastern California over the last 400 kyr with the contemporaneous global record of glaciation suggests that this volcanism is influenced by the growth and retreat of glaciers occurring over periods of about 40 kyr. Statistically significant cross correlations between changes in eruption frequency and the first derivative of the glacial time series imply that the temporal pattern of volcanism is influenced by the rate of change in ice volume. Moreover, calculated time lags for the effects of glacial unloading on silicic and basaltic volcanism are distinct and are 3.2 ± 4.2 kyr and 11.2 ± 2.3 kyr, respectively. A theoretical model is developed to investigate whether the increases in eruption frequency following periods of glacial unloading are a response ultimately controlled by the dynamics of dike formation. Applying results from the time series analysis leads, in turn, to estimates for the critical magma chamber overpressure required for eruption as well as constraints on the effective viscosity of the wall rocks governing dike propagation.
Saar, M.O., and M. Manga, Depth dependence of permeability in the Oregon Cascades inferred from hydrogeologic, thermal, seismic, and magmatic modeling constraints, Journal of Geophysical Research, 109/B4, B04204, 2004. [Download PDF] [View Abstract][1] We investigate the decrease in permeability, k, with depth, z, in the Oregon Cascades employing four different methods. Each method provides insight into the average permeability applicable to a different depth scale. Spring discharge models are used to infer shallow (z < 0.1 km) horizontal permeabilities. Coupled heat and groundwater flow simulations provide horizontal and vertical k for z < 1 km. Statistical investigations of the occurrences of earthquakes that are probably triggered by seasonal groundwater recharge yield vertical k for z < 5 km. Finally, considerations of magma intrusion rates and water devolatilization provide estimates of vertical k for z < 15 km. For depths >0.8 km, our results agree with the power law relationship, k = 10^-14 m^2 (z/1 km)^-3.2, suggested by Manning and Ingebritsen [1999] for continental crust in general. However, for shallower depths (typically z ≤ 0.8 km and up to z ≤ 2 km) we propose an exponential relationship, k = 5 × 10^-13 m^2 exp(−z/0.25 km), that both fits data better (at least for the Cascades and seemingly for continental crust in general) and allows for a finite near-surface permeability and no singularity at zero depth. In addition, the suggested functions yield a smooth transition at z = 0.8 km, where their permeabilities and their gradients are similar. Permeabilities inferred from the hydroseismicity model at Mount Hood are about one order of magnitude larger than expected from the above power law. However, higher permeabilities in this region may be consistent with advective heat transfer along active faults, causing observed hot springs. Our simulations suggest groundwater recharge rates of 0.5 ≤ u_R ≤ 1 m/yr and a mean background heat flow of H_b ≈ 0.080–0.134 W/m^2 for the investigated region.
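As an editorial illustration only (not code from the paper), the two depth-permeability relations quoted in the abstract above can be evaluated with a few lines of Python; the depths sampled below are arbitrary, and the script simply shows that the exponential and power-law branches meet smoothly near z = 0.8 km.

```python
import math

def permeability(z_km: float) -> float:
    """Permeability (m^2) vs. depth z (km), using the two relations quoted above:
    an exponential law at shallow depths and a power law below ~0.8 km."""
    if z_km <= 0.8:
        return 5e-13 * math.exp(-z_km / 0.25)   # k = 5e-13 m^2 * exp(-z / 0.25 km)
    return 1e-14 * z_km ** -3.2                 # k = 1e-14 m^2 * (z / 1 km)^-3.2

for z in (0.1, 0.5, 0.8, 1.0, 2.0, 5.0, 15.0):
    print(f"z = {z:5.1f} km   k = {permeability(z):.2e} m^2")
# At z = 0.8 km both branches give k ≈ 2e-14 m^2, i.e., the hand-off is smooth.
```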
Saar, M.O., and M. Manga, Seismicity induced by seasonal groundwater recharge at Mt. Hood, Oregon, Earth and Planetary Science Letters, 214, pp. 605-618, 2003. [Download PDF] [View Abstract]Groundwater recharge at Mt. Hood, Oregon, is dominated by spring snow melt which provides a natural large-amplitude and narrow-width pore-fluid pressure signal. Time delays between this seasonal groundwater recharge and seismicity triggered by groundwater recharge can thus be used to estimate large-scale hydraulic diffusivities and the state of stress in the crust. We approximate seasonal variations in groundwater recharge with discharge in runoff-dominated streams at high elevations. We interpolate the time series of number of earthquakes, N, seismic moment, M_o, and stream discharge, Q, and determine cross-correlation coefficients at equivalent frequency bands between Q and both N and M_o. We find statistically significant correlation coefficients at a mean time lag of about 151 days. This time lag and a mean earthquake depth of about 4.5 km are used in the solution to the pressure diffusion equation, under periodic (1 year) boundary conditions, to estimate a hydraulic diffusivity of about 10^-1 m^2/s, a hydraulic conductivity of about K_h ≈ 10^-7 m/s, and a permeability of about k ≈ 10^-15 m^2. Periodic boundary conditions also allow us to determine a critical pore-fluid pressure fraction, ΔP/P_0 ≈ 0.1, of the applied near-surface pore-fluid pressure perturbation, P_0 ≈ 0.1 MPa, that has to be reached at the mean earthquake depth to cause hydroseismicity. The low magnitude of ΔP ≈ 0.01 MPa is consistent with other studies that propose 0.01 ≤ ΔP ≤ 0.1 MPa and suggests that the state of stress in the crust near Mt. Hood could be near critical for failure. Therefore, we conclude that, while earthquakes occur throughout the year at Mt. Hood, elevated seismicity levels along pre-existing faults south of Mt. Hood during summer months are hydrologically induced by a reduction in effective stress.
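As a rough cross-check (an editorial sketch, not the paper's calculation, which also uses the amplitude attenuation constraint), the standard phase-lag solution for periodic pore-pressure diffusion relates the quoted ~151-day time lag and ~4.5 km mean earthquake depth to a diffusivity of the same order of magnitude as the ~10^-1 m^2/s given above:

```python
import math

T = 365.25 * 86400.0       # forcing period: one year (s)
omega = 2.0 * math.pi / T
z = 4500.0                 # mean earthquake depth (m), from the abstract
t_lag = 151.0 * 86400.0    # observed recharge-seismicity time lag (s), from the abstract

# For a sinusoidal pressure boundary condition, the diffusive phase lag at
# depth z is t_lag = z / sqrt(2 * D * omega), hence:
D = z**2 / (2.0 * omega * t_lag**2)
print(f"hydraulic diffusivity ~ {D:.2f} m^2/s")  # ~0.3 m^2/s, i.e., of order 10^-1 m^2/s
```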
Saar, M.O., and M. Manga, Continuum percolation for randomly oriented soft-core prisms, Physical Review E, 65/056131, 2002. [Download PDF] [View Abstract]We study continuum percolation of three-dimensional randomly oriented soft-core polyhedra (prisms). The prisms are biaxial or triaxial and range in aspect ratio over six orders of magnitude. Results for prisms are compared with studies for ellipsoids, rods, ellipses, and polygons and differences are explained using the concept of the average excluded volume, ⟨vex⟩. For large-shape anisotropies we find close agreement between prisms and most of the above-mentioned shapes for the critical total average excluded volume, nc⟨vex⟩, where nc is the critical number density of objects at the percolation threshold. In the extreme oblate and prolate limits simulations yield nc⟨vex⟩≈2.3 and nc⟨vex⟩≈1.3, respectively. Cubes exhibit the lowest-shape anisotropy of prisms, minimizing the importance of randomness in orientation. As a result, the maximum prism value, nc⟨vex⟩≈2.79, is reached for cubes, a value close to nc⟨vex⟩=2.8 for the most equant shape, a sphere. Similarly, cubes yield a maximum critical object volume fraction of φc=0.22. φc decreases for more prolate and oblate prisms and reaches a linear relationship with respect to aspect ratio for aspect ratios greater than about 50. Curves of φc as a function of aspect ratio for prisms and ellipsoids are offset at low-shape anisotropies but converge in the extreme oblate and prolate limits. The offset appears to be a function of the ratio of the normalized average excluded volume for ellipsoids over that for prisms, R = ⟨v̄ex⟩_e/⟨v̄ex⟩_p. This ratio is at its minimum of R=0.758 for spheres and cubes, where φc(sphere)=0.2896 may be related to φc(cube)=0.22 by φc(cube) = 1 − [1 − φc(sphere)]^R = 0.23. With respect to biaxial prisms, triaxial prisms show increased normalized average excluded volumes, ⟨v̄ex⟩, due to increased shape anisotropies, resulting in reduced values of φc. We confirm that Bc = nc⟨vex⟩ = 2Cc applies to prisms, where Bc and Cc are the average number of bonds per object and average number of connections per object, respectively.
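The closing relation quoted above, φc(cube) = 1 − [1 − φc(sphere)]^R, is easy to verify with the numbers given in the abstract (a check added editorially, not part of the original entry):

```python
phi_c_sphere = 0.2896   # critical volume fraction for spheres (from the abstract)
R = 0.758               # ratio of normalized average excluded volumes (from the abstract)

phi_c_cube = 1.0 - (1.0 - phi_c_sphere) ** R
print(f"phi_c(cube) = {phi_c_cube:.3f}")   # ≈ 0.228, consistent with the quoted 0.22-0.23
```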
Saar, M.O., M. Manga, K.V. Cashman, and S. Fremouw, Numerical models of the onset of yield strength in crystal-melt suspensions, Earth and Planetary Science Letters, 187, pp. 367-379, 2001. [Download PDF] [View Abstract]The formation of a continuous crystal network in magmas and lavas can provide finite yield strength, τy, and can thus cause a change from Newtonian to Bingham rheology. The rheology of crystal–melt suspensions affects geological processes, such as ascent of magma through volcanic conduits, flow of lava across the Earth's surface, melt extraction from crystal mushes under compression, convection in magmatic bodies, and shear wave propagation through partial melting zones. Here, three-dimensional numerical models are used to investigate the onset of 'static' yield strength in a zero-shear environment. Crystals are positioned randomly in space and can be approximated as convex polyhedra of any shape, size and orientation. We determine the critical crystal volume fraction, φc, at which a crystal network first forms. The value of φc is a function of object shape and orientation distribution, and decreases with increasing randomness in object orientation and increasing shape anisotropy. For example, while parallel-aligned convex objects yield φc=0.29, randomly oriented cubes exhibit a maximum φc of 0.22. Approximations of plagioclase crystals as randomly oriented elongated and flattened prisms (tablets) with aspect ratios between 1:4:16 and 1:1:2 yield 0.08 < φc < 0.20, respectively. The dependence of φc on particle orientation implies that the flow regime and resulting particle ordering may affect the onset of yield strength. φc in zero-shear environments is a lower bound for φc. Finally the average total excluded volume is used, within its limitation of being a 'quasi-invariant', to develop a scaling relation between τy and φ for suspensions of different particle shapes.
Saar, M.O., and M. Manga, Permeability-porosity relationship in vesicular basalts., Geophysical Research Letters, 26/1, pp. 111-114, 1999. [Download PDF] [View Abstract]The permeability κ and porosity ϕ of vesicular basalts are measured. The relationship between κ and ϕ reflects the formation and emplacement of the basalts and can be related to the crystal and vesicle microstructure obtained by image analysis. Standard theoretical models relating κ and ϕ that work well for granular materials are unsuccessful for vesicular rocks due to the fundamental difference in pore structure. Specifically, κ in vesicular rocks is governed by apertures between bubbles. The difference between calculated and measured κ reflects the small size of these apertures with aperture radii typically O(10) times smaller than the mean bubble radii.
PROCEEDINGS REFEREED
Rangel-Jurado, N., S. Kücük, M. Brehme, R. Lathion, F. Games, and M. Saar, Comparative Analysis on the Techno-Economic Performance of Different Types of Deep Geothermal Systems for Heat Production, European Geothermal Congress 2022, 2022.
Hau, K.P., F. Games, R. Lathion, M. Brehme, and M.O. Saar, On the feasibility of producing geothermal energy at an intended CO2 sequestration field site in Switzerland, European Geothermal Congress 2022, (in press). [View Abstract]The global climate crisis is caused by the increasing concentration of greenhouse gases in the atmosphere. Carbon Capture and Storage (CCS) has been identified as a key technology towards reaching a climate-neutral society. So far, however, the widespread, large-scale deployment of CCS has been prevented, among other things, by its uneconomical nature (Zapantis et al., 2019). To increase the economic efficiency of CCS, the stored CO2 could additionally be used as a circulating fluid for geothermal power production, turning CCS into simultaneous Carbon Capture, Utilization, and Storage (CCUS). The concept of CO2-Plume Geothermal (CPG) for permanently isolating and using CO2 at the same time was first introduced by Randolph and Saar in 2011. So far CPG has not been tested at the field scale. This study aims at demonstrating the feasibility of CPG for a site in Western Switzerland. First, the study conceptually investigates the CPG power capacity at the study site. Next, a conceptual 3D model is created using an interpreted seismic anticline structure in the Triassic sediments of the Swiss Molasse Basin. We conduct multi-phase fluid flow simulations based on the conceptual geologic model to simulate realistic CO2 circulation. Injection and production rates for multiple well configurations are assessed to calculate the expected geothermal energy production. The obtained results will provide an assessment of the general site suitability and storage capacity for long-term CCUS. Also, these results will enable an estimation of the CPG potential and geothermal power output of the site.
Suherlina, L., D. Bruhn, M.O. Saar, Y. Kamah, and M. Brehme, Updated Geological and Structural Conceptual Model in High Temperature Geothermal Field, European Geothermal Congress 2022, 2022.
Kottsova, A., D. Bruhn, M.O. Saar, and M. Brehme, Clogging mechanisms in geothermal operations: theoretical examples and an applied study, European Geothermal Congress 2022, (in press).
Hefny, M., M.B. Setiawan, M. Hammed, C.-Z. Qin, E. Ebigbo, and M.O. Saar, Optimizing fluid(s) circulation in a CO2-based geothermal system for cost-effective electricity generation , European Geothermal Congress 2022, 2022. [Download PDF] [View Abstract]Carbon Capture and permanent geologic Storage (CCS) can be utilized (U) to generate electrical power from low- to medium-enthalpy geothermal systems in so-called CO2-Plume Geothermal (CPG) power plants. The process of electrical power generation entails a closed circulation of the captured CO2 between the deep underground geological formation (where the CO2 is naturally geothermally heated) and the surface power plant (where the CO2 is expanded in a turbine to generate electricity, cooled, compressed, and then combined with the CO2 stream, from a CO2 emitter, before it is reinjected into the subsurface reservoir). In this research, initially a comprehensive techno-economic method (Adams et al., 2021), which coupled the surface power plant and the subsurface reservoirs, supplies the curves for CO2-based geothermal power potential and its Levelized Cost of Electricity (LCOE) as a function of the mass flowrate. This way, the optimal mass flowrate can be determined, which depends on the wellbore configuration and reservoir properties. However, the method does not account for the possibility of unwanted water accumulation in the production wells (liquid loading). In order to account for this in the optimization process, a wellbore-reservoir coupling is necessary. In this research, flow of fluids from the geological formation into the production wellbores has been analysed by optimizing the reservoir modelling. The optimization method has been extended to a set of representative geological realizations (500+). The optimal CO2 mass flowrate provided using genGEO, which maximizes net-electrical power output while minimizing LCOE, can now be related to the risk of liquid loading occurring. Additionally, the resultant reservoir model can forecast the CO2-plume migration, the reservoir pressure streamlines among the wellbores, and the CO2 saturation around the production wellbore(s).
Niederau, J., A. Ebigbo, and M. Saar, Characterization of Subsurface Heat-Transport Processes in the Canton of Aargau Using an Integrative Workflow, Proceedings World Geothermal Congress 2020, (in press). [View Abstract]In a referendum in May 2017, Switzerland decided to phase out nuclear power in favor of further developing renewable energy sources. One of these energy sources is geothermal energy, which, as a base-load technology, fills a niche complementary to solar and wind energy. A known surface-heat-flow anomaly exists in the Canton of Aargau in Northern Switzerland. With measured specific heat-flow values of up to 140 mW m-2, it is an area of interest for deep geothermal energy exploration. In a pilot study, which started in late 2018, we want to characterize the heat-flow distribution in the vicinity of the anomaly in more detail to facilitate future assessment of the geothermal potential of this region. To achieve a complete characterization of the heat-flow values as well as their spatial uncertainty, we develop a workflow comprising: (i) assimilation and homogenization of different types of geologic data, (ii) development of a geological model with focus on heat transport, and (iii) numerical simulations of the dominant heat-transport processes. Due to its nature as a pilot study, the developed workflow needs to be integrative and adaptable. This means that data generated during the course of the project can easily be integrated in the modeling and simulation process, and that the generated workflow should easily be adaptable to other regions for potential future studies. One further goal of this project is that the generated models and simulations provide insights into the nature of the heat-flow anomaly in Northern Switzerland and to test the hypothesis that upward migration of deep geothermal fluids along structural pathways is the origin of this particular heat-flow anomaly.
Rossi, E., B. Adams, D. Vogler, Ph. Rudolf von Rohr, B. Kammermann, and M.O. Saar, Advanced drilling technologies to improve the economics of deep geo-resource utilization, Proceedings of Applied Energy Symposium: MIT A+B, United States, 2020, 8, pp. 1-6, 2020. [Download PDF] [View Abstract]Access to deep energy resources (geothermal energy, hydrocarbons) from deep reservoirs will play a fundamental role over the next decades. However, drilling of deep wells to extract deep geo-resources is extremely expensive. In fact, drilling deep wells into hard, crystalline rocks represents a major challenge for conventional rotary drilling systems, featuring high rates of drill bit wear and requiring frequent drill bit replacements, low penetration rates and poor process efficiency. Therefore, with the aim of improving the overall economics to access deep geo-resources in hard rocks, in this work, we focus on two novel drilling methods, namely: the Combined Thermo-Mechanical Drilling (CTMD) and the Plasma-Pulse Geo-Drilling (PPGD) technologies. The goal of this research and development project is the effective reduction of the costs of drilling in general and particularly regarding accessing and using deep geothermal energy, oil or gas resources. In this work, we present these two novel drilling technologies and focus on evaluating the process efficiency and the drilling performance of these methods, compared to conventional rotary drilling.
Birdsell, D., and M. Saar, Modeling Ground Surface Deformation at the Swiss HEATSTORE Underground Thermal Energy Storage Sites, Proceedings World Geothermal Congress, 2020. [Download PDF] [View Abstract]High temperature (>25 °C) aquifer thermal energy storage (HT-ATES) is a promising technology to store waste heat and reduce greenhouse gas emissions by injecting hot water into the subsurface during the summer months and extracting it for district heating in the winter months. Nevertheless, ensuring the long-term technical success of an HT-ATES project is difficult because it involves complex coupling of fluid flow, heat transfer, and geomechanics. For example, ground surface deformation due to thermo- and poro- elastic deformation could cause damage to nearby infrastructure, and it has not been considered very extensively in the literature. The Swiss HEATSTORE consortium is a group of academic and industrial partners that is developing HT-ATES pilot projects in Geneva and Bern, Switzerland. Possible target formations at the Geneva site include: (a) fractured Cretaceous limestone aquifers interbedded within lower-permeability sedimentary rock and (b) Jurassic reef complex(es), also potentially fractured. In this work we offer numerical modeling support for the Geneva site. A site-specific, hydro-mechanical (HM) model is created, which uses input from the energy systems scenarios and 3D static geological modeling performed by other Swiss consortium partners. Results show that a large uplift (> 5 cm) is possible after one loading cycle, but a sensitivity analysis shows that uplift is decreased to ≤ 0.3 cm if the aquifer permeability is increased or an auxiliary well is included to balance inflow and outflow. Future work includes running coupled thermo-hydro-mechanical (THM) models for several loading and unloading cycles. The THM framework can help inform future decisions about the Swiss HT-ATES sites (e.g. the final site selection within the Geneva basin, well spacing, and operating temperature). It can also be applied to understand surface deformation in the context of geothermal energy, carbon sequestration, and at other ATES sites worldwide.
Samrock, F., A.V. Grayver, B. Cherkose, A. Kuvshinov, and M.O. Saar, Aluto-Langano Geothermal Field, Ethiopia: Complete Image Of Underlying Magmatic-Hydrothermal System Revealed By Revised Interpretation Of Magnetotelluric Data, Proceedings World Geothermal Congress 2020, 2020. [Download PDF] [View Abstract]Aluto-Langano in the Main Ethiopian Rift Valley is currently the only producing geothermal field in Ethiopia and probably the best studied prospect in the Ethiopian Rift. Geoscientific exploration began in 1973 and led to the siting of an exploration well LA3 on top of the volcanic complex. The well was drilled in 1983 to a depth of 2144m and encountered temperatures of 320°C. Since 1990 Aluto has produced electricity, albeit with interruptions. Currently it is undergoing a major expansion phase with the plan to generate about 70MWe from eight new wells, until now two of them have been drilled successfully. Geophysical exploration at Aluto involved magnetotelluric (MT) soundings, which helped delineate the clay cap atop of the hydrothermal reservoir. However, until now geophysical studies did not succeed in imaging the proposed magmatic heat source that would drive the observed hydrothermal convection. For this study, we inverted 165 of a total of 208 MT stations that were measured over the entire volcanic complex in three independent surveys by the Geological Survey of Ethiopia and ETH Zurich, Switzerland. For the inversion, we used a novel 3-D inverse solver that employs adaptive finite element techniques, which allowed us to accurately model topography and account for varying lateral and vertical resolution. We inverted MT phase tensors. This transfer function is free of galvanic distortions that have long been recognized as an obstacle in MT inversion. Our recovered model shows, for the first time, the entire magmatic-hydrothermal system under the geothermal field. The up-flow of melt is structurally controlled by extensional rift faults and sourced by a lower crustal basaltic mush reservoir. Productive wells were all drilled into a weak fault zone below the clay cap. The productive reservoir is underlain by an electrically conductive upper-crustal feature, which we interpret as a highly crystalline rhyolitic mush zone, acting as the main heat source. Our results demonstrate the importance of a dense MT site distribution and state-of-the-art inversion tools in order to obtain reliable and complete subsurface models of high enthalpy systems below volcanic geothermal prospects.
Guglielmetti, L., P. Alt-Epping, D. Birdsell, F. de Oliveira, L. Diamond, T. Driesner, O. Eruteya, P. Hollmuller, et al., and M.O. Saar, HEATSTORE SWITZERLAND: New Opportunities of Geothermal District Heating Network Sustainable Growth by High Temperature Aquifer Thermal Energy Storage Development, World Geothermal Congress, 2020. [View Abstract]HEATSTORE is a GEOTHERMICA ERA-NET co-funded project, aiming at developing High Temperature (~25°C to ~90°C) Underground Thermal Energy Storage (HT-UTES) technologies by lowering the cost, reducing risks, improving the performance, and optimizing the district heating network demand side management at 6 new pilot and demonstration sites, two of which are in Switzerland, plus 8 case studies. The European HEATSTORE consortium includes 24 contributing partners from 9 countries, composing a mix of scientific research institutes and private companies. The Swiss consortium, developing HEATSTORE in Switzerland, involves of two industrial partners (Services Industriels de Geneva - SIG and Energie Wasser Bern - EWB) and four academic partners (Universities of Geneva, Bern, Neuchâtel and ETH Zurich), with support from the Swiss Federal Office of Energy. The aims are to develop two demonstration projects for High Temperature Aquifer Thermal Energy Storage (HT-ATES) in the cantons of Geneva and Bern such that industrial waste heat can be converted into a resource. This paper presents the results of the first year of activities in the Swiss projects. The activities planned cover subsurface characterization, energy system analysis, surface implementation design, legal framework improvement and business modelling to ensure the sustainability of the projects. This approach is supported by large industrial investments for subsurface characterization. Two wells, down to 1200m below surface level (bsl) are being drilled in the Geneva area to tap potential targets in the carbonate Mesozoic units and at least three additional wells, down to 500m bsl will target the Molasse sediments in the Bern area next year. These wells allow subsurface exploration and characterization and will provide data, used for detailed THMC modelling to assess the thermal energy storage potential at the two sites in Switzerland. The results of such numerical modelling are combined with energy system analysis to quantify the waste heat availability and heat demand and hence optimize the production and injection operations. The outcomes of the coupled assessments will aid in designing the integration of the new installations into the district- heating network. Legal framework improvements, based on complete technical evaluation and on the best-practice sharing with the other European partners, will be an enabling tool to accelerate the implementation of the HT-ATES systems, while business modelling helps calibrate the economic feasibility of the projects and helps industrial partners to plan future investments.
Lima, M., P. Schädle, D. Vogler, M. Saar, and X.-Z. Kong, A Numerical Model for Formation Dry-out During CO2 Injection in Fractured Reservoirs Using the MOOSE Framework: Implications for CO2-based Geothermal Energy Extraction, Proceedings of the World Geothermal Congress 2020, Reykjavík, Iceland, (in press). [View Abstract]Injection of supercritical carbon dioxide (scCO2) into geological reservoirs is involved in Carbon Capture, Utilization, and Storage (CCUS), such as geological CO2 storage, and Enhanced Geothermal Systems (EGS). The potential physico-chemical interactions between the dry scCO2, the reservoir fluid, and rocks may cause formation dry-out, where mineral precipitates due to continuous evaporation of water into the scCO2 stream. This salt precipitation may impair the rock bulk permeability and cause a significant decrease in the well injectivity. Formation dry-out and the associated salt precipitation during scCO2 injection into porous media have been investigated in previous studies by means of numerical simulations and laboratory experiments. However, few studies have focused on the dry-out effects in fractured rocks in particular, where the mass transport is strongly influenced by the fracture aperture distribution. In this study, we numerically model the dry-out processes occurring during scCO2 injection into brine-saturated single fractures and evaluate the potential of salt precipitation. Fracture aperture fields are photogrammetrically determined with fracture geometries of naturally fractured granite cores from the Deep Underground Geothermal (DUG) Lab at the Grimsel Test Site (GTS), in Switzerland. We use an open-source, parallel finite element framework to numerically model two-phase flow through a 2D fracture plane. Under in-situ reservoir conditions, the brine is displaced by dry scCO2 and also evaporates into the CO2 stream. The fracture permeability is calculated with the local cubic law. Additionally, we extend the numerical model by the Young-Laplace equation to determine the aperture-based capillary pressure. Finally, as future work, the precipitation of salt will be modelled by employing a uniform mineral growth approach, where the local aperture uniformly decreases with the increase in precipitated mineral volume. The numerical simulations assist in understanding the long-term behaviour of reservoir injectivity during subsurface applications that involve scCO2 injection, including CO2-based geothermal energy extraction.
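For readers unfamiliar with the two relations named in this abstract, the sketch below (an editorial illustration; the fluid properties and the 100 µm aperture are assumed values, not data from the study) shows how a local fracture aperture maps to an equivalent permeability via the local cubic law and to a capillary entry pressure via the Young-Laplace equation for parallel plates:

```python
import math

def cubic_law_permeability(aperture_m: float) -> float:
    """Local cubic law: equivalent permeability of a parallel-plate opening, k = a^2 / 12."""
    return aperture_m**2 / 12.0

def young_laplace_pc(aperture_m: float, ift: float = 0.03, contact_angle_deg: float = 0.0) -> float:
    """Capillary entry pressure of a parallel-plate opening, Pc = 2 * gamma * cos(theta) / a.
    The interfacial tension ift (N/m) and contact angle are assumed brine-CO2 values."""
    return 2.0 * ift * math.cos(math.radians(contact_angle_deg)) / aperture_m

a = 100e-6  # 100 micrometre local aperture (assumed)
print(f"k  = {cubic_law_permeability(a):.2e} m^2")   # ~8.3e-10 m^2
print(f"Pc = {young_laplace_pc(a):.0f} Pa")          # ~600 Pa
```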
Hefny, M., C.-Z. Qin, A. Ebigbo, J. Gostick, M.O. Saar, and M. Hammed, CO2-Brine flow in Nubian Sandstone (Egypt): Pore-Network Modeling using Computerized Tomography Imaging, European Geothermal Congress (EGC), 2019. [Download PDF] [View Abstract]The injection of CO2 into the highly permeable Nubian Sandstone of a depleted oil field in the central Gulf of Suez Basin (Egypt) is an effective way to extract enthalpy from deep sedimentary basins while sequestering CO2, forming a so-called CO2-Plume Geothermal (CPG) system. Subsurface flow models require constitutive relationships, including relative permeability and capillary pressure curves, to determine the CO2-plume migration at a representative geological scale. Based on the fluid-displacement mechanisms, quasi-static pore-network modeling has been used to simulate the equilibrium positions of fluid-fluid interfaces, and thus determine the capillary pressure and relative permeability curves. 3D images with a voxel size of 650 nm3 of a Nubian Sandstone rock sample have been obtained using Synchrotron Radiation X-ray Tomographic Microscopy. From the images, topological properties of pores/throats were constructed. Using a pore-network model, we performed a cycle of primary drainage of quasi-static invasion to quantify the saturation of scCO2 at the point of a breakthrough with emphasis on the relative permeability–saturation relationship. We compare the quasi-static flow simulation results from the pore-network model with experimental observations. It shows that the Pc-Sw curve is very similar to those observed experimentally.
Rossi, E., S. Jamali, M.O. Saar, and Ph. Rudolf von Rohr, Laboratory and field investigation of a combined thermo-mechanical technology to enhance deep geothermal drilling, 81st EAGE Conference & Exhibition 2019, Jun 2019, pp. 1-5, 2019. [Download PDF] [View Abstract]The development of deep geothermal systems to boost global electricity production relies on finding cost-effective solutions to enhance the drilling performance in hard rock formations. In this work, we investigate a novel drilling method combining thermal spallation and conventional drilling. This method aims to reduce the rock removal efforts of conventional drilling by thermally assisting the drilling process by flame jets. Laboratory experiments are conducted on the combined drilling concept by studying the effects of flame treatments on the mechanical strength of hard and soft rocks. In addition, investigation on the interaction between the rock and a cutting tool, permits to show that the combined method can drastically improve the drilling performance in terms of rate of penetration, bit wearing and the required mechanical energy to remove the material. As a proof-of-concept of the method, a field demonstration is presented, where the technology is implemented in a conventional drill rig in order to show the process feasibility as well as to quantify its performance under realistic conditions.
Lima, M.M., P. Schädle, D. Vogler, M.O. Saar, and X.-Z. Kong, Impact of Effective Normal Stress on Capillary Pressure in a Single Natural Fracture, European Geothermal Congress 2019, pp. 1-9, 2019. [View Abstract]Multiphase fluid flow through rock fractures occurs in many reservoir applications such as geological CO2 storage, Enhanced Geothermal Systems (EGS), nuclear waste disposal, and oil and gas production. However, constitutive relations of capillary pressure versus fluid saturation, particularly considering the change of fracture aperture distributions under various stress conditions, are poorly understood. In this study, we use fracture geometries of naturally-fractured granodiorite cores as input for numerical simulations of two-phase brine displacement by supercritical CO2 under various effective normal stress conditions. The aperture fields are first mapped via photogrammetry, and the effective normal stresses are applied by means of a Fast Fourier Transform (FFT)-based convolution numerical method. Throughout the simulations, the capillary pressure is evaluated from the local aperture. Two approaches to obtain the capillary pressure are used for comparison: either directly using the Young-Laplace equation, or the van Genuchten equation fitted from capillary pressure-saturation relations generated using the pore-occupancy model. Analyses of the resulting CO2 injection patterns and the breakthrough times enable investigation of the relationships between the effective normal stress, flow channelling and aperture-based capillary pressures. The obtained results assist the evaluation of two-phase flow through fractures in the context of various subsurface applications.
Ma, J., M.O. Saar, and X.-Z. Kong, Estimation of Effective Surface Area: A Study on Dolomite Cement Dissolution in Sandstones, Proceedings World Geothermal Congress 2020, 2019.
von Planta, C., D. Vogler, X. Chen, M.G.C. Nestola, M.O. Saar, and R. Krause, Fluid-structure interaction with a parallel transfer operators to model hydro-mechanical processes in heterogeneous fractures, International Conference on Coupled Processes in Fractured Geological Media (CouFrac 2018), pp. 1-4, 2018. [View Abstract]Contact mechanics and fluid flow in rough fractures are actively researched topics in reservoir engineering (e.g., enhanced geothermal systems, CO2 sequestration and oil- and gas-extraction) to estimate reservoir productivity or leak-off. Mechanical and fluid flow processes in reservoirs are often tightly coupled and exhibit a strongly non-linear behavior. Understanding hydro-mechanically coupled behavior in fractures is complicated further by highly variable fracture geometries [3, 4]. We present a simulation approach for hydro-mechanical processes in rough fracture geometries with variational parallel transfer operators. The contact problem at the boundary between the two rough fracture surfaces is solved using a finite element formulation of linear elasticity on an unstructured mesh. The contact formulation uses a mortar method with Lagrange multipliers and does not use a penalty parameter or other regularizations. For the Navier-Stokes formulation of the fluid we use a finite element formulation on a structured grid. Information between the meshes is transferred via the variational transfer operators, whereby the solid interacts with the fluid by enforcing velocity constraints at the solid-fluid interface and the fluid interacts with the solid by converting the fluid velocity into a pressure force acting on the solid.
Myre, J.M., E. Frahm, D.J. Lilja, and M.O. Saar, Solving Large Dense Least-Squares Problems: Preconditioning to Take Conjugate Gradient From Bad in Theory, to Good in Practice, IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW), pp. 987-995, 2018. [Download PDF] [View Abstract]Since its inception by Gauss, the least-squares problem has frequently arisen in science, mathematics, and engineering. Iterative methods, such as Conjugate Gradient Normal Residual (CGNR), have been popular for solving sparse least-squares problems, but have historically been regarded as undesirable for dense applications due to poor convergence. We contend that this traditional "common knowledge" should be reexamined. Preconditioned CGNR, and perhaps other iterative methods, should be considered alongside standard methods when addressing large dense least-squares problems. In this paper we present TNT, a dynamite method for solving large dense least-squares problems. TNT implements a Cholesky preconditioner for the CGNR fast iterative method. The Cholesky factorization provides a preconditioner that, in the absence of round-off error, would yield convergence in a single iteration. Through this preconditioner and good parallel scaling, TNT provides improved performance over traditional least-squares solvers allowing for accelerated investigations of scientific and engineering problems. We compare a parallel implementations of TNT to parallel implementations of other conventional methods, including the normal equations and the QR method. For the small systems tested (15000 × 15000 or smaller), it is shown that TNT is capable of producing smaller solution errors and executing up to 16× faster than the other tested methods. We then apply TNT to a representative rock magnetism inversion problem where it yields the best solution accuracy and execution time of all tested methods.
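A minimal NumPy/SciPy sketch of the general idea behind TNT, namely CGNR on the normal equations with a Cholesky preconditioner, is given below; this is an editorial illustration of the technique, not the authors' parallel implementation, and the random test problem is purely illustrative.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def preconditioned_cgnr(A, b, tol=1e-10, max_iter=100):
    """Solve min ||Ax - b||_2 via the normal equations A^T A x = A^T b,
    preconditioned with a Cholesky factorization of A^T A; absent round-off,
    the exact factor yields convergence in a single iteration."""
    AtA, Atb = A.T @ A, A.T @ b
    chol = cho_factor(AtA)                 # the Cholesky preconditioner
    x = np.zeros(A.shape[1])
    r = Atb - AtA @ x                      # normal-equation residual
    z = cho_solve(chol, r)                 # apply the preconditioner
    p = z.copy()
    for _ in range(max_iter):
        Ap = AtA @ p
        alpha = (r @ z) / (p @ Ap)
        x += alpha * p
        r_new = r - alpha * Ap
        if np.linalg.norm(r_new) < tol:
            break
        z_new = cho_solve(chol, r_new)
        beta = (r_new @ z_new) / (r @ z)
        p = z_new + beta * p
        r, z = r_new, z_new
    return x

# Tiny usage example on a random dense least-squares problem.
rng = np.random.default_rng(0)
A = rng.standard_normal((500, 100))
b = rng.standard_normal(500)
x = preconditioned_cgnr(A, b)
print(np.linalg.norm(A.T @ (A @ x - b)))   # ~0: least-squares optimality condition
```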
Rossi, E., M.A. Kant, O. Borkeloh, M.O. Saar, and Ph. Rudolf von Rohr, Experiments on Rock-Bit Interaction During a Combined Thermo-Mechanical Drilling Method, 43rd Workshop on Geothermal Reservoir Engineering, SGP-TR-213, 2018. [View Abstract]The development of deep geothermal systems to boost global electricity production relies on finding cost-effective solutions to enhance the drilling performance in hard rock formations. Conventional drilling methods, based on mechanical removal of the rock material, are characterized by high drill bit wear rates and low rates of penetration (ROP) in hard rocks, resulting in high drilling costs, which account for more than 60% of the overall costs for a geothermal project. Therefore, alternative drilling technologies are investigated worldwide with the aim of improving the drilling capabilities and therewith enhancing the exploitation of deep geothermal resources. In this work, a promising drilling method, where conventional rotary drilling is thermally assisted by a flame-jet, is evaluated. Here, the thermal weakening of the rock material, performed by flame-jets, facilitates the subsequent mechanical removal performed by conventional cutters. The flame moves on the rock surface and thermally treats the material by inducing high thermal gradients and high temperatures, therewith reducing the mechanical properties of the rock. This would result in reduced forces on the drill bits, leading to lower bit wear rates and improved rates of penetration and therefore significantly decreasing the drilling costs, especially for deep-drilling projects. In this work, the feasibility of the proposed drilling method is assessed by comparing the rock-bit interaction in sandstone and granite under baseline and thermally treated conditions. Rock abrasivity, tool penetration and cutting forces are investigated to quantify the rock-bit interaction in granite and sandstone under baseline conditions and after the thermal treatment. The results highlight the dominant mechanisms regulating the rock removal. The removal performance of the tool in the granite material is found to be greatly enhanced by the thermal treatment both in terms of volume removed from the sample and worn volume at the tool's tip. On the other hand, the sandstone material, after a thermal treatment, yields significantly lower wearing of the cutting tool. Thus, these results allow important conclusions to be drawn regarding the achievable drilling performance of the combined thermo-mechanical drilling method and its application in the field.
Garapati, N., B.M. Adams, J.M. Bielicki, P. Schaedle, J.B. Randolph, T.H. Kuehn, and M.O. Saar, A Hybrid Geothermal Energy Conversion Technology - A Potential Solution for Production of Electricity from Shallow Geothermal Resources, Energy Procedia, 114, pp. 7107-7117, 2017. [Download PDF] [View Abstract]Geothermal energy has been successfully employed in Switzerland for more than a century for direct use but presently there is no electricity being produced from geothermal sources. After the nuclear power plant catastrophe in Fukushima, Japan, the Swiss Federal Assembly decided to gradually phase out the Swiss nuclear energy program. Deep geothermal energy is a potential resource for clean and nearly CO2-free electricity production that can supplant nuclear power in Switzerland and worldwide. Deep geothermal resources often require enhancement of the permeability of hot-dry rock at significant depths (4-6 km), which can induce seismicity. The geothermal power projects in the Cities of Basel and St. Gallen, Switzerland, were suspended due to earthquakes that occurred during hydraulic stimulation and drilling, respectively. Here we present an alternative unconventional geothermal energy utilization approach that uses shallower, lower-temperature, naturally permeable regions, that drastically reduce drilling costs and induced seismicity. This approach uses geothermal heat to supplement a secondary energy source. Thus this hybrid approach may enable utilization of geothermal energy in many regions in Switzerland and elsewhere, that otherwise could not be used for geothermal electricity generation. In this work, we determine the net power output, energy conversion efficiencies, and economics of these hybrid power plants, where the geothermal power plant is actually a CO2-based plant. Parameters varied include geothermal reservoir depth (2.5-4.5 km) and turbine inlet temperature (100-220 °C) after auxiliary heating. We find that hybrid power plants outperform two individual, i.e., stand-alone geothermal and waste-heat power plants, where moderate geothermal energy is available. Furthermore, such hybrid power plants are more economical than separate power plants.
Vogler, D., R.R. Settgast, C.S. Sherman, V.S. Gischig, R. Jalali, J.A. Doetsch, B. Valley, K.F. Evans, F. Amann, and M.O. Saar, Modeling the Hydraulic Fracture Stimulation performed for Reservoir Permeability Enhancement at the Grimsel Test Site, Switzerland, Proceedings of the 42nd Workshop on Geothermal Reservoir Engineering, Stanford University, Stanford, CA, USA, February 13-15, 2017. [Download PDF] [View Abstract]In-situ hydraulic fracturing has been performed on the decameter scale in the Deep Underground rock Laboratory (DUG Lab) at the Grimsel Test Site (GTS) in Switzerland in order to measure the minimum principal stress magnitude and orientation. Conducted tests were performed in a number of boreholes, with 3–4 packer intervals in each borehole subjected to repeated injection. During each test, fluid injection pressure, injection flow rate and microseismic events were recorded, amongst others. Fully coupled 3D simulations have been performed with LLNL's GEOS simulation framework. The methods applied in the simulation of the experiments address physical processes such as rock deformation/stress, LEFM fracture mechanics, fluid flow in the fracture and matrix, and the generation of micro-seismic events. This allows us to estimate the distance of fracture penetration during the injection phase and to correlate the simulated injection pressure with experimental data during injection, as well as post shut-in. Additionally, the extent of the fracture resulting from simulations of fracture propagation and microseismic events are compared with the spatial distribution of the microseismic events recorded in the experiment.
Rossi, E., M. Kant, F. Amann, M.O. Saar, and P. Rudolf von Rohr, The effects of flame-heating on rock strength: Towards a new drilling technology, Proceedings of the American Rock Mechanics Association (ARMA) Symposium, San Francisco, USA, June 25-28, 2017. [View Abstract]The applicability of a combined thermo-mechanical drilling technique is investigated. The working principle of this method is based on the implementation of a heat source as a means to either provoke thermal spallation on the surface or to weaken the rock material, when spallation is not possible. Thermal spallation drilling has already been proven to work in hard crystalline rocks; however, several difficulties hamper its application for deep resource exploitation. In order to prove the effectiveness of a combined thermo-mechanical drilling method, the forces required to export the treated sandstone material with a polycrystalline diamond compact (PDC) cutter are analyzed. The main differences between oven and flame treatments are studied by comparing the resulting strength after heat-treating the samples up to temperatures of 650 °C and for heating rates ranging from 0.17 °C/s to 20 °C/s. For moderate temperatures (300-450 °C) the unconfined compressive strength after flame treatments monotonously decreased, opposed to the hardening behavior observed after oven treatments. Thermally induced intra-granular cracking and oxidation patterns served as an estimation of the treated depth due to the flame heat treatment. Therefore, conclusions on preferred operating conditions of the drilling system are drawn based on the experimental results.
Garapati, N., J. Randolph, S. Finsterle, and M.O. Saar, Simulating Reinjection of Produced Fluids Into the Reservoir, Proceedings of the 41st Workshop on Geothermal Reservoir Engineering, Stanford University, Stanford, CA, February 2016. [Download PDF] [View Abstract]In order to maintain reservoir pressure and stability and to reduce reservoir subsidence, reinjection of produced fluids into the reservoir is common practice. Furthermore, studies by Karvounis and Jenny (2012; 2014), Buscheck et al. (2015), and Saar et al. (2015) found that preheating the working fluid in shallow reservoirs and then injecting the fluid into a deep reservoir can increase the reservoir life span, the heat extraction efficiency, and the economic gains of a geothermal power plant. We have modified the TOUGH2 simulator to enable the reinjection of produced fluids with the same chemical composition as the produced fluid and with either a prescribed or the production temperature. The latter capability is useful, for example, for simulating injection of produced fluid into another (e.g., deeper) reservoir without energy extraction. Each component of the fluid mixture, produced from the production well, is reinjected into the reservoir as an individual source term. In the current study, we investigate a CO2-based geothermal system and focus on the effects of reinjecting small amounts of brine that are produced along with the CO2. Brine has a significantly smaller mobility (inverse kinematic viscosity) than supercritical CO2 at a given temperature and thus accumulates near the injection well. Such brine accumulation reduces the relative permeability for the CO2 phase, which in turn increases the pore-fluid pressure around the injection well and reduces the well injectivity index. For this reason, and as injection of two fluid phases is problematic, we recommend removal of any brine from the produced fluid before the cooled CO2 is reinjected into the reservoir. We also study the performance of a multi-level geothermal system (Karvounis and Jenny, 2012; 2014; Saar et al., 2015) by injection of preheated brine from a shallow reservoir (1.5-3 km) into a deep reservoir (5 km). We find that preheating brine at the shallow reservoir extends the lifespan of the deep, hot reservoir, thereby increasing the total power production.
Buscheck, T.A., J.M. Bielicki, M. Chen, Y. Sun, Y. Hao, T.A. Edmunds, J.B. Randolph, and M.O. Saar, Multi-Fluid Sedimentary Geothermal Energy Systems for Dispatchable Renewable Electricity, Proceedings of the World Geothermal Congress 2015, Melbourne, Australia, April 19-25, 2015. [Download PDF] [View Abstract]Sedimentary geothermal resources typically have lower temperatures and energy densities than hydrothermal resources, but they often have higher permeability and larger areal extents. Consequently, spacing between injection and production wells is likely to be wider in sedimentary resources, which can result in more fluid pressure loss, increasing the parasitic cost of powering the working fluid recirculation system, compared to hydrothermal systems. For hydrostatic geothermal resources, extracting heat requires that brine be lifted up production wells, such as with submersible pumps, which can consume a large portion of the electricity generated by the power plant. CO2 is being considered as an alternative working fluid (also termed a supplemental fluid) because its advantageous thermophysical properties reduce this parasitic cost, and because of the synergistic benefit of geologic CO2 sequestration (GCS). We expand on this idea by: (1) adding the option for multiple supplemental fluids (N2 as well as CO2) and injecting these fluids to create overpressured reservoir conditions, (2) utilizing up to three working fluids: brine, CO2, and N2 for heat extraction, (3) using a well pattern designed to store supplemental fluid and pressure, and (4) time-shifting the parasitic load associated with fluid recirculation to provide ancillary services (frequency regulation, load following, and spinning reserve) and bulk energy storage (BES). Our approach uses concentric rings of horizontal wells to create a hydraulic divide to store supplemental fluid and pressure, much like a hydroelectric dam. While, as with any geothermal system, electricity production can be run as a base-load power source, production wells can alternatively be controlled like a spillway to supply power when demand is greatest. For conventional geothermal power, the parasitic power load for fluid recirculation is synchronous with gross power output. In contrast, our approach time-shifts much of this parasitic load, which is dominated by the power required to pressurize and inject brine. Thus, most of the parasitic load can be scheduled during minimum power demand or when, due to its inherent variability, there is a surplus of renewable energy on the grid. Energy storage is almost 100 percent efficient because it is achieved by time-shifting the parasitic load. Consequently, net power can nearly equal gross power during peak demand so that geothermal energy can be used as a form of high-efficiency BES at large scales. A further benefit of our approach is that production rates (per well) can exceed the capacity of submersible pumps and thereby take advantage of the productivity of horizontal wells and better leverage well costs, which often constitute a major portion of capital costs. Our vision is an efficient, dispatchable, renewable electricity system approach that facilitates deep market penetration of all renewable energy sources: wind, solar, and geothermal, while utilizing and permanently storing CO2 in a commercially viable manner.
Saar, M.O., Th. Buscheck, P. Jenny, N. Garapati, J.B. Randolph, D. Karvounis, M. Chen, Y. Sun, and J.M. Bielicki, Numerical Study of Multi-Fluid and Multi-Level Geothermal System Performance, Proceedings of the World Geothermal Congress 2015, Melbourne, Australia, April 19-25, 2015. [Download PDF] [View Abstract]We introduce the idea of combining multi-fluid and multi-level geothermal systems with two reservoirs at depths of 3 and 5 km. In the base case, for comparison, the two reservoirs are operated independently, each as a multi-fluid (brine and carbon dioxide) reservoir that uses a number of horizontal, concentric injection and production well rings. When the shallow and the deep reservoirs are operated in an integrated fashion, in the shallow reservoir, power is produced only from the carbon dioxide (CO2), while the brine is geothermally preheated in the shallow multi-fluid reservoir, produced, and then reinjected at the deeper reservoir's brine injectors. The integrated reservoir scenarios are further subdivided into two cases: In one scenario, both brine (preheated in the shallow reservoir) and CO2 (from the surface) are injected separately into the deeper reservoir's appropriate injectors and both fluids are produced from their respective deep reservoir producers to generate electricity. In the other scenario, only preheated brine is injected into, and produced from, the deep reservoir for electric power generation. We find that integrated, vertically stacked, multi-fluid geothermal systems can result in improved system efficiency when power plant lifespans exceed ~30 years. In addition, preheating of brine before deep injection reduces brine overpressurization in the deep reservoir, reducing the risk of fluid-induced seismicity. Furthermore, CO2-Plume Geothermal (CPG) power plants in general, and the multi-fluid, multi-level geothermal system described here in particular, assign a value to CO2, which in turn may partially or fully offset the high costs of carbon capture at fossil-energy power plants and of CO2 injection, thereby facilitating economically feasible carbon capture and storage (CCS) operations that render fossil-energy power plants green. From a geothermal power plant perspective, the system results in a CO2 sequestering geothermal power plant with a negative carbon footprint. Finally, energy return on well costs and operational flexibility can be greater for integrated geothermal reservoirs, providing additional options for bulk and thermal energy storage, compared to equivalent, but separately operated reservoirs. System economics can be enhanced by revenues related to efficient delivery of large-scale bulk energy storage and ancillary services products (frequency regulation, load following, and spinning reserve), which are essential for electric grid integration of intermittently available renewable energy sources, such as wind and solar. These capabilities serve to stabilize the electric grid and promote development of all renewable energies, beyond geothermal energy.
Garapati, N., J.B. Randolph, and M.O. Saar, Superheating Low-Temperature Geothermal Resources to Boost Electricity Production, Proceedings of the 40th Workshop on Geothermal Reservoir Engineering, Stanford University, Stanford, CA, USA, January 26-28, 2015, 2, pp. 1210-1221, 2015. [Download PDF] [View Abstract]Low-temperature geothermal resources (<150°C) are typically more effective for direct use, i.e., district heating, than for electricity production. District or industrial heating, however, requires that the heat resource is close to residential or industrial demands in order to be efficient and thus economic. However, if a low-temperature geothermal resource is combined with an additional or secondary energy source that is ideally renewable, such as solar, biomass, biogas, or waste heat, but could be non-renewable, such as natural gas, the thermodynamic quality of the energy source increases, potentially enabling usage of the combined energy sources for electricity generation. Such a hybrid geothermal power plant therefore offers thermodynamic advantages, often increasing the overall efficiency of the combined system above that of the additive power output from two stand-alone, separate plants (one using geothermal energy alone and the other using the secondary energy source alone) for a wide range of operating conditions. Previously, fossil superheated and solar superheated hybrid power plants have been considered for brine/water based geothermal systems, especially for enhanced geothermal systems. These previous studies found that the cost of electricity production can typically be reduced when a hybrid plant is operated, compared to operating individual plants. At the same time, using currently-available high-temperature energy conversion technologies reduces the time and cost required for developing other less-established energy conversion technologies. Adams et al. (2014) found that CO2 as a subsurface working fluid produces more net power than when brine systems are employed at low to moderate reservoir depths, temperatures, and permeabilities. Therefore in this work, we compare the performance of hybrid geothermal power plants that use brine or, importantly, CO2 (which constitutes the new research component) as the subsurface working fluid, irrespective of the secondary energy source used for superheating, over a range of parameters. These parameters include geothermal reservoir depth and superheated fluid temperature before passing through the energy conversion system. The hybrid power plant is modeled using two software packages: 1) TOUGH2 (Pruess, 2004), which is employed for the subsurface modeling of geothermal heat and fluid extraction as well as for fluid reinjection into the reservoir, and 2) Engineering Equation Solver (EES), which is used to simulate well bore fluid flow and surface power plant performance. We find here that for geothermal systems combined with a secondary energy source (i.e., a hybrid system), the maximum power production for a given set of reservoir parameters is highly dependent on the configuration of the power system. The net electricity production from a hybrid system is larger than that from the individual plants combined for all scenarios considered for brine systems and for low-grade secondary energy resources for CO2-based geothermal systems.
Garapati, N., J.B. Randolph, J.L. Valencia Jr., and M.O. Saar, Design of CO2-Plume Geothermal (CPG) subsurface system for various geologic parameters, Proceedings of the Fifth International Conference on Coupled Thermo-Hydro-Mechanical-Chemical (THMC) Processes in Geosystems: Petroleum and Geothermal Reservoir Geomechanics and Energy Resource Extraction, Salt Lake City, UT, 2015. [Download PDF] [View Abstract]Recent geotechnical research shows that geothermal heat can be efficiently mined by circulating carbon dioxide through naturally permeable rock formations -- a method called CO2 Plume Geothermal -- the same geologic reservoirs that are suitable for deep saline aquifer CO2 sequestration or enhanced oil recovery. This paper describes the effect of thermal drawdown on reservoir pressure buildup during sequestration operations, revealing that geothermal heat mining can decrease overpressurization by 10% or more.
Bailey, P., J. Myre, S.C.D. Walsh, D.J. Lilja, and M.O. Saar, Accelerating Lattice Boltzmann Fluid Flow Simulations Using Graphics Processors, Proc. of the 38th International Conference on Parallel Processing (ICPP), IEEE, pp. 550-557, 2009. [Download PDF] [View Abstract]Lattice Boltzmann Methods (LBM) are used for the computational simulation of Newtonian fluid dynamics. LBM-based simulations are readily parallelizable; they have been implemented on general-purpose processors, field-programmable gate arrays (FPGAs), and graphics processing units (GPUs). Of the three methods, the GPU implementations achieved the highest simulation performance per chip. With memory bandwidth of up to 141 GB/s and a theoretical maximum floating point performance of over 600 GFLOPS, CUDA-ready GPUs from NVIDIA provide an attractive platform for a wide range of scientific simulations, including LBM. This paper improves upon prior single-precision GPU LBM results for the D3Q19 model by increasing GPU multiprocessor occupancy, resulting in an increase in maximum performance by 20%, and by introducing a space-efficient storage method which reduces GPU RAM requirements by 50% at a slight detriment to performance. Both GPU implementations are over 28 times faster than a single-precision quad-core CPU version utilizing OpenMP.
PROCEEDINGS NON-REFEREED
Ezzat, M., J. Börner, D. Vogler, V. Wittig, B. Kammermann, J. Biela, and M. O. Saar, Lithostatic Pressure Effects on the Plasma-Pulse Geo-Drilling (PPGD), 48 EPS Conference on Plasma Physics , 2022. [Download PDF] [View Abstract]Drilling cost is one of the main challenges facing the utilization of deep closed-loop geothermal systems, so-called Advanced Geothermal Systems (AGS). Plasma-Pulse GeoDrilling (PPGD) is a novel drilling technology that uses high-voltage electric pulses to damage the rock without mechanical abrasion. PPGD may reduce the drilling costs significantly compared to mechanical rotary drilling, according to a comparative analysis that assumes ambient operating conditions. However, the level of performance of PPGD under deep wellbore conditions of higher pressures and temperatures is still ambiguous. Therefore, this contribution presents preliminary experiment results from the laboratory that investigate the effect of high lithostatic pressures of up to 150 MPa, equivalent to a depth of ∼5.7 km, on the performance of PPGD.
Hau, K.P., F. Games, R. Lathion, and M.O. Saar, Modelling Potential Geological CO2 Storage combined with CO2-Plume Geothermal (CPG) Energy Extraction in Switzerland, International Petroleum Technology Conference 2022, 2022. [Download PDF] [View Abstract]For many CO2-emitting industrial sectors, such as the cement and chemical industry, Carbon Capture and Storage (CCS) will be necessary to reach any set climate target. CCS on its own is a very cost-intensive technology. Instead of considering CO2 as a waste to be disposed of, we propose to consider CO2 as a resource. The utilisation of CO2 in so-called CO2 Plume Geothermal (CPG) systems generates revenue by extracting geothermal energy, while permanently storing CO2 in the geological subsurface. To the best of our knowledge, this pioneer investigation is the first CCUS simulation feasibility study in Switzerland. Among others, we investigated the concept of injecting and circulating CO2 for geothermal power generation purposes from potential CO2 storage formations (saline reservoirs) in the Western part of the Swiss Molasse Basin ("Muschelkalk" and "Buntsandstein" formation). Old 2D-seismic data indicates a potential anticline structure in proximity of the Eclépens heat anomaly. Essentially, this conceptual study helps assess its potential CO2 storage capacity range and will be beneficial for future economical assessments. The interpretation of the intersected 2D seismic profiles reveals an apparent anticline structure that was integrated into a geological model with a footprint of 4.35 x 4.05 km2. For studying the dynamic reservoir behaviour during the CO2 circulation, we considered: (1) the petrophysical rock properties uncertainty range, (2) the injection and physics of a two-phase (CO2 and brine) fluid system, including the relative permeability characterisation, fluid model composition, the residual and solubility CO2 trapping, and (3) the thermophysical properties of resident-formation brine and the injected CO2 gas. Our study represents a first-order estimation of the expected CO2 storage capacity range at a possible anticline structure in two potential Triassic reservoir formations in the Western part of the Swiss Molasse Basin. Additionally, we assessed the effect of different well locations on CO2 injection operations. Our currently still-ongoing study will investigate production rates and resulting well flow regimes in a conceptual CO2 production well for geothermal energy production in the future. Nonetheless, our preliminary results indicate that, under ideal conditions, both reservoirs combined can store more than 8 Mt of CO2 over multiple decades of CCUS operation. From our results, we can clearly identify limiting factors on the overall storage capacity, such as for example the reservoir fluid pressure distribution and well operation constraints.
Brehme, M, M O Saar, E Slob, P Bombarda, H Maurer, F Wellmann, P Vardon, D Bruhn, and E Team, EASYGO-Efficiency and Safety in Geothermal Operations-A new Innovative Training Network, EGU General Assembly Conference Abstracts, EGU21-15437, 2021.
Hau, K.P., F. Games, R. Lathion, and M.O. Saar, Modelling Potential Geological CO2 Storage combined with CO2-Plume Geothermal (CPG) Energy Extraction near Eclépens, Switzerland, 2nd Geoscience & Engineering in Energy Transition Conference, 2021, pp. 1-5, 2021. [Download PDF] [View Abstract]Isolating and permanently geologically storing CO2 in so-called Carbon Capture and Storage (CCS) systems will play a key role in mitigating global climate change, particularly when considering that some industrial processes, e.g. cement manufacturing, inherently generate CO2. However, while CCS is a technologically feasible solution, it is currently prohibitively expensive at full commercial scale. Thus "subsidising" CCS by utilising captured CO2 while permanently storing it becomes necessary (CO2 Capture, Utilisation, and Storage (CCUS) concept). Combining CCS with geothermal energy production in so-called CO2-Plume Geothermal (CPG) energy systems is such a CCUS system. Moreover, the widespread implementation of CPG power plants would enable the utilisation of a reliable and carbon-neutral renewable energy source. The here-presented first-order conceptual study investigates the feasibility of the CPG CCUS technology in Triassic saline formations in the greater Eclépens area of Western Switzerland (Molasse Basin). We have built a simplified, literature-based geological computer model of a conceptual anticline structure near the Eclépens heat anomaly for this study. Dynamic reservoir simulations were conducted to estimate the general reservoir properties and to investigate the subsurface response to CO2 injection, storage, and circulation. This enables both the estimation of the geological CO2 storage and the CPG-based power generation potential.
Rangriz Shokri, A., K.P. Hau, M.O. Saar, D. White, E. Nickel, G. Siddiqi, and R.J. Chalaturnyk, Modeling CO2 Circulation Test for Sustainable Geothermal Power Generation at the Aquistore CO2 Storage Site, Saskatchewan, Canada, 2nd Geoscience & Engineering in Energy Transition Conference, 2021, pp. 1-5, 2021. [Download PDF] [View Abstract]Over the past decade, geological storage of CO2, mostly in deep saline aquifers, has demonstrated a practical short-to-medium term means to partially meet the ambitious global commitments to climate change mitigation and net-zero carbon emission policies. As a key element of CO2 Plume Geothermal (CPG) systems, we examine the feasibility of running a CO2 circulation test utilizing an existing underground CO2 plume for synergistic utilization of the Aquistore site for both subsurface CO2 storage and geothermal power generation. In this work, we appraised the most probable realizations of CO2 plume extent from history matched numerical simulations and time-lapse seismic monitoring. We extracted and re-built a high-resolution sector model from a developed full geological model to represent the geology near the existing injection and observation wells. Given the extensive field evidence of CO2 arrival at the observation well, we performed uncertainty assessment of a CO2 circulation pilot test between the injector and the producer (i.e. observation well), followed by assessment of the resulting flow regimes during CO2/brine co-production. The findings of this paper assist in identifying the potential and limitations associated with conducting a CO2 circulation test and ultimately CPG operations at geologic CO2 storage sites such as Aquistore.
Hau, K.P., A. Rangriz Shokri, E. Nickel, R.J. Chalaturnyk, and M.O. Saar, On the Suitability of the Aquistore CCS-site for a CO2-Circulation Test, World Geothermal Congress 2020+1, 2020. [View Abstract]It is commonly known that a drastic decrease in global carbon dioxide (CO2) emissions is necessary, in order to reach the climate goals set by the Paris agreement in 2015. A key technology towards achieving that goal is CCUS - Carbon, Capture, Utilisation, and Sequestration. By using supercritical CO2 instead of brine/water as a geothermal working fluid, geothermal energy production can possibly be expanded to regions with lower heat gradients in subsurface formations, while permanently storing CO2 underground. This first-order, conceptual study investigates the suitability of the Aquistore CCS-site for a CO2-circulation pilot test. For doing so, numerical simulations were performed to learn about the site responses to CO2-circulation, the amount of back-produced CO2 versus brine, and to estimate the flow behaviour in a potential CO2 gas production well. A key requirement for a successful CO2-circulation pilot test is to prevent liquid loading in the CO2 gas production well. Liquid loading occurs if brine or water accumulates in the production well. It can be avoided by maintaining an annular flow regime in the multi-phase fluid production stream of the production well. The resulting flow regime is mainly controlled by the total mass flow rate of the production stream. This in turn strongly depends on the overall transmissivity of the reservoir. The obtained simulation results suggest that steady-state conditions will occur within days to a few weeks after the start of the CO2-circulation. Moreover, our results show that the amount of back-produced CO2 is one order of magnitude larger than the amount of back-produced brine. In the majority of cases, we observe that the back-produced fluid production stream will ultimately flow in an annular flow pattern. Further analysis of CO2-circulation results indicate a need to better characterize the subsurface multiphase fluid flow behaviour. To this end, attempts to constrain the uncertainty associated with the Aquistore reservoir characterization and CO2 plume growth through high-resolution history matching of non-isothermal injection data and time-lapse seismic monitoring surveys are discussed.
Ogland-Hand, J, J Bielicki, B Adams, T Buscheck, and M Saar, Using Sedimentary Basin Geothermal Resources to Provide Long-Duration Energy Storage, Proceedings World Geothermal Congress, 2020. [Download PDF]
Adams, B.M., M.O. Saar, J.M. Bielicki, J.D. Ogland-Hand, and M.R. Fleming, Using Geologically Sequestered CO2 to Generate and Store Geothermal Electricity: CO2 Plume Geothermal (CPG), Applied Energy Symposium: MIT A+B August 12-14, 2020, Cambridge, USA, 2020. [Download PDF] [View Abstract]CO2 Plume Geothermal (CPG) is a carbon neutral renewable electricity generation technology where geologic CO2 is circulated to the surface to directly generate power and then is reinjected into the deep subsurface. In contrast to traditional water geothermal power generation with an Organic Rankine Cycle (ORC), CPG has fewer system inefficiencies and benefits from the lower viscosity of subsurface CO2 which allows power generation at shallower depths, lower temperatures, and lower reservoir transmissivities.
Ezekiel, J., A. Ebigbo, B. Adams, and M.O. Saar, On the use of supercritical carbon dioxide to exploit the geothermal potential of deep natural gas reservoirs for power generation, European Geothermal Congress (EGC), Hague, Netherlands, 11-14 June 2019, 2019.
Hefny, M., C.-Z. Qin, A. Ebigbo, J. Gostick, M.O. Saar, and M. Hammed, CO2-Brine flow in Nubian Sandstone, Egypt: A Pore-Network Modeling using Computerized Tomography Imaging, European Geothermal Congress, The Hague, Netherlands, 11-14 June 2019, 2019.
Ahkami, M. , M.O. Saar, and X.-Z. Kong, Study on mineral precipitation in fractured porous media using Lattice-Boltzmann methods, European Geothermal Congress (EGC), Hague, Netherlands, 11-14 June 2019, 2019.
Lima, M.M.G., Ph. Schädle, D. Vogler, M.O. Saar, and X. Xiang-Zhao, Impact of Effective Normal Stress on Capillary Pressure in a Single Natural Fracture, European Geothermal Congress, The Hague, Netherlands, 11-14 June 2019, 2019.
Fleming, M.R., B.M. Adams, T.H. Kuehn, J.M. Bielicki, and M.O. Saar, Benefits of using active reservoir management during CO2-plume development for CO2-plume geothermal systems., 44th Workshop on Geothermal Reservoir Engineering, Stanford University, Stanford, CA, February 11-13, 2019., 2019.
Adams, B.M., M.R. Fleming, J.M. Bielicki, J. Hansper, S. Glos, M. Langer, M. Wechsung, and M.O. Saar, Grid scale energy storage using CO2 in sedimentary basins: the cost of power flexibility., European Geothermal Congress, The Hague, Netherlands, 11-14 June 2019, 2019.
Ogland-Hand, J.D., J.M. Bielicki, E.S. Nelson, B.M. Adams, T.A. Buscheck, M.O. Saar, and R. Sioshansi, Effects of Bulk Energy Storage in Sedimentary Basin Geothermal Resources on Transmission Constrained Electricity Systems , 43rd Workshop on Geothermal Reservoir Engineering, Stanford University, Stanford, California, February 12-14, 2018, 2018. [View Abstract]Sedimentary basin geothermal resources and carbon dioxide (CO2) can be used for bulk energy storage (CO2-BES), which could reduce the capacity, and thus cost, of high voltage direct current (HVDC) transmission infrastructure needed to connect high quality wind resources to distant load centers. In this study, we simulated CO2-BES operation in the Minnelusa Aquifer in eastern Wyoming and used those results in an optimization model to determine the impact that CO2-BES could have on the revenue of a wind farm that sells electricity to the California Independent System Operator (CAISO) market under varying HVDC transmission capacity scenarios. We found that the CO2-BES facility can dispatch more electricity than was previously stored because of the geothermal energy input. While CO2-BES performance degrades because of geothermal resource depletion, our results suggest that a CO2-BES facility could increase revenue from electricity sales throughout its lifetime by (1) increasing the utilization of HVDC transmission capacity, and (2) enabling arbitrage of the electricity prices in the CAISO market. In some cases, adding CO2-BES can provide more revenue with less HVDC transmission capacity.
Fleming, M.R., B.M. Adams, J.B. Randolph, J.D. Ogland-Hand, T.H. Kuehn, T.A. Buscheck, J.M. Bielicki, and M.O. Saar, High efficiency and large-scale subsurface energy storage with CO2., 43rd Workshop on Geothermal Reservoir Engineering, Stanford University, Stanford, CA, February 12-14, 2018., 2018. [Download PDF] [View Abstract]Storing large amounts of intermittently produced solar or wind power for later, when there is a lack of sunlight or wind, is one of society's biggest challenges when attempting to decarbonize energy systems. Traditional energy storage technologies tend to suffer from relatively low efficiencies, severe environmental concerns, and limited scale both in capacity and time. Subsurface energy storage can solve the drawbacks of many other energy storage approaches, as it can be large scale in capacity and time, environmentally benign, and highly efficient. When CO2 is used as the (pressure) energy storage medium in reservoirs underneath caprocks at depths of at least ~1 km (to ensure the CO2 is in its supercritical state), the energy generated after the energy storage operation can be greater than the energy stored. This is possible if reservoir temperatures and CO2 storage durations combine to result in more geothermal energy input into the CO2 at depth than what the CO2 pumps at the surface (and other machinery) consume. Such subsurface energy storage is typically also large scale in capacity (due to typical reservoir sizes, potentially enabling storing excess power from a substantial portion of the power grid) and in time (even enabling seasonal energy storage). Here, we present subsurface electricity energy storage with supercritical carbon dioxide (CO2) called CO2-Plume Geothermal Energy Storage (CPGES) and discuss the system's performance, as well as its advantages and disadvantages, compared to other energy storage options. Our investigated system consists of a deep and a shallow reservoir, where excess electricity from the grid is stored by producing CO2 from the shallow reservoir and injecting it into the deep reservoir, storing the energy in the form of pressure and heat. When energy is needed, the geothermally heated CO2 is produced from the deep reservoir and injected into the shallow reservoir, passing through a power generation system along the way. Thus, the shallow reservoir takes the place of a storage tank at the surface. The shallow reservoir well system is a huff-and-puff system to store the CO2 with as few heat and pressure losses as possible, whereas the deep reservoir has an injection and a production well, so the CO2 can extract heat as it passes through. We find that both the diurnal (daily) and seasonal (6 months) CPGES systems generate more electricity to the power grid than they store from it. The diurnal system has a ratio of generated electricity to stored electricity (called the Energy Storage Ratio) between 2.93 and 1.95. Similarly, the seasonal system has an energy storage ratio between 1.55 and 1.05, depending on operational strategy. The energy storage ratio decreases with duration due to the pump power needed to overcome the increasing reservoir pressures as CO2 is stored.
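As a minimal illustration of the Energy Storage Ratio used in the abstract above, the metric is simply the electricity dispatched to the grid during discharge divided by the electricity drawn from the grid during charging; the Python sketch below uses made-up numbers for demonstration only, not results from the study.

```python
# Minimal, illustrative sketch of the "Energy Storage Ratio": electricity
# dispatched during discharge divided by electricity drawn during charging.
# Values above 1 are possible because geothermal heat is added to the stored
# CO2 at depth. Numbers are hypothetical, not results from the paper.

def energy_storage_ratio(e_generated_mwh, e_stored_mwh):
    """Generated-to-stored electricity ratio for one storage cycle."""
    return e_generated_mwh / e_stored_mwh

# Hypothetical diurnal cycle: 100 MWh stored, 250 MWh later dispatched.
print(energy_storage_ratio(250.0, 100.0))  # -> 2.5, inside the reported diurnal range
```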
Hansper, J., S. Glos, M. Langer, M. Wechsung, B.M. Adams, and M.O. Saar, Assessment of performance and costs of CO2 plume geothermal (CPG) systems., European Geothermal Congress, Hague, Netherlands, 11-14 June 2019, 2018.
Vogler, D., R.R. Settgast, C.S. Sherman, V.S. Gischig, J.A. Doetsch, M.R. Jalali, B. Valley, K.F. Evans, M.O. Saar, and F. Amann, Comparing Simulations and Experiments for Hydraulic Fracture Stimulations Performed at the Grimsel Test Site, Switzerland, Proc. of the 42nd Stanford Geothermal Workshop Palo Alto, CA, USA, February 13-15, 2017, 2017.
Kong, X.-Z., A.M.M. Leal, and M.O. Saar, Implications of hydrothermal flow-through experiments on deep geothermal energy utilization, European Geothermal Congress 2016, 2016. [View Abstract]Utilization of underground reservoirs for geothermal energy extraction, particularly using CO2 as a working fluid, requires an in-depth understanding of fluid, solute (e.g., dissolved CO2 and minerals), and energy (heat, pressure) transport through geologic formations. Such operations necessarily perturb the chemical, thermal, and/or pressure equilibrium between native fluids and rock minerals, potentially causing mineral dissolution and/or precipitation reactions with often immense consequences for fluid, solute, and energy transport, injectivity, and/or withdrawal in/from such reservoirs. The involved physico-chemico-thermo- mechanical processes often lead to modifications of permeability, one of the most variable and important parameters in terms of reservoir fluid flow and related advective solute/reactant and heat transport. Importantly, the amount of mineral dissolution/precipitation that can cause orders of magnitude in permeability reduction can be very small, if minerals are removed or deposited in pore throats or narrow fracture apertures. This potentially has detrimental consequences for geothermal energy usage. However, analysing, understanding, and predicting reservoir evolution and flow properties are non-trivial, as they depend on complex chemical, thermodynamic, and fluid-dynamic feedback mechanisms. To achieve these goals, it requires the integration and extrapolation of thermodynamic, kinetic, and hydrologic data from many disparate sources. The validity, consistency, and accuracy of these data- model combinations are unfortunately often incomparable due to the relative scarcity of appropriate parameterizations in the literature. Here, we present some results of hydrothermal flow-through experiments on rock core samples. During the experiments, we fixed the flow rates, confinement and outlet pore-fluid pressures, and recorded inlet pore- fluid pressure. We also analysed the outlet fluid chemistry samples throughout the experiments and imaged our rock cores before and after the flow- through experiments using X-Ray Computed Tomography (XRCT). With all these data, we are able to interpret the changes in permeability, porosity, and (reactive) surface area at the core scale.
Bielicki, J.M., B.M. Adams, H. Choi, B. Jamiyansuren, M.O. Saar, S.J. Taff, T.A. Buscheck, and J.D. Ogland-Hand, Sedimentary basin geothermal resource for cost-effective generation of renewable electricity from sequestered carbon dioxide., 41st Workshop on Geothermal Reservoir Engineering, Stanford University, Stanford, CA, February 22-24, 2016., 2016. [View Abstract]We investigated the efficacy of generating electricity using renewable geothermal heat that is extracted by CO2 that is sequestered in sedimentary basins, a process described as CO2 -Plume Geothermal (CPG) energy production. We developed an integrated systems model that combines power plant performance modeling, reservoir modeling, and the economic costs of a CPG power plant and a CO2 storage operation in order to estimate the levelized cost of electricity (LCOE). The integrated systems model is based on inverted fivespot injection patterns that are common in CO2-enhanced oil recovery operations. Our integrated systems model allows for these patterns to be coupled together, so that the CO2 that is extracted by a production well can be composed of portions of the CO2 that was injected in the four neighboring injection wells. We determined the diameter of the individual wells and the size coupled inverted fivespot well patterns that most effectively used the physical and economic economies of scale for the coupled reservoir and power plant. We found that substantial amounts of power, on the order of hundreds of megawatts, can be produced as the size of the injection pattern increases, and that the estimated LCOE decreases as these patterns expand. Given the appropriate combination of depth, geothermal gradient, and permeability, CPG power plants can have LCOEs that are competitive with other unsubsidized sources of electricity.
Kittilä, A., C. Deuber, G. Mikutis, K. Evans, M. Puddu, R.N. Grass, W.J. Stark, and M.O. Saar, Comparison of novel synthetic DNA nano-colloid tracer and classic solute tracer behaviour, Proc. of the European Geothermal Congress 2016 Strasbourg, 19-23 September, 2016. [Download PDF]
Adams, B.M., T.H. Kuehn, J.B. Randolph, and M.O. Saar, The reduced pumping power requirements from increasing the injection well fluid density, Transactions - Geothermal Resources Council, 37, pp. 667-672, 2013. [Download PDF] [View Abstract]The reduction of parasitic loads is a key component to the operational efficiency of geothermal power plants, which include reductions in pump power requirements. Variations in fluid density, as seen in CO2-based geothermal plants, have resulted in the elimination of pumping requirements, known as a thermosiphon; this effect, while less pronounced, is also found in traditional brine geothermal systems. Therefore, we find the reductions in pumping power requirements for traditional 20 wt% NaCl brine and CO2 geothermal power systems by increasing the injection fluid density. For a reduction in temperature of 1°C at a 15°C surface condition, a traditional brine system was found to require up to 2 kWe less pumping power. A CO2 system in the same condition was found to require up to 42 kWe less power. When the density of the injected brine was increased by increasing the salinity of the injected fluid to 21 wt% NaCl, the injection pumping requirement decreased as much as 45 kWe. Both distillation and reverse osmosis processes were simulated to increase the salinity while producing 7 kg s-1 fresh water. The pumping power reduction does not account for the increased energy cost of salination; however, this may still be economical in locations of water scarcity.
Randolph, J.B., B.M. Adams, T.H. Kuehn, and M.O. Saar, Wellbore heat transfer in CO2-based geothermal systems, Geothermal Resources Council (GRC) Transactions, 36, pp. 549-554, 2012. [Download PDF] [View Abstract]Geothermal systems utilizing carbon dioxide as the subsurface heat exchange fluid in naturally porous, permeable geologic formations have been shown to provide improved geothermal heat energy extraction, even at low resource temperatures, compared to conventional hydrothermal and enhanced geothermal systems (EGS). Such systems, termed CO2 Plume Geothermal (CPG), have the potential to permit expansion of geothermal energy use while supporting rapid implementation. While most previous analyses have focused on heat transfer in the reservoir and surface components of CO2-based geothermal operations, here we examine wellbore heat transfer. In particular, we explore the hypothesis that wellbore flow can be assumed to be adiabatic for the majority of a CPG facility's life span.
Saar, M.O., Geological Fluid Mechanics Models at Various Scales, Dissertation, University of California, Berkeley, 153 pp., 2003. [Download PDF] [View Abstract]In this dissertation, I employ concepts from fluid mechanics to quantitatively investigate geological processes in hydrogeology and volcanology. These research topics are addressed by utilizing numerical and analytical models but also by conducting field and lab work. Percolation theory is of interest to a wide range of physical sciences and thus warrants research in itself. Therefore, I developed a computer code to study percolation thresholds of soft-core polyhedra. Results from this research are applied to study the onset of yield strength in crystal-melt suspensions such as magmas. Implications of yield strength development in suspensions, marking the transition from Newtonian to Bingham fluid, include the pahoehoe-'a'a transition and the occurrence of effusive versus explosive eruptions. I also study interactions between volcanic processes and groundwater as well as between groundwater and seismicity (hydroseismicity). In the former case, I develop numerical and analytical models of coupled groundwater and heat transfer. Here, perturbations from a linear temperature-depth profile are used to determine groundwater flow patterns and rates. For the hydroseismicity project I investigate if seasonal elevated levels of seismicity at Mt. Hood, Oregon, are triggered by groundwater recharge. Both hydroseismicity and hydrothermal springs occur on the southern flanks of Mt. Hood. This suggests that both phenomena are related while also providing a connection between the research projects involving groundwater, heat flow, and seismicity. Indeed, seismicity may be necessary to keep faults from clogging thus allowing for sustained activity of hydrothermal springs. Finally, I present research on hydrologically induced volcanism, where a process similar to the one suggested for hydroseismicity is invoked. Here, melting of glaciers, or draining of lakes, during interglacial periods reduce the confining pressure in the subsurface which may promote dike formation and result in increased rates of volcanism. In general, problems discussed in this dissertation involve interactions among processes that are traditionally associated with separate research disciplines. However, numerous problems in the geosciences require a multidisciplinary approach, as demonstrated here. In addition, employing several analytical and numerical methods, such as signal processing, inverse theory, computer modeling, and percolation theory, allows me to study such diverse processes in a quantitative way.
Saar, M.O., The Relationship Between Permeability, Porosity, and Microstucture in Vesicular Basalts, MSc Thesis, University of Oregon, 101 pp., 1998. [Download PDF] [View Abstract]This thesis presents measurements of permeability, \(k\), porosity, \(\Phi\), and microstructural parameters of vesicular basalts. Measurements are compared with theoretical models. A percolation theory and a Kozeny-Carman model are used to interpret the measurements and to investigate relationships between porosity, microstructure, and permeability. Typical permeabilities for vesicular basalts are in the range of \(10^{-14}\) < \(k\) < \(10^{-10} m^2\). Best permeability estimates, following power laws predicted by percolation theory, are obtained when samples are used that show 'impeded aperture widening' due to rapid cooling and no bubble collapse (scoria and some flow samples). However, slowly cooled diktytaxitic samples contain elongated, 'collapsed' bubbles. Measurements indicate that the vesicle pathway network remains connected and preserves high permeabilities. Image-analysis techniques are unsuccessful if used for Kozeny-Carman equation parameter determination for vesicular materials, probably because the average interbubble aperture size that determines \(k\) is not resolved with such a technique.
Awards and Patents | CommonCrawl |
What does it really mean that photons are quanta of light?
Thread starter Ebi Rogha
Ebi Rogha
TL;DR Summary
How do you explain photons as quanta of light?
I thought of photons as quanta of light which are the smallest unit of light.
But then I learned a photon can be split into two or even three photons (red-shifted, energy is conserved), and also that a photon can lose energy and still be a photon (Raman effect, inelastic scattering). Now, I am not sure what it means when it is said that photons are quanta of light (the smallest unit of light).
Could somebody please enlighten me?
PeroK
The simplest way to think about photons is that they are massless particles in the standard model of particle physics.
A photon may be emitted by an atom transitioning from one energy level to another, for example. In principle, such a photon could have any energy (depending on the atom and its energy spectrum). There is no minimum or maximum energy for photons.
Photons may also interact with matter (be absorbed or scattered). If a photon is scattered, it may lose or gain energy (in the same way as any particle could).
A full understanding of photons as the quanta of the EM field requires a knowledge of QED/QFT.
A. Neumaier
Ebi Rogha said:
Summary:: How do you explain photons as quanta of light?
It means nothing more than that one can describe light in quantum mechanical terms by thinking of it as a collection of photons.
I learned a photon can be split into two or even three photons (red-shifted, energy is conserved), and also photon can lose energy and still be a photon (Raman effect, inelastic scattering).
All elementary particles can lose or gain energy through interactions. Some of them can also decay into multiple other particles (and be detected in this way).
PeroK said:
There is no minimum or maximum energy for photons.
Are photons only defined for visible light?
If so, there should be a minimum and a maximum energy for photons (according to E = hf).
Visibility is purely a function of the human eye. From radio waves to gamma rays it's all electromagnetism.
vanhees71
The most important thing concerning photons is NOT to think about them as little billiard-ball-like localizable particles. They are rather certain states of the quantized free electromagnetic field, called "single-photon Fock states". You are right saying that they are "the smallest unit of light", but also this has to be taken in the right meaning: If you have electromagnetic radiation of a given frequency ##f## then the least amount of radiation energy that can be absorbed by some medium is given by ##h f##. In this sense a photon is indivisible, i.e., it can be detected as a whole or not at all. Nevertheless the photon itself is not localizable, it doesn't even have a position observable to begin with.
On the other hand you are also right in saying that there are processes where photons are inelastically scattered (indeed Raman scattering on an atom is one possibility). Then, they change their energy and thus their frequency. There are also processes of "non-linear optics", where you can have a reaction where one photon of a given frequency is absorbed by some medium and two or more photons are emitted by the medium. Of course, again, energy-momentum conservation ("phase-matching conditions" as the quantum opticians call it) has to hold.
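As a side note, the relation E = hf mentioned above is easy to evaluate numerically; the short Python sketch below (purely illustrative, using standard constants and arbitrarily chosen example wavelengths) shows that the energy of a single photon spans many orders of magnitude across the electromagnetic spectrum, with no intrinsic minimum or maximum.

```python
# Illustrative only: energy of one photon, E = h*f = h*c/lambda,
# for a few arbitrarily chosen wavelengths across the electromagnetic spectrum.
h = 6.62607015e-34    # Planck constant, J*s
c = 2.99792458e8      # speed of light, m/s
eV = 1.602176634e-19  # joules per electronvolt

for label, wavelength_m in [("FM radio (3 m)", 3.0),
                            ("green light (500 nm)", 500e-9),
                            ("gamma ray (1 pm)", 1e-12)]:
    energy_J = h * c / wavelength_m
    print(f"{label}: {energy_J:.3e} J = {energy_J / eV:.3e} eV")
```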
Nugatory
Here you are: http://www.physics.usu.edu/torre/3700_Spring_2015/What_is_a_photon.pdf
Any configuration of the electromagnetic field, whether static or dynamic, whether the nice (but not physically realizable) plane waves that we find as possible solutions to Maxwell's equations or the wave packets formed by superpositions of these plane waves, can be written as a sum of quantized excitations of the field. We call these excitations "photons" or "quanta of light"
I'd define a photon as a single-photon Fock state. Of course, the plane-wave modes are not states, because they are not normalizable. But you can define a proper single-photon Fock state. It's a quantized wave packet, i.e., something like
$$|\Psi \rangle=\sum_{\lambda \in \{1,-1 \}} \int_{\mathbb{R}^3} \frac{\mathrm{d}^3 k}{\sqrt{(2 \pi)^3 2 \omega_k}} \Phi_{\lambda}(\vec{k}) \hat{a}^{\dagger}(\vec{k},\lambda) |\Omega \rangle,$$
where the ##\Phi_{\lambda}(\vec{k})## are some square-integrable functions and ##|\Omega \rangle## is the vacuum state; ##\lambda## labels the helicity (polarization) states.
weirdoguy
A. Neumaier said:
I would say that this description usually leads people astray because they visualise it the wrong way: that light and photons are like a stream of water (say, a river) and its water molecules.
weirdoguy said:
I visualize photons as being like waves on the stream formed by the electromagnetic field. For visualization and for their behavior in ordinary light this is fine. For their physical properties in the microscopic domain, this is only a crude approximation, like all visualization of quantum phenomena.
Nevertheless almost always it's better to think of photons as em. waves than as "particles". For photons one should abandon the particle picture from the very beginning, because it's pretty misleading. The only "particle like" feature is that it can only be either absorbed as a whole or not.
trainman2001
Am I mistaken or have they "frozen" a laser pulse and photographed it? If that's true, the leading edge of that pulse would contain photons and since they were not moving would that not imply that we could "locate" them?
trainman2001 said:
the leading edge of that pulse would contain photons
And that is what I was talking about. Photons are not like little water molecules that would be contained in any edge of a wave. But I don't know the details of how the notion of photons applies to this particular state of the EM field.
The em. field transmitted by a laser is far from the "naive photon picture". It's really very much a "wave manifestation" of the em. field, known as a coherent (or squeezed) state, i.e., a coherent superposition of photon states with arbitrary number. The photon number for this state is Poisson distributed.
The modern definition of a photon is within QED, and it's an asymptotic free single-quantum state of the em. field (i.e., a Fock state with determined photon number 1).
Besides the photon in this sense is very far from the naive photon picture of Einstein's early 1905 paper, because for this "modern photon" you cannot even define a position observable.
Interested_observer
vanhees71 said:
And that it can impart momentum?
Sure, the electromagnetic field carries energy, momentum, and angular momentum. Classical point particles are a very murky concept anyway (at least in relativity). It's good to get used to field concepts as soon as possible!
https://doi.org/10.1364/OE.27.034312
Probe misalignment calibration in fiber point-diffraction interferometer
Daodang Wang,1,2,* Zhongmin Xie,1 Chao Wang,1,3 Jian Liang,2 Heng Wu,2 and Ming Kong1
1College of Metrology and Measurement Engineering, China Jiliang University, Hangzhou 310018, China
2James C. Wyant College of Optical Sciences, University of Arizona, Tucson, Arizona 85721, USA
3Guangxi Key Laboratory of Optoelectronic Information Processing, Guilin University of Electronic Technology, Guilin 541004, China
*Corresponding author: [email protected]
Daodang Wang, Zhongmin Xie, Chao Wang, Jian Liang, Heng Wu, and Ming Kong, "Probe misalignment calibration in fiber point-diffraction interferometer," Opt. Express 27, 34312-34322 (2019)
The measuring probe integrated with multiple fiber point-diffraction sources can be applied to measure both the three-dimensional coordinates and a highly accurate point-diffraction wavefront. The probe determines the achievable measurement accuracy of the fiber point-diffraction interferometer (PDI), in which the fiber exit end plane is required to be parallel with the detector plane. Probe misalignment due to fabrication error could introduce significant measurement error. A high-precision method is proposed to calibrate the probe misalignment in the fiber PDI, including central positioning based on phase difference and tilt adjustment based on Zernike polynomials fitting. Both numerical simulation and experiments have been carried out to demonstrate the feasibility and accuracy of the proposed probe misalignment calibration method. The proposed method provides a feasible way to address the fabrication uncertainty of the measuring probe in the fiber PDI, and enables high-precision geometry alignment and misalignment calibration in interferometric testing systems that use no imaging lens.
© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement
1. Introduction
With the development of optical fabrication and testing, interferometry has been widely applied in high-precision size measurement [1], three-dimensional (3D) positioning [2], optical surface and system testing [3,4], etc. Traditional interferometers, such as the Fizeau and Twyman-Green interferometers, can achieve high-precision wavefront testing on the order of nanometers. The point-diffraction interferometer (PDI) has also been proposed to push the testing accuracy to the sub-nanometer level [5–8], further extending the application of interferometry to ultrahigh-precision testing. Different from traditional interferometers, the PDI employs an ideal diffraction spherical wavefront from either a pinhole or an optical fiber, rather than a standard lens, as the reference standard, and it can achieve both high measurement accuracy and repeatability.
The PDI technology has been applied to surface metrology [9,10], high-precision measurement of 3D absolute displacement [2,11–13], diffraction wavefront error [7], etc. In the fiber PDI system, two fibers are integrated in the measuring probe, and the diffraction spherical wavefronts generated from the fiber sources interfere with each other. According to the phase distribution in the point-diffraction interference field, the 3D coordinates of the measuring probe can be precisely retrieved. Besides, a high-precision shearing-interferometry-based method, in which 3D coordinate reconstruction of the measuring probe is performed to calibrate the systematic geometrical aberration, has been proposed to measure the point-diffraction wavefront [7]. In the 3D coordinate reconstruction of the measuring probe, either for absolute displacement measurement or for shearing wavefront measurement in point-diffraction wavefront retrieval, the fiber exit end plane (that is, the probe end face) is required to be parallel with the detector plane. Otherwise, the probe misalignment could introduce significant measurement error. Traditionally, the spatial orientation of the measuring probe can be evaluated and adjusted according to the light spot distribution. However, this cannot achieve high-precision misalignment calibration, and it is no longer feasible when the fibers integrated in the probe are not parallel due to fabrication error. A method based on ray tracing and digital image processing [14,15] can be applied to calibrate the systematic error introduced by misalignment; however, the tilt misalignment in the fiber PDI system cannot be removed because the probe status is unknown.
The tilt misalignment between the detector plane and the probe end face introduces an additional optical path difference and thus a systematic error in the measurement result. In this paper, a high-precision method based on phase difference and Zernike polynomials fitting is proposed to calibrate and remove the probe misalignment in the fiber PDI for the measurement of 3D absolute displacement and of the shearing wavefront (which can be applied to retrieve the point-diffraction wavefront). In the proposed method, a central positioning step based on a rigorous phase-difference model translates the probe onto the central optical axis, and the probe tilt is then adjusted according to Zernike polynomials fitting to remove the tilt misalignment. Section 2 presents the principle of the PDI, the analysis of probe misalignment, and the proposed calibration method. In Section 3, numerical simulation and experimental results are given to demonstrate the feasibility of the proposed method. Some conclusions are drawn in Section 4.
2. Principle of fiber PDI and probe misalignment calibration
2.1 System layout
Figure 1 shows the basic schematic diagram of the fiber PDI system, which can be applied to measure both the 3D absolute displacement [11–13] and the point-diffraction wavefront [7]. According to Fig. 1, multiple fiber-diffraction sources with a certain lateral displacement are integrated in the measuring probe, and the fiber exit ends are coplanar on the probe end face. The coherent laser beams are coupled into single-mode fibers, and the point-diffraction waves generated from the submicron-aperture exit ends of the fibers interfere on the CCD detector. According to the phase distribution in the interference field of the point-diffraction waves, the 3D coordinates of the probe with respect to the CCD detector can be measured with an iterative reconstruction algorithm. Besides, the measured 3D coordinates can also be applied to calibrate the geometrical aberration due to the lateral displacement of the fiber exit ends [7] and to achieve accurate shearing wavefront measurement, in which no imaging lens is utilized. To improve the reconstruction accuracy and accelerate the retrieval, the end face of the measuring probe is required to be parallel with the detector plane, which reduces the number of iterative variables to be solved and enables rapid measurement. Thus, a general and effective calibration method is required to remove the probe misalignment.
Fig. 1. Schematic diagram of fiber PDI system.
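To illustrate the idea of retrieving the probe coordinates from the measured phase distribution, the following Python sketch fits a two-source point-diffraction phase model to a synthetic phase map with a generic nonlinear least-squares routine. It is only an outline under simplifying assumptions (parallel probe end face, known fiber separation, unwrapped phase, assumed wavelength and geometry), not the iterative reconstruction algorithm used in this paper.

```python
import numpy as np
from scipy.optimize import least_squares

# Illustrative sketch: recover the probe midpoint (xc, yc, zc) from a measured
# (unwrapped) two-source point-diffraction phase map, assuming the probe end
# face is parallel to the detector and the fiber separation 2*d is known.
lam = 632.8e-9                       # wavelength, m (assumed value)
d = 62.5e-6                          # half separation of the fiber exit ends, m (assumed)
n, half_width = 128, 5e-3            # detector grid (assumed)
x, y = np.meshgrid(np.linspace(-half_width, half_width, n),
                   np.linspace(-half_width, half_width, n))

def model_phase(p):
    """Phase of the two interfering point-diffraction waves on the detector."""
    xc, yc, zc = p
    R1 = np.sqrt((x - xc)**2 + (y - (yc + d))**2 + zc**2)
    R2 = np.sqrt((x - xc)**2 + (y - (yc - d))**2 + zc**2)
    return 2.0 * np.pi / lam * (R1 - R2)

# Synthetic "measurement" for a known probe position, plus a little phase noise.
true_p = np.array([0.3e-3, -0.2e-3, 0.10])
measured = model_phase(true_p) + np.random.normal(0.0, 0.01, (n, n))

fit = least_squares(lambda p: (model_phase(p) - measured).ravel(),
                    x0=[0.0, 0.0, 0.09])
print(fit.x)  # close to the true (xc, yc, zc)
```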
2.2 Probe misalignment calibration method
Traditionally, the measuring probe can be adjusted according to the bright spot distribution, by which the tilt error of the fiber exit ends can be calibrated. For the ideal probe configuration, the fibers are integrated in the probe in parallel, and the midpoint of the spot centers would remain stable when translating the CCD detector back and forth along the optical axis direction, as is shown in Fig. 2(a). However, it is not easy to obtain ideally parallel fibers in the probe due to fabrication error, and the tilt misalignment could introduce significant measurement error, as is shown in Fig. 2(b).
Fig. 2. Probe misalignment based on spot distribution in (a) ideal case and (b) actual case.
Figure 3(a) shows the fringe pattern and corresponding light spots captured by the CCD detector with an actual measuring probe, in which a long exposure time is set to exhibit the spot centers. According to Young's interference, the connecting line between the two fiber exit ends (or the light spots on the detector plane) should be perpendicular to the fringe direction when the fibers in the probe are parallel, as is shown in Fig. 2(a). Due to the fiber tilt in the measuring probe, the connecting line between the captured light spots in the actual case is not perpendicular to the fringe direction, as is shown in Fig. 3(a). In this case, probe adjustment according to the traditional calibration based on spot distribution would lead to the final probe orientation shown in Fig. 3(b). Though the midpoint of the spot centers remains unchanged when translating the probe along the optical axis direction, there could be an end-face tilt in the measuring probe, and a significant residual error would be introduced in the 3D absolute displacement and shearing wavefront measurements. A general method, including central positioning based on phase difference and tilt adjustment based on Zernike polynomials fitting, can be applied to achieve high-precision calibration of the probe misalignment.
Fig. 3. Probe misalignment calibration based on spot-distribution method in experiment. (a) Spot distribution on fringe pattern, (b) actual probe orientation after calibration.
2.2.1 Central positioning based on phase difference
To achieve high-precision probe misalignment calibration, the central positioning based on phase difference is first carried out to translate the measuring probe to the central position (optical axis) with respect to the detector plane in the lateral directions. Without loss of generality, we take the tilt misalignment about the y-axis as the case to be analyzed. Figure 4(a) shows the schematic diagram of the proposed central positioning method based on phase difference. In Fig. 4(a), ${S_1}({x_1},{y_1},{z_1})$ and ${S_2}({x_2},{y_2},{z_2})$ refer to the fiber exit ends before the adjustment, and ${S^{\prime}_1}({x^{\prime}_1},{y^{\prime}_1},{z^{\prime}_1})$ and ${S^{\prime}_2}({x^{\prime}_2},{y^{\prime}_2},{z^{\prime}_2})$ are those after the 180° rotation of the measuring probe with lateral translation; the distances between the fiber exit ends (${S_1}$ and ${S_2}$) and an arbitrary point $P(x,y,z)$ on the detector plane are ${R_1}$ and ${R_2}$, respectively, and those between the fiber ends (${S^{\prime}_1}$ and ${S^{\prime}_2}$) and the point $P$ are ${R^{\prime}_1}$ and ${R^{\prime}_2}$. According to Fig. 4(a), we have the phase distributions $\varphi $ and $\varphi ^{\prime}$,
(1)$$\left\{ \begin{array}{l} \varphi ({x_1},{y_1},{z_1};{x_2},{y_2},{z_2};x,y,z) = \frac{2\pi }{\lambda }[{R_1}({x_1},{y_1},{z_1};x,y,z) - {R_2}({x_2},{y_2},{z_2};x,y,z)]\\ \varphi ^{\prime}({x^{\prime}_1},{y^{\prime}_1},{z^{\prime}_1};{x^{\prime}_2},{y^{\prime}_2},{z^{\prime}_2};x,y,z) = \frac{2\pi }{\lambda }[{R^{\prime}_1}({x^{\prime}_1},{y^{\prime}_1},{z^{\prime}_1};x,y,z) - {R^{\prime}_2}({x^{\prime}_2},{y^{\prime}_2},{z^{\prime}_2};x,y,z)]\end{array} \right.,$$
where $\lambda$ is the operating laser wavelength in PDI system. To simplify the analysis, the midpoints of fiber exit ends before and after 180° rotation can be indicated as $M({x_\textrm{c}},{y_c},{z_c})$ and $M^{\prime}({x^{\prime}_\textrm{c}},{y^{\prime}_c},{z^{\prime}_c})$, respectively, and we have the corresponding coordinates of fiber ends,
(2)$$\begin{cases} [{{S_1}({x_c},{y_c} + d\cos \alpha ,{z_c} + d\sin \alpha ),\textrm{ }{S_2}({x_c},{y_c} - d\cos \alpha ,{z_c} - d\sin \alpha )} ]\\ [{{{S^{\prime}_1}}({{x^{\prime}_c}},{{y^{\prime}_c}} - d \cdot \cos \alpha ,{{z^{\prime}_c}} - d \cdot \sin \alpha ),\textrm{ }{{S^{\prime}_2}}({{x^{\prime}_c}},{{y^{\prime}_c}} + d\cos \alpha ,{{z^{\prime}_c}} + d\sin \alpha )} ]\end{cases} ,$$
where $2d$ is the distance between the two fiber exit ends and α is the tilt angle of the probe end face. The distance $2d$ influences the measurement range in the 3D coordinate measurement [11] and introduces various geometrical aberrations in the shearing point-diffraction wavefront measurement [7] with the fiber PDI.
Fig. 4. Schematic diagram of central positioning. Probe end face (a) before and (b) after central positioning.
According to Eq. (1), we may define the phase difference $\Delta \varphi = \varphi - \varphi ^{\prime}$ between the phase distributions $\varphi (x,y,z)$ and $\varphi ^{\prime}(x,y,z)$, which could be applied to evaluate the symmetry of the probe position before and after 180° rotation. By translating the probe in y direction (similar in x direction), the central positioning of probe in y direction can be achieved when ${x_\textrm{c}} = {x^{\prime}_\textrm{c}}$, ${y_\textrm{c}} = { - }{y^{\prime}_c}$ and ${z_\textrm{c}} = {z^{\prime}_\textrm{c}}$, as is shown in Fig. 4(b), and we have
(3)$$\Delta \varphi (x,y,z) = { - }\Delta \varphi (x,{ - }y,z).$$
Thus, the relationship can be obtained that
(4)$$\varphi ({{x_1},{y_1},{z_1};{x_2},{y_2},{z_2};x,y,z} ) = \varphi ^{\prime}({{x_1}, - {y_1},{z_1};{x_2}, - {y_2},{z_2};x,{ - }y,z} ).$$
From Eq. (4), the probe position can be located according to the phase differences $\Delta \Phi = (\Delta {\varphi _x},\Delta {\varphi _y})$,
(5)$$\left\{ \begin{array}{l} \Delta {\varphi_x} = \varphi ({{x_1},{y_1},{z_1};{x_2},{y_2},{z_2};x,y,z} )- \varphi^{\prime}({{{x^{\prime}_1}},{{y^{\prime}_1}},{{z^{\prime}_1}};{{x^{\prime}_2}},{{y^{\prime}_2}},{{z^{\prime}_2}};{ - }x,y,z} )\\ \Delta {\varphi_y} = \varphi ({{x_1},{y_1},{z_1};{x_2},{y_2},{z_2};x,y,z} )- \varphi^{\prime}({{{x^{\prime}_1}},{{y^{\prime}_1}},{{z^{\prime}_1}};{{x^{\prime}_2}},{{y^{\prime}_2}},{{z^{\prime}_2}};x,{ - }y,z} )\end{array} \right..$$
By translating the probe so that both phase differences $\Delta {\varphi _x}$ and $\Delta {\varphi _y}$ reach their minimum simultaneously, the probe can be located on the optical axis, based on which the adjustment of the probe tilt can be performed to remove the misalignment. Generally, either the peak-to-valley (PV) or root-mean-square (RMS) value can be applied to evaluate the phase differences. In the proposed method, the RMS value is adopted to minimize the effect of random noise in the measurement.
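As a sanity check of the central-positioning criterion, the phase difference of Eq. (5) can be evaluated numerically. The following sketch roughly follows the simulation parameters of Section 3.1 (125 µm fiber-end separation, 3° end-face tilt, probe about 100 mm from the detector) with an assumed wavelength and an assumed initial probe offset; it scans the y translation of the 180°-rotated probe and locates the minimum of RMS(Δφ_y), which should occur close to the mirror-symmetric position. The x direction is handled in exactly the same way with Δφ_x.

```python
import numpy as np

lam   = 632.8e-9                 # assumed wavelength (m)
d     = 62.5e-6                  # half of the 125 um fibre-end separation (m)
alpha = np.deg2rad(3.0)          # end-face tilt angle, as in the simulation
zc    = 0.100                    # probe-to-detector distance (m)

x = np.linspace(-5e-3, 5e-3, 201)
X, Y = np.meshgrid(x, x)         # detector grid; Y varies along axis 0

def phase(xc, yc, rotated=False):
    """Two-source interference phase, Eqs. (1)-(2); `rotated` swaps the end offsets."""
    s = -1.0 if rotated else 1.0
    S1 = (xc, yc + s * d * np.cos(alpha), zc + s * d * np.sin(alpha))
    S2 = (xc, yc - s * d * np.cos(alpha), zc - s * d * np.sin(alpha))
    R1 = np.sqrt((X - S1[0])**2 + (Y - S1[1])**2 + S1[2]**2)
    R2 = np.sqrt((X - S2[0])**2 + (Y - S2[1])**2 + S2[2]**2)
    return 2.0 * np.pi / lam * (R1 - R2)

def rms(a):
    return np.sqrt(np.mean(a**2))

xc0, yc0 = 0.5e-3, 0.8e-3        # hypothetical (unknown) initial probe offset
phi = phase(xc0, yc0)

# scan the y position of the rotated probe; its x position is assumed to have been
# found already by the analogous x scan. phase(...)[::-1, :] evaluates phi' at (x, -y).
y_try = np.linspace(-2e-3, 0.5e-3, 126)
rms_y = [rms(phi - phase(-xc0, yt, rotated=True)[::-1, :]) for yt in y_try]
y_sym = y_try[np.argmin(rms_y)]  # expected near -yc0
print("symmetric y: %.3f mm, estimated on-axis y: %.3f mm"
      % (y_sym * 1e3, 0.5 * (yc0 + y_sym) * 1e3))
```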
2.2.2 Tilt adjustment based on Zernike polynomials fitting
After locating the measuring probe at the central position with respect to detector plane in lateral directions, the tilt adjustment based on Zernike polynomials fitting can be carried out to remove probe misalignment. According to Fig. 4(b), we have the coordinates of two fiber exit ends, ${S_1}(0,d\cos \alpha ,{z_c} + d\sin \alpha )$ and ${S_2}(0,{ - }d\cos \alpha ,{z_c}{ - }d\sin \alpha )$, and the optical path difference $OPD(x,y,z)$ between ${S_1}$ and ${S_2}$ can be obtained in the cylindrical coordinate system ($x = r\cos \theta $, $y = r\sin \theta $) as
(6)$$\begin{aligned} OPD &= {z_c}\sqrt {1 + {{({r \mathord{\left/ {\vphantom {r {{z_c}}}} \right.} {{z_c}}})}^2} + {{({d \mathord{\left/ {\vphantom {d {{z_c}}}} \right.} {{z_c}}})}^2} + {{2d\sin \alpha } \mathord{\left/ {\vphantom {{2d\sin \alpha } {{z_c}}}} \right.} {{z_c}}} - {{2dr\cos \alpha \sin \theta } \mathord{\left/ {\vphantom {{2dr\cos \alpha \sin \theta } {{z_c}^2}}} \right.} {{z_c}^2}}} \\ & \quad - {z_c}\sqrt {1 + {{({r \mathord{\left/ {\vphantom {r {{z_c}}}} \right.} {{z_c}}})}^2} + {{({d \mathord{\left/ {\vphantom {d {{z_c}}}} \right.} {{z_c}}})}^2} - {{2d\sin \alpha } \mathord{\left/ {\vphantom {{2d\sin \alpha } {{z_c}}}} \right.} {{z_c}}} + {{2dr\cos \alpha \sin \theta } \mathord{\left/ {\vphantom {{2dr\cos \alpha \sin \theta } {{z_c}^2}}} \right.} {{z_c}^2}}} , \end{aligned}$$
where r and $\theta$ are the polar radius and angle on detector plane. Based on the Taylor expansion, the $OPD$ in Eq. (6) can be written as
(7)$$OPD = {z_c}[{{\zeta_2} - {{{\zeta_1}{\zeta_2}} \mathord{\left/ {\vphantom {{{\zeta_1}{\zeta_2}} 2}} \right.} 2} + {{{\zeta_2}({3\zeta_1^2 + \zeta_2^2} )} \mathord{\left/ {\vphantom {{{\zeta_2}({3\zeta_1^2 + \zeta_2^2} )} 8}} \right.} 8} - {{5{\zeta_1}{\zeta_2}({\zeta_1^2 + \zeta_2^2} )} \mathord{\left/ {\vphantom {{5{\zeta_1}{\zeta_2}({\zeta_1^2 + \zeta_2^2} )} {16}}} \right.} {16}}} ],$$
where ${\zeta _1} = {({r \mathord{\left/ {\vphantom {r {{z_c}}}} \right.} {{z_c}}})^2} + {({d \mathord{\left/ {\vphantom {d {{z_c}}}} \right.} {{z_c}}})^2}$ and ${\zeta _2} = {{2d\sin \alpha } \mathord{\left/ {\vphantom {{2d\sin \alpha } {{z_c}}}} \right.} {{z_c}}} - {{2dr\cos \alpha \sin \theta } \mathord{\left/ {\vphantom {{2dr\cos \alpha \sin \theta } {{z_c}^2}}} \right.} {{z_c}^2}}$. Denoting the radius of CCD plane as ${R_c}$ and the maximum numerical aperture as $NA$, we have the normalized radius $\rho = {r \mathord{\left/ {\vphantom {r {{R_c}}}} \right.} {{R_c}}}$ and $t = \tan ({\sin ^{ - 1}}NA) = {{{R_c}} \mathord{\left/ {\vphantom {{{R_c}} {{z_c}}}} \right.} {{z_c}}}$. According to the Zernike polynomials fitting, the $OPD$ can be approximated as
(8)$$OPD \cong {a_0} \cdot {Z_0} + {a_2} \cdot {Z_2}.$$
where ${Z_0} = 1$ and ${Z_2} = \rho \sin \theta $ refer to the piston and y-tilt terms, respectively, and ${a_0}$ and ${a_2}$ are the corresponding coefficients. From Eqs. (6) and (8), we have the coefficient ${a_2}$ of the y-tilt term,
(9)$$\begin{aligned} {a_2} &= 2td\cos \alpha \cdot ({1 + {t \mathord{\left/ {\vphantom {t 8}} \right.} 8} + {{{t^2}} \mathord{\left/ {\vphantom {{{t^2}} 3}} \right.} 3} - {{3{t^4}} \mathord{\left/ {\vphantom {{3{t^4}} {16}}} \right.} {16}} + {{{d^2}} \mathord{\left/ {\vphantom {{{d^2}} {2{z_c}}}} \right.} {2{z_c}}} - {{{t^2}{d^2}} \mathord{\left/ {\vphantom {{{t^2}{d^2}} {2z_c^2}}} \right.} {2z_c^2}} + {{15{t^4}{d^2}} \mathord{\left/ {\vphantom {{15{t^4}{d^2}} {32z_c^2}}} \right.} {32z_c^2}}} )\\ & \quad - {{t{d^3}\cos \alpha } \mathord{\left/ {\vphantom {{t{d^3}\cos \alpha } {z_c^2}}} \right.} {z_c^2}} \cdot ({3{{\sin }^2}\alpha + {{{t^2}{{\cos }^2}\alpha } \mathord{\left/ {\vphantom {{{t^2}{{\cos }^2}\alpha } 2}} \right.} 2} - 5{t^2}{{\sin }^2}\alpha - {{15{t^4}\cos {}^2\alpha } \mathord{\left/ {\vphantom {{15{t^4}\cos {}^2\alpha } {16}}} \right.} {16}}} ). \end{aligned}$$
According to Eq. (9), the tilt coefficient is an even function about tilt angle $\alpha $, and it reaches its maximum value when $\alpha = 0$. Thus, the adjustment of probe tilt can be performed according to the tilt coefficients, and tilt removal can be achieved when both the tilt coefficients in x and y directions reach their maximum.
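The behavior of the tilt coefficient can also be checked numerically without the Taylor expansion: compute the exact OPD of Eq. (6) on a grid, fit piston and the two tilt Zernike terms by least squares, and scan the tilt angle α. The sketch below uses illustrative values (125 µm separation, 100 mm probe distance, an assumed detector half-aperture) and reports the magnitude of the y-tilt coefficient, which should peak at α = 0 in agreement with the cos α dependence of Eq. (9); depending on the sign convention the fitted coefficient itself may be negative.

```python
import numpy as np

d, zc, Rc = 62.5e-6, 0.100, 0.010          # separation/2, probe distance, half-aperture (m)
x = np.linspace(-Rc, Rc, 201)
X, Y = np.meshgrid(x, x)
rho, theta = np.hypot(X, Y) / Rc, np.arctan2(Y, X)
disk = rho <= 1.0                          # unit disk on which the Zernike fit is done

def opd(alpha):
    """Exact optical path difference between the two fibre ends, cf. Eq. (6)."""
    S1 = (0.0,  d * np.cos(alpha), zc + d * np.sin(alpha))
    S2 = (0.0, -d * np.cos(alpha), zc - d * np.sin(alpha))
    R1 = np.sqrt((X - S1[0])**2 + (Y - S1[1])**2 + S1[2]**2)
    R2 = np.sqrt((X - S2[0])**2 + (Y - S2[1])**2 + S2[2]**2)
    return R1 - R2

Z = np.column_stack([np.ones(disk.sum()),          # piston
                     (rho * np.cos(theta))[disk],  # x-tilt
                     (rho * np.sin(theta))[disk]]) # y-tilt

def y_tilt(alpha):
    coeffs, *_ = np.linalg.lstsq(Z, opd(alpha)[disk], rcond=None)
    return abs(coeffs[2])

angles = np.deg2rad(np.linspace(-3.0, 3.0, 61))
a2 = np.array([y_tilt(a) for a in angles])
print("|y-tilt| is largest at %.2f deg" % np.degrees(angles[np.argmax(a2)]))
```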
Figure 5 shows the procedure for the proposed probe misalignment calibration method, including the central positioning based on phase difference and the tilt adjustment based on Zernike polynomials fitting. The phase distribution $\varphi $ corresponding to the probe position $({x_c},{y_c},{z_c})$ is obtained by the phase-shifting method. The probe is then rotated by 180° about the z-axis and translated in the lateral directions to acquire various phase distributions $\varphi ^{\prime}$, from which the phase differences $\Delta \Phi = (\Delta {\varphi _x},\Delta {\varphi _y})$ are obtained to locate the probe position in the x and y directions according to Section 2.2.1. The central position of the probe with respect to the detector plane is located at $[({x_c} + {x^{\prime}_c})/2,\ ({y_c} + {y^{\prime}_c})/2,\ ({z_c} + {z^{\prime}_c})/2]$, where $({x^{\prime}_\textrm{c}},{y^{\prime}_c},{z^{\prime}_c})$ is the position at which both phase differences $\Delta {\varphi _x}$ and $\Delta {\varphi _y}$ reach their minimum, as depicted in Fig. 4(b). After the central positioning of the measuring probe based on phase difference, the tilt adjustment is carried out to remove the probe misalignment. For the probe located at the central position, the wavefront ${W^{({\boldsymbol \alpha })}}$ corresponding to the probe tilt-angle position ${\boldsymbol \alpha } = ({\alpha _x},{\alpha _y})$ is measured and fitted with Zernike polynomials, yielding the corresponding tilt coefficients ${\textbf T} = (a_1^{({\boldsymbol \alpha })},a_2^{({\boldsymbol \alpha })})$ in the x and y directions. By adjusting the probe tilt angle, the null-tilt angle position ${{\boldsymbol \alpha }_0}$ is reached when both tilt coefficients ${\textbf T} = (a_1^{({{\boldsymbol \alpha }_0})},a_2^{({{\boldsymbol \alpha }_0})})$ reach their maximum. Based on the calibration process described above, the probe misalignment can be well calibrated so that the probe end face is parallel to the detector plane.
Fig. 5. Procedure for the proposed probe misalignment calibration method.
3. Numerical simulation and experimental results
To validate the feasibility and accuracy of the proposed probe misalignment calibration method for the fiber PDI system, both numerical simulations and experiments have been carried out.
3.1 Numerical simulation results
In the numerical simulation, a probe integrated with two fiber exit ends is modeled, and the ray-tracing method is applied to analyze the measurement results. The distance between the two fiber exit ends is 125 µm; the connecting line is perpendicular to the x-axis, and the tilt angle between the connecting line and the xy plane is set to 3°. The numerical analysis of the measurement errors for both the displacement and the shearing wavefront under various probe tilt angles has been performed, with the probe placed at the position (0 mm, 0 mm, 100 mm), and Fig. 6 shows the corresponding results. Figure 6(a) shows the displacement measurement errors corresponding to a 5 mm probe movement in the x and z directions from the original position, respectively. Figure 6(b) presents the PV and RMS values of the obtained shearing wavefront errors for various NAs, after the coordinate-reconstruction-based systematic geometrical aberration calibration. It can be seen from Fig. 6 that the measurement error grows significantly with the probe misalignment, both in the displacement and in the shearing wavefront measurement. For a 1° probe tilt angle, the maximum displacement measurement error can reach tens of microns, and the PV and RMS values of the shearing measurement error at 0.10 NA are 103.3831 nm and 16.7280 nm, respectively. Thus, high-precision probe misalignment calibration is required to minimize the measurement error.
Fig. 6. Measurement error for displacement and shearing wavefront due to probe misalignment in simulation. (a) Measurement error for the displacement in x- and z-axes, (b) measurement error for shearing wavefront with removal of coordinate-reconstruction-based geometrical aberrations.
In the central positioning of the probe based on phase difference, Figs. 7(a) and 7(b) show the RMS values of the phase differences $(\Delta {\varphi _x},\Delta {\varphi _y})$ corresponding to various translation positions in the x and y directions, respectively, where the probe is originally placed at the position (5 mm, 8 mm, 100 mm). According to Fig. 7, both phase differences $\Delta {\varphi _x}$ and $\Delta {\varphi _y}$ reach their minimum when the 180°-rotated probe is located at the symmetric position (−5 mm, −8 mm, 100 mm) in both the x- and y-axes with respect to the original position.
Fig. 7. Central positioning in simulation. Phase difference corresponding to translation in (a) x and (b) y directions.
Based on the central positioning of the probe, the tilt misalignment calibration based on Zernike polynomials fitting can be carried out according to Subsection 2.2.2. Figure 8(a) shows the Zernike tilt coefficients corresponding to various probe tilt angles in the y direction; the tilt coefficient reaches its maximum when the tilt angle is 0°. Figure 8(b) presents the effect of the central positioning error on the tilt misalignment calibration result. It can be seen from Fig. 8(b) that the tilt deviation grows linearly with the central positioning error, with a sensitivity of about 8.1029×10−4 degree/µm. The probe tilt calibration accuracy can reach 2.9′′ for a 1 µm lateral positioning accuracy.
Fig. 8. Tilt adjustment in simulation. (a) Tilt coefficients corresponding to various adjusting angles, (b) residual probe tilt angle due to central positioning error.
3.2 Experimental results
According to Fig. 1, an experimental fiber PDI system for 3D absolute displacement measurement has been set up to validate the feasibility of the proposed probe misalignment calibration method. The pixel size of the CCD sensor is 5.5 µm (H) × 5.5 µm (V), and the pixel number is 1920 (H) × 1080 (V). The distance between the two fiber exit ends integrated in the measuring probe is 125.00 µm. To enable the precise adjustment of the probe tilt and make the probe end face parallel with the detector plane, the probe is installed on a precise tilt adjuster with an accuracy of 6′′ and a 3D linear stage with 1 µm positioning precision. The probe is placed at the original stage-coordinate position (6.0000 mm, 6.0000 mm, 2.0000 mm), corresponding to an actual distance of about 100 mm from the CCD camera.
3.2.1 Probe misalignment calibration
According to Subsection 2.2.1, the central positioning based on phase difference is first carried out to translate the probe to the central position with respect to the detector plane in the lateral directions, and the corresponding interferograms are captured with the CCD detector. By translating the probe in the x and y directions, respectively, the measured phase differences are presented in Fig. 9; the phase differences $\Delta {\varphi _x}$ and $\Delta {\varphi _y}$ reach their minimum at the positions x = −13.3926 mm and y = −7.0368 mm. Thus, the symmetric stage-coordinate position (−13.3926 mm, −7.0368 mm, 2.0000 mm) is obtained, and the central position with respect to the detector plane is located at (−3.6963 mm, −0.5184 mm, 2.0000 mm).
Fig. 9. Central positioning in experiment. Phase difference corresponding to translation in (a) x and (b) y directions.
After translating the measuring probe to the central position, the tilt adjustment based on Zernike polynomials fitting according to Subsection 2.2.2 is performed to calibrate the misalignment. Figure 10 shows the Zernike tilt coefficients of the measured wavefront corresponding to various adjusting tilt angles of the probe. With the measured discrete data set, high-order polynomial fitting is applied to smooth the measured data and obtain the optimal maximum point. According to the sixth-order polynomial fitting curves shown in Figs. 10(a) and 10(b), the Zernike tilt coefficients in the x and y directions reach their maximum at the adjusting angles 2.1306° and 1.9758° about the x- and y-axes, respectively. By adjusting the tilt angle position of the probe to 2.1306° about the x-axis and 1.9758° about the y-axis, the probe tilt can be well removed.
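The maximum-seeking step can be reproduced with a few lines of NumPy; the angle/coefficient pairs below are purely hypothetical stand-ins for the measured discrete data set, and the fitted polynomial simply smooths them before the maximum is read off.

```python
import numpy as np

# hypothetical measured y-tilt Zernike coefficients vs. probe adjusting angle (deg)
ang  = np.arange(0.0, 5.5, 0.5)
coef = np.array([11.2, 11.9, 12.4, 12.8, 13.0, 12.9, 12.6, 12.1, 11.4, 10.6, 9.7])

p = np.polynomial.Polynomial.fit(ang, coef, deg=6)   # sixth-order smoothing fit
fine = np.linspace(ang.min(), ang.max(), 2001)
print("null-tilt adjusting angle ~ %.3f deg" % fine[np.argmax(p(fine))])
```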
Fig. 10. Change of Zernike tilt coefficients with adjusting angle of probe in experiment: (a) x-tilt term corresponding to x-tilt adjustment, (b) y-tilt term corresponding to y-tilt adjustment.
3.2.2 Evaluation of probe misalignment calibration in PDI measurement
To analyze the effect of probe misalignment on the 3D coordinate measurement in fiber PDI [16] and on the coordinate-reconstruction-based systematic geometrical aberration calibration in shearing wavefront measurement (especially in the case of high NA and large lateral displacement) [7], a comparison experiment before and after probe misalignment calibration is performed.
In the 3D coordinate measurement with the fiber PDI, an additional spatial tilt is introduced in the measuring probe. The probe is moved from the original position (0 mm, 0 mm, 100 mm) over a distance of 50 mm in the x, y and z directions in steps of 5 mm, respectively. Besides, a measurement with a high-precision coordinate measuring machine (CMM) (HEXAGON Leitz PMM-C, positioning accuracy 0.50 µm) is carried out for comparison, and the measured displacement is taken as the nominal value. Figures 11(a), 11(b) and 11(c) show the displacement measurement errors for the movement in the x, y and z directions before and after probe misalignment calibration, respectively, and the corresponding RMS values are summarized in Table 1. According to Fig. 11 and Table 1, a significant systematic error with an RMS value greater than 5 µm due to probe misalignment can be seen in the displacement measurement results. With the proposed misalignment calibration method, the systematic error is well eliminated and an RMS value of less than 1.5 µm is achieved, giving an obvious improvement in measurement accuracy.
Fig. 11. Displacement measurement error in experiment. Measurement error for the displacement in (a) x-, (b) y- and (c) z-axes.
Table 1. Experimental results for displacement measurement about probe misalignment calibration
To further validate the 3D coordinate measurement accuracy on the basis of probe misalignment calibration, the coordinate-reconstruction-based systematic geometrical aberration calibration is performed in the shearing point-diffraction wavefront measurement with the fiber PDI. Both the lateral and longitudinal displacements between the two fiber exit ends may introduce significant geometrical aberrations in the measurement of the shearing point-diffraction wavefront, which places an ultrahigh requirement on the systematic error calibration. The measuring probe is placed at a position about 25 mm away from the CCD camera, and the measured shearing wavefronts are shown in Fig. 12; their PV and RMS values before probe misalignment calibration are 23.9595 nm and 3.8396 nm, respectively, while those after probe misalignment calibration are 8.0539 nm and 1.1908 nm. According to Fig. 12(a), an obvious systematic geometrical aberration due to the 3D coordinate measurement error, which is introduced by the probe misalignment, can be seen in the measured shearing wavefront. After calibrating the probe misalignment, the RMS value of the measured shearing wavefront decreases from 3.8396 nm to 1.1908 nm, as shown in Fig. 12(b). Thus, the probe misalignment calibration enables a significant accuracy improvement for the 3D coordinate measurement and the corresponding systematic geometrical aberration calibration.
Fig. 12. Measurement results of shearing wavefront with fiber PDI. Measured shearing wavefront with removal of coordinate-reconstruction-based geometrical aberrations (a) before and (b) after probe misalignment calibration.
Several factors may lead to calibration error for the probe misalignment in fiber PDI, including environmental disturbance, CCD noise and stage positioning error. To minimize the effect of the environmental disturbance and CCD noise, the experimental setup is placed on an active vibration isolation table and shielded in a heat-insulating box, and the measurement data are averaged over multiple measurements; besides, high-order polynomial curve fitting is applied to fit the measured discrete data set. Because the probe misalignment calibration is achieved on the basis of central positioning and tilt adjustment, the positioning errors of both the 3D linear translating stage and the tilt adjuster could introduce an additional error in the calibration result. By employing a multi-axis motion stage with higher positioning precision and resolution, a further improvement in calibration accuracy can be expected.
In this paper, we put forward a high-precision method to calibrate the tilt misalignment of the measuring probe in fiber PDI, which can be applied to measure both the three-dimensional absolute displacement and a highly accurate point-diffraction wavefront. A double-step calibration, including the central positioning of the probe based on phase difference and the tilt adjustment based on Zernike polynomials fitting, is performed to remove the probe misalignment. The calibration can be carried out with a six-axis motion stage, requiring no additional high-precision and costly measuring instruments. Both numerical simulation and experiments have been performed to demonstrate the feasibility of the proposed calibration method, and a good measurement accuracy is achieved. The proposed method enables high-precision probe misalignment calibration and loosens the requirement on the fabrication of the measuring probe in fiber PDI. It also provides a feasible way to align the system geometry and calibrate the geometrical aberration in interferometric testing systems without an imaging lens, especially those with large wavefront displacement and high NA.
China Postdoctoral Science Foundation (2017M621928); National Natural Science Foundation of China (51775528, 61805048); Guangxi Key Laboratory of Optoelectronic Information Processing (GD18205).
1. J. Schmit and A. Olszak, "High-precision shape measurement by white-light interferometry with real-time scanner error correction," Appl. Opt. 41(28), 5943–5950 (2002). [CrossRef]
2. D. Wang, X. Chen, Y. Xu, F. Wang, M. Kong, J. Zhao, and B. Zhang, "High-NA fiber point-diffraction interferometer for three-dimensional coordinate measurement," Opt. Express 22(21), 25550–25559 (2014). [CrossRef]
3. J. C. Wyant, "Computerized interferometric surface measurements," Appl. Opt. 52(1), 1–8 (2013). [CrossRef]
4. P. de Groot, "Principles of interference microscopy for the measurement of surface topography," Adv. Opt. Photonics 7(1), 1–65 (2015). [CrossRef]
5. K. Otaki, T. Yamamoto, Y. Fukuda, K. Ota, I. Nishiyama, and S. Okazaki, "Accuracy evaluation of the point diffraction interferometer for extreme ultraviolet lithography aspheric mirror," J. Vac. Sci. Technol., B: Microelectron. Process. Phenom. 20(1), 295–300 (2002). [CrossRef]
6. D. Wang, Y. Yang, C. Chen, and Y. Zhuo, "Point diffraction interferometer with adjustable fringe contrast for testing spherical surfaces," Appl. Opt. 50(16), 2342–2348 (2011). [CrossRef]
7. D. Wang, Y. Xu, R. Liang, M. Kong, J. Zhao, B. Zhang, and W. Li, "High-precision method for submicron-aperture fiber point-diffraction wavefront measurement," Opt. Express 24(7), 7079–7090 (2016). [CrossRef]
8. N. Voznesenskiy, M. Voznesenskaia, D. Jha, H. Ottevaere, M. Kujawińska, M. Trusiak, and K. Liżewski, "Revealing features of different optical shaping technologies by a point diffraction interferometer," Proc. SPIE 10329, 103293X (2017). [CrossRef]
9. S. W. Kim and B. C. Kim, "Point-diffraction interferometer for 3-D profile measurement of rough surfaces," Proc. SPIE 5191, 200–207 (2003). [CrossRef]
10. N. Voznesenskiy, M. Voznesenskaia, and D. Jha, "Testing high accuracy optics using the phase shifting point diffraction interferometer," Proc. SPIE 10829, 1082902 (2018). [CrossRef]
11. J. Chu and S.-W. Kim, "Absolute distance measurement by lateral shearing interferometry of point-diffracted spherical waves," Opt. Express 14(13), 5961–5967 (2006). [CrossRef]
12. H.-G. Rhee, J. Chu, and Y.-W. Lee, "Absolute three-dimensional coordinate measurement by the two-point diffraction interferometry," Opt. Express 15(8), 4435–4444 (2007). [CrossRef]
13. D. Wang, Z. Wang, R. Liang, M. Kong, J. Zhao, J. Zhao, L. Mo, and W. Li, "Fast searching measurement of absolute displacement based on submicron-aperture fiber point-diffraction interferometer," Proc. SPIE 10329, 1032937 (2017). [CrossRef]
14. T. Wei, D. Liu, C. Tian, L. Zhang, and Y. Y. Yang, "New interferometric method to locate aspheric in the partial null aspheric testing system," Proc. SPIE 8417, 84173E (2012). [CrossRef]
15. L. Zhang, D. Liu, T. Shi, Y. Yang, and Y. Shen, "Practical and accurate method for aspheric misalignment aberrations calibration in non-null interferometric testing," Appl. Opt. 52(35), 8501–8511 (2013). [CrossRef]
16. Z. C. Wang, D. D. Wang, Z. D. Gong, P. Xu, R. G. Liang, J. F. Zhao, and W. Li, "Measurement of absolute displacement based on dual-path submicron-aperture fiber point-diffraction interferometer," Optik 140, 802–811 (2017). [CrossRef]
Experimental results for displacement measurement about probe misalignment calibration

                          Measurement error RMS            Measurement error RMS
                          before calibration (µm)          after calibration (µm)
Displacement in x axis    8.9289   4.6901   1.5628         0.9242   1.5025   0.3056
Displacement in y axis    1.4711   8.1521   3.0170         0.6300   0.7651   0.2593
Displacement in z axis    6.9910   2.2053   0.0125         0.8154   0.5159   0.0050
\begin{document}
\setcounter{page}{1}
\begin{bottomstuff} A preliminary version of this article appeared in {\it Proceedings of the 35$^{\text{th}}$ International Colloquium on Automata, Languages, and Programming (ICALP), Reykjavik, Iceland, July 7-11, 2008.}\newline Author's addresses: Bernhard Haeupler, CSAIL, Massachusetts Institute of Technology, Cambridge, MA 02139, United States, \email{[email protected]}; work done while the author was a visiting student at Princeton University. Telikepalli Kavitha, Tata Institute of Fundamental Research, Mumbai, India, \email{[email protected]}; work done while the author was at Indian Institute of Science. Rogers Mathew, Indian Institute of Science, Bangalore, India, \email{[email protected]}. Siddhartha Sen, Department of Computer Science, Princeton University, Princeton, NJ 08540, United States, \email{[email protected]}. Robert E. Tarjan, Department of Computer Science, Princeton University, Princeton, NJ 08540, United States and HP Laboratories, Palo Alto, CA 94304, United States, \email{[email protected]}.\newline Research at Princeton University partially supported by NSF grants CCF-0830676 and CCF-0832797. The information contained herein does not necessarily reflect the opinion or policy of the federal government and no official endorsement should be inferred. \end{bottomstuff} \title{Incremental Cycle Detection, Topological Ordering, and Strong Component Maintenance}
\section{Introduction} \label{sec:intro}
In this paper we consider three related problems on dynamic directed graphs: cycle detection, maintaining a topological order, and maintaining strong components. We begin with a few standard definitions. A {\em topological order} of a directed graph is a total order ``$<$'' of the vertices such that for every arc $(v, w)$, $v < w$. A directed graph is {\em strongly connected} if every vertex is reachable from every other. The {\em strongly connected components} of a directed graph are its maximal strongly connected subgraphs. These components partition the vertices \cite{Harary1965}. Given a directed graph $G$, its {\em graph of strong components} is the graph whose vertices are the strong components of $G$ and whose arcs are all pairs $(X, Y)$ with $X \ne Y$ such that there is an arc in the original graph from a vertex in $X$ to a vertex in $Y$. The graph of strong components is acyclic \cite{Harary1965}.
A directed graph has a topological order (and in general more than one) if and only if it is acyclic. The first implication is equivalent to the statement that every partial order can be embedded in a total order, which, as Knuth~\cite{Knuth1973} noted, was proved by Szpilrajn~\cite{Szpilrajn1930} in 1930, for infinite as well as finite sets. Szpilrajn remarked that this result was already known to at least Banach, Kuratowski, and Tarski, though none of them published a proof.
Given a fixed $n$-vertex, $m$-arc graph, one can find either a cycle or a topological order in $\mathrm{O}(n + m)$ time by either of two methods: repeated deletion of sources (vertices of in-degree zero)~\cite{Knuth1973,Knuth1974} or depth-first search \cite{Tarjan1972}. The former method (but not the latter) extends to the enumeration of all possible topological orders~\cite{Knuth1974}. One can find strong components, and a topological order of the strong components in the graph of strong components, in $\mathrm{O}(n + m)$ time using depth-first search, either one-way \cite{Cheriyan1996,Gabow2000,Tarjan1972} or two-way~\cite{Sharir1981,Aho1983}.
In some situations the graph is not fixed but changes over time. An {\em incremental} problem is one in which vertices and arcs can be added; a {\em decremental} problem is one in which vertices and arcs can be deleted; a {\em (fully) dynamic} problem is one in which vertices and arcs can be added or deleted. Incremental cycle detection or topological ordering occurs in circuit evaluation \cite{Alpern1990}, pointer analysis~\cite{Pearce2003}, management of compilation dependencies~\cite{Marchetti1993,Omohundro1992}, and deadlock detection \cite{Belik1990}. In some applications cycles are not fatal; strong components, and possibly a topological order of them, must be maintained. An example is speeding up pointer analysis by finding cyclic relationships~\cite{Pearce2003b}.
We focus on incremental problems. We assume that the vertex set is fixed and given initially, and that the arc set is initially empty. We denote by $n$ the number of vertices and by $m$ the number of arcs added. For simplicity in stating time bounds we assume that $m = \mathrm{\Omega}(n)$. We do not allow multiple arcs, so $m \le {n \choose 2}$. One can easily extend our algorithms to support vertex additions in $\mathrm{O}(1)$ time per vertex addition. (A new vertex has no incident arcs.) Our topological ordering algorithms, as well as all others in the literature, can handle arc deletions as well as insertions, since an arc deletion preserves topological order, but our time bounds are no longer valid. Maintaining strong components as arcs are deleted, or inserted and deleted, is a harder problem, as is maintaining the transitive closure of a directed graph under arc insertions and/or deletions. These problems are quite interesting and much is known, but they are beyond the scope of this paper. We refer the interested reader to Roditty and Zwick~\cite{RodittyZ2008} and the references given there for a thorough discussion of results on these problems.
Our goal is to develop algorithms for incremental cycle detection and topological ordering that are significantly more efficient than running an algorithm for a static graph from scratch after each arc addition. In Section~\ref{sec:lim-search} we discuss the use of graph search to solve these problems, work begun by Shmueli \cite{Shmueli1983} and realized more fully by Marchetti-Spaccamela et al.~\cite{Marchetti1996}, whose algorithm runs in $\mathrm{O}(nm)$ time. In Section~\ref{sec:2way-search} we develop a two-way search method that we call {\em compatible search}. Compatible search is essentially a generalization of two-way ordered search, which was first proposed by Alpern et al.~\cite{Alpern1990}. They gave a time bound for their algorithm in an incremental model of computation, but their analysis does not give a good bound in terms of $n$ and $m$. They also considered batched arc additions. Katriel and Bodlaender~\cite{Katriel2006} gave a variant of two-way ordered search with a time bound of $\mathrm{O}(\min\{m^{3/2}\log n, m^{3/2} + n^2\log n\})$. Liu and Chao~\cite{Liu2007} improved the bound of this variant to $\mathrm{\Theta}(m^{3/2} + mn^{1/2}\log n)$, and Kavitha and Mathew~\cite{Kavitha2007} gave another variant with a bound of $\mathrm{O}(m^{3/2} + nm^{1/2}\log n)$.
A two-way search need not be ordered to solve the topological ordering problem. We apply this insight in Section~\ref{sec:soft-search} to develop a version of compatible search that we call {\em soft-threshold search}. This method uses either median-finding (which can be approximate) or random sampling in place of the heaps (priority queues) needed in ordered search, resulting in a time bound of $\mathrm{O}(m^{3/2})$. We also show that any algorithm among a natural class of algorithms takes $\mathrm{\Omega}(nm^{1/2})$ time in the worst case. Thus for sparse graphs ($m/n = \mathrm{O}(1)$) our bound is best possible in this class of algorithms.
The algorithms discussed in Sections~\ref{sec:2way-search} and~\ref{sec:soft-search} have two drawbacks. First, they require a sophisticated data structure, namely a {\em dynamic ordered list}~\cite{Bender2002,Dietz1987}, to maintain the topological order. One can address this drawback by maintaining the topological order as an explicit numbering of the vertices from $1$ through $n$. Following Katriel \cite{Katriel2004}, we call an algorithm that does this a {\em topological sorting} algorithm. The one-way search algorithm of Marchetti-Spaccamela et al.~\cite{Marchetti1996} is such an algorithm. Pearce and Kelly~\cite{Pearce2006} gave a two-way-search topological sorting algorithm. They claimed it is fast in practice, although they did not give a good time bound in terms of $n$ and $m$. Katriel~\cite{Katriel2004} showed that any topological sorting algorithm that has a natural {\em locality} property takes $\mathrm{\Omega}(n^2)$ time in the worst case even if $m/n = \mathrm{\Theta}(1)$.
The second drawback of the algorithms discussed in Sections~\ref{sec:2way-search} and~\ref{sec:soft-search} is that using graph search to maintain a topological order becomes less and less efficient as the graph becomes denser. Ajwani et al.~\cite{Ajwani2006} addressed this drawback by giving a topological sorting algorithm with a running time of $\mathrm{O}(n^{11/4})$. In Section~\ref{sec:top-search} we simplify and improve this algorithm. Our algorithm searches the topological order instead of the graph. We show that it runs in $\mathrm{O}(n^{5/2})$ time. This bound may be far from tight. We obtain a lower bound of $\mathrm{\Omega}(n2^{\sqrt{2\lg n}})$ on the running time of the algorithm by relating its efficiency to a generalization of the $k$-levels problem of combinatorial geometry.
In Section~\ref{sec:strong} we extend the algorithms of Sections \ref{sec:soft-search} and~\ref{sec:top-search} to the incremental maintenance of strong components. We conclude in Section~\ref{sec:remarks} with some remarks and open problems.
This paper is an improvement and extension of a conference paper \cite{Haeupler2008b}, which itself is a combination and condensation of two on-line reports~\cite{Haeupler2008,Kavitha2007}. Our main improvement is a simpler analysis of the algorithm presented in Section~\ref{sec:top-search} and originally in \cite{Kavitha2007}. At about the same time as~\cite{Kavitha2007} appeared and also building on the work of Ajwani et al., Liu and Chao~\cite{Liu2008} independently obtained a topological sorting algorithm that runs in $\mathrm{O}(n^{5/2}\log^2 n)$ or $\mathrm{O}(n^{5/2}\log n)$ time, depending on the details of the implementation. More recently, Bender, Fineman, and Gilbert~\cite{Bender2009} have presented a topological ordering algorithm that uses completely different techniques and runs in $\mathrm{\Theta}(n^2\log n)$ time.
\section{One-Way Search} \label{sec:lim-search}
\SetKwFunction{limitedsearch}{Limited-Search}
The simplest of the three problems we study is that of detecting a cycle when an arc addition creates one. All the known efficient algorithms for this problem, including ours, rely on the maintenance of a topological order. When an arc $(v, w)$ is added, we can test for a cycle by doing a search forward from $w$ until either reaching $v$ (there is a cycle) or visiting all vertices reachable from $w$ without finding $v$. This method takes $\mathrm{\Theta}(m)$ time per arc addition in the worst case, for a total of $\mathrm{\Theta}(m^2)$ time. By maintaining a topological order, we can improve this method. When a new arc $(v, w)$ is added, test if $v < w$. If so, the order is still topological, and the graph is acyclic. If not, search for $v$ from $w$. If the search finishes without finding $v$, we need to restore topological order, since (at least) $v$ and $w$ are out of order. We can make the order topological by moving all the vertices visited by the search to positions after all the other vertices, and ordering the visited vertices among themselves topologically.
We need a way to represent the topological order. A simple numbering scheme suffices. Initially, number the vertices arbitrarily from $1$ through $n$ and initialize a global counter $c$ to $n$. When a search occurs, renumber the vertices visited by the search consecutively from $c + 1$, in a topological order with respect to the subgraph induced by the set of visited vertices, and increment $c$ to be the new maximum vertex number. One way to order the visited vertices is to make the search depth-first and order the vertices in reverse postorder \cite{Tarjan1972}. With this scheme, all vertex numbers are positive integers no greater than $nm$.
Shmueli \cite{Shmueli1983} proposed this method as a heuristic for cycle detection, although he used a more-complicated two-part numbering scheme and he did not mention that the method maintains a topological order. In the worst case, every new arc can invalidate the current topological order and trigger a search that visits a large part of the graph, so the method does not improve the $\mathrm{O}(m^2)$ worst-case bound for cycle detection. But it is the starting point for asymptotic improvement.
To do better we use the topological order to limit the searching. The search for $v$ from $w$ need not visit vertices larger than $v$ in the current order, since no such vertex, nor any vertex reachable from such a vertex, can be $v$. Here is the resulting method in detail. When a new arc $(v, w)$ has $v > w$, search for $v$ from $w$ by calling \limitedsearch{$v$,$w$}, where the function \limitedsearch is defined in Figure~\ref{alg:lim-search}. In this and later functions and procedures, a minus sign denotes set subtraction.
\begin{figure}
\caption{Implementation of limited search.}
\label{alg:lim-search}
\end{figure}
In \limitedsearch, $F$ is the set of vertices visited by the search, and $A$ is the set of arcs to be traversed by the search. An iteration of the while loop that deletes an arc $(x, y)$ from $A$ does a {\em traversal} of $(x, y)$. The choice of which arc in $A$ to traverse is arbitrary. If the addition of $(v, w)$ creates a cycle, \limitedsearch{$v$,$w$} returns an arc $(x, y) \ne (v, w)$ on such a cycle; otherwise, it returns null. If it returns null, restore topological order by moving all vertices in $F$ just after $v$ (and before the first vertex following $v$, if any). Order the vertices within $F$ topologically, for example by making the search depth-first and ordering the vertices in $F$ in reverse postorder with respect to the search. Figure~\ref{fig:lim-search} shows an example of limited search and reordering.
\begin{figure}
\caption{Limited search followed by vertex reordering. Initial topological order is left-to-right. Arcs are numbered in order of traversal; the search is depth-first. Visited vertices are $w$, $c$, $f$, $h$, $i$, $j$. They are numbered in reverse postorder with respect to the search and reordered correspondingly.}
\label{fig:lim-search}
\end{figure}
Before discussing how to implement the reordering, we bound the total time for the limited searches. If we represent $F$ and $A$ as linked lists and mark vertices as they are added to $F$, the time for a search is $\mathrm{O}(1)$ plus $\mathrm{O}(1)$ per arc traversal. Only the last search, which does at most $m$ arc traversals, can report a cycle. To bound the total number of arc traversals, we introduce the notion of {\em relatedness}. We define a vertex and an arc to be {\em related} if some path contains both the vertex and the arc, and {\em unrelated} otherwise. This definition does not depend on whether the vertex or the arc occurs first on the path; they are related in either case. If the graph is acyclic, only one order is possible, but in a cyclic graph, a vertex can occur before an arc on one path and after the arc on a different path. If either case occurs, or both, the vertex and the arc are related.
\begin{lemma} \label{lem:lim-search-rel} Suppose the addition of $(v, w)$ does not create a cycle but does trigger a search. Let $(x, y)$ be an arc traversed during the (unsuccessful) search for $v$ from $w$. Then $v$ and $(x, y)$ are unrelated before the addition but related after it.
\end{lemma}
\begin{proof} Let $<$ be the topological order before the addition of $(v, w)$. Since $x < v$, for $v$ and $(x, y)$ to be related before the addition there must be a path containing $(x, y)$ followed by $v$. But then there is a path from $x$ to $v$. Since there is a path from $w$ to $x$, the addition of $(v, w)$ creates a cycle, a contradiction. Thus $v$ and $(x, y)$ are unrelated before the addition. After the addition there is a path from $v$ through $(v, w)$ to $(x, y)$, so $v$ and $(x, y)$ are related. \qed
\end{proof}
The number of related vertex-arc pairs is at most $nm$, so the number of arc traversals during all limited searches, including the last one, is at most $nm + m$. Thus the total search time is $\mathrm{O}(nm)$.
Shmueli \cite{Shmueli1983} suggested this method but did not analyze it. Nor did he give an efficient way to do the reordering; he merely hinted that one could modify his numbering scheme to accomplish this. According to Shmueli, ``This may force us to use real numbers (not a major problem)." In fact, it {\em is} a major problem, because the precision required may be unrealistically high.
To do the reordering efficiently, we need a representation more complicated than a simple numbering scheme. We use instead a solution to the {\em dynamic ordered list} problem: represent a list of distinct elements so that order queries (does $x$ occur before $y$ in the list?), deletions, and insertions (insert a given non-list element just before, or just after, a given list element) are fast. Solving this problem is tantamount to addressing the precision question that Shmueli overlooked. Dietz and Sleator \cite{Dietz1987} gave two related solutions. Each takes $\mathrm{O}(1)$ time worst-case for an order query or a deletion. For an insertion, the first takes $\mathrm{O}(1)$ amortized time; the second, $\mathrm{O}(1)$ time worst-case. Bender et al. \cite{Bender2002} simplified the Dietz-Sleator methods. With any of these methods, the time for reordering after an arc addition is bounded by a constant factor times the search time, so $m$ arc additions take $\mathrm{O}(nm)$ time.
There is a simpler way to do the reordering, but it requires rearranging all {\em affected vertices}, those between $w$ and $v$ in the order (inclusive): move all vertices visited by the search after all other affected vertices, preserving the original order within each of these two sets. Figure~\ref{fig:lim-search-alt} illustrates this alternative reordering method. We call a topological ordering algorithm {\em local} if it reorders only affected vertices. Except for Shmueli's unlimited search algorithm and the recent algorithm of Bender et al.~\cite{Bender2009}, all the algorithms we discuss are local.
\begin{figure}
\caption{Alternative method of restoring topological order after a limited search of the graph in Figure~\ref{fig:lim-search}. The vertices are numbered in topological order. The affected vertices are $w$,$c$,$d$,$e$,$f$,$g$,$h$,$i$,$v$. Arcs are numbered in order of traversal. The affected vertices are reordered by moving the visited vertices $w$,$c$,$f$,$h$,$i$ after the unvisited vertices $d$,$e$,$g$,$v$.}
\label{fig:lim-search-alt}
\end{figure}
We can do this reordering efficiently even if the topological order is explicitly represented by a one-to-one mapping between the vertices and the integers from $1$ through $n$. This makes the method a topological sorting algorithm as defined in Section \ref{sec:intro}. This method was proposed and analyzed by Marchetti-Spaccamela et al. \cite{Marchetti1996}. The reordering time is $\mathrm{O}(n)$ per arc addition; the total time for $m$ arc additions is $\mathrm{O}(nm)$.
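For concreteness, the following Python sketch combines limited search with this explicit-position reordering (the topological sorting variant just described). It is only an illustration of the technique, not the pseudocode of Figure~\ref{alg:lim-search}: vertices are integers $0,\ldots,n-1$, \texttt{out} maps each vertex to its out-neighbors, \texttt{order} is the current topological order as a list, and \texttt{pos} is its inverse. Note that the alternative reordering needs no reverse postorder, so any search order within the affected region suffices.
\begin{verbatim}
# initialization for n vertices and no arcs:
#   out   = {u: [] for u in range(n)}
#   order = list(range(n)); pos = {u: i for i, u in enumerate(order)}

def add_arc(out, pos, order, v, w):
    """Insert arc (v, w); return False if this creates a cycle."""
    out[v].append(w)
    if pos[v] < pos[w]:
        return True                      # order is still topological
    visited, stack = set(), [w]          # limited search forward from w
    while stack:
        x = stack.pop()
        if x in visited:
            continue
        visited.add(x)
        for y in out[x]:
            if y == v:
                return False             # cycle through the new arc (v, w)
            if pos[y] < pos[v] and y not in visited:
                stack.append(y)
    # reorder the affected region pos[w]..pos[v]: unvisited vertices keep their
    # relative order and move to the front, visited vertices move after them
    lo, hi = pos[w], pos[v]
    affected = order[lo:hi + 1]
    shifted = [u for u in affected if u not in visited] + \
              [u for u in affected if u in visited]
    for i, u in enumerate(shifted):
        order[lo + i] = u
        pos[u] = lo + i
    return True
\end{verbatim}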
\section{Two-Way Search} \label{sec:2way-search}
\SetKwFunction{compatiblesearch}{Compatible-Search} \SetKwFunction{vertexguidedsearch}{Vertex-Guided-Search} \SetKwFunction{searchstep}{Search-Step}
We can further improve cycle detection and topological ordering by making the search two-way instead of one-way: when a new arc $(v, w)$ has $v > w$, concurrently search forward from $w$ and backward from $v$ until some vertex is reached from both directions (there is a cycle), or enough arcs are traversed to guarantee that the graph remains acyclic; if so, rearrange the visited vertices to restore topological order.
Each step of the two-way search traverses one arc $(u, x)$ forward and one arc $(y, z)$ backward. To make the search efficient, we make sure that these arcs are {\em compatible}, by which we mean that $u < z$ (in the topological order before $(v, w)$ is added). Here is the resulting method in detail. For ease of notation we adopt the convention that the minimum of an empty set is bigger than any other value and the maximum of an empty set is smaller than any other value. Every vertex is in one of three states: {\em unvisited}, {\em forward} (first visited by the forward search), or {\em backward} (first visited by the backward search). Before any arcs are added, all vertices are unvisited. The search maintains the set $F$ of forward vertices and the set $B$ of backward vertices: if the search does not detect a cycle, certain vertices in $B \cup F$ must be reordered to restore topological order. The search also maintains the set $A_F$ of arcs to be traversed forward and the set $A_B$ of arcs to be traversed backward. If the search detects a cycle, it returns an arc other than $(v,w)$ on the cycle; if there is no cycle, the search returns null.
When a new arc $(v, w)$ has $v > w$, search forward from $w$ and backward from $v$ by calling \compatiblesearch{$v$,$w$}, where the function \compatiblesearch is defined in Figure~\ref{alg:comp-search}.
\begin{figure}
\caption{Implementation of compatible search.}
\label{alg:comp-search}
\end{figure}
In compatible search, an iteration of the while loop is a {\em search step}. The step does a {\em forward traversal} of the arc $(u, x)$ that it deletes from $A_F$ and a {\em backward traversal} of the arc $(y, z)$ that it deletes from $A_B$. The choice of which pair of arcs to traverse is arbitrary, as long as they are compatible. If the addition of $(v, w)$ creates a cycle, it is possible for a single arc $(u, z)$ to be added to both $A_F$ (when $u$ becomes forward) and to $A_B$ (when $z$ becomes backward). It is even possible for such an arc to be traversed both forward and backward in the same search step, but if this happens it is the last search step. Such a double traversal does not affect the correctness of the algorithm. Unlike limited search, compatible search can visit unaffected vertices (those less than $w$ or greater than $v$ in topological order), but this does not affect correctness, only efficiency. If the search returns null, restore topological order as follows. Let $t =
\min(\{v\} \cup \{u| \exists (u, x) \in A_F \})$. Let $F_< = \{x \in F | x < t
\}$ and $B_> = \{y \in B | y > t\}$. If $t = v$, reorder as in limited search (Section~\ref{sec:lim-search}): move all vertices in $F_<$ just after $t$. (In this case $B_> = \{\}$.) Otherwise $(t < v)$, move all vertices in $F_<$ just before $t$ and all vertices in $B_>$ just before all vertices in $F_<$. In either case, order the vertices within $F_<$ and within $B_>$ topologically. Figure~\ref{fig:comp-search} illustrates compatible search and reordering.
\begin{figure}
\caption{ Compatible search of the graph in Figure~\ref{fig:lim-search} and restoration of topological order. Traversed arc pairs are numbered in order of traversal. Forward vertices are $w$,$c$,$f$,$h$,$i$; backward vertices are $v$,$d$,$g$,$e$,$a$. After the search, $A_F = \{(f,i), (h,j)\}$; $t = f$; $F_< = \{w,c\}$; $B_> = \{v,g\}$. Reordering moves vertices in $F_<$ just before $t$ and all vertices in $B_>$ just before those in $F_<$, arranging each internally in topological order.}
\label{fig:comp-search}
\end{figure}
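The search loop itself is easy to prototype. The sketch below covers only the cycle-detection part of compatible search (the reordering described above is omitted, and is not the pseudocode of Figure~\ref{alg:comp-search}); it resolves the arbitrary choice of a compatible pair as in two-way ordered search, always pairing the forward arc whose tail is smallest in the current order with the backward arc whose head is largest: if that pair is not compatible, no pair is. Vertices are integers, \texttt{out} and \texttt{into} are the out- and in-adjacency maps, and \texttt{pos} gives the current topological position.
\begin{verbatim}
import heapq

def creates_cycle(out, into, pos, v, w):
    """Two-way search triggered by the new arc (v, w) with pos[w] < pos[v];
    returns True iff adding the arc creates a cycle."""
    F, B = {w}, {v}                              # forward / backward vertices
    AF = [(pos[w], w, x) for x in out[w]]        # arcs to traverse forward
    AB = [(-pos[v], y, v) for y in into[v]]      # arcs to traverse backward
    heapq.heapify(AF); heapq.heapify(AB)
    # AF is keyed by the tail of its arc, AB by (minus) the head of its arc
    while AF and AB and AF[0][0] < -AB[0][0]:    # a compatible pair exists
        _, u, x = heapq.heappop(AF)              # traverse (u, x) forward
        _, y, z = heapq.heappop(AB)              # traverse (y, z) backward
        if x in B or y in F:
            return True                          # cycle through (v, w)
        if x not in F:
            F.add(x)
            for x2 in out[x]:
                heapq.heappush(AF, (pos[x], x, x2))
        if y not in B:
            B.add(y)
            for y2 in into[y]:
                heapq.heappush(AB, (-pos[y], y2, y))
    return False
\end{verbatim}
If no cycle is reported, the caller must still carry out the reordering of $F_<$ and $B_>$ described above before the next arc insertion, so that \texttt{pos} remains a topological order.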
\begin{theorem} \label{thm:2way-corr} Compatible search correctly detects cycles and maintains a topological order.
\end{theorem}
\begin{proof} The algorithm maintains the invariant that every forward vertex is reachable from $w$ and $v$ is reachable from every backward vertex. Thus if $(u, x)$ with $x \in B$ is traversed forward, there is a cycle consisting of a path from $w$ to $u$, the arc $(u, x)$, a path from $x$ to $v$, and the arc $(v, w)$. Symmetrically, if $(y,z)$ with $y \in F$ is traversed backward, there is a cycle. Thus if the algorithm reports a cycle, there is one.
Suppose the addition of $(v, w)$ creates a cycle. Such a cycle consists of a pre-existing path $P$ from $w$ to $v$ and the arc $(v, w)$. The existence of $P$ implies that $v > w$, so the addition of $(v, w)$ will trigger a search. The search maintains the invariant that either there are distinct arcs $(u, x)$ and $(y, z)$ on $P$ with $x \le y$, $(u, x)$ is in $A_F$, and $(y, z)$ is in $A_B$, or there is an arc $(u, z)$ in both $A_F$ and $A_B$. In either case there is a compatible arc pair, so the search can only stop by returning a non-null arc. Thus if there is a cycle the algorithm will report one.
It remains to show that if $v > w$ and the addition of $(v, w)$ does not create a cycle, then the algorithm restores topological order. This is a case analysis. First consider $(v, w)$. If $t = v$, then $w$ is in $F_<$. If $t < v$, then $v$ is in $B_>$ and $w$ is in $\{t\} \cup F_<$. In either case, $v$ precedes $w$ after the reordering.
Second, consider an arc $(x, y)$ other than $(v, w)$. Before the reordering $x < y$; we must show that the reordering does not reverse this order. There are five cases:
Case 1: neither $x$ nor $y$ is in $F_< \cup B_>$. Neither $x$ nor $y$ is reordered.
Case 2: $x$ is in $F_<$. Vertex $y$ must be forward. If $y < t$ then $y$ is in $F_<$, and the order of $x$ and $y$ is preserved because the reordering within $F_<$ is topological. If $y = t$, then $t < v$, so the reordering inserts $x$ before $t = y$. If $y > t$, the reordering does not move $y$ and inserts $x$ before $y$.
Case 3: $y$ is in $F_<$ but $x$ is not. Vertex $x$ is not moved, and $y$ follows $x$ after the reordering since vertices in $F_<$ are only moved higher in the order.
Case 4: $y$ is in $B_>$. Vertex $x$ must be backward. Then $x \ne t$, since $x = t$ would imply $t = v$ (since $x$ is backward) and $y > v$, which is impossible. If $x > t$ then $x$ is in $B_>$, and the order of $x$ and $y$ is preserved because the reordering within $B_>$ is topological. If $x < t$, the reordering does not move $x$ and inserts $y$ after $x$.
Case 5: $x$ is in $B_>$ but $y$ is not. Vertex $y$ is not moved, and $y$ follows $x$ after the reordering since vertices in $B_>$ are only moved lower in the order.
We conclude that the reordering restores topological order. \qed
\end{proof}
A number of implementation details remain to be filled in. Before doing this, we prove the key result that bounds the efficiency of two-way compatible search: the total number of arc traversals over $m$ arc additions is $\mathrm{O}(m^{3/2})$. To prove this, we extend the notion of relatedness used in Section~\ref{sec:lim-search} to arc pairs: two distinct arcs are {\em related} if they are on a common path. Relatedness is symmetric: the order in which the arcs occur on the common path is irrelevant. (In an acyclic graph only one order is possible, but in a graph with cycles both orders can occur, on different paths.) The following lemma is analogous to Lemma~\ref{lem:lim-search-rel}:
\begin{lemma} \label{lem:2way-search-rel} Suppose the addition of $(v, w)$ triggers a search but does not create a cycle. Let $(u, x)$ and $(y, z)$, respectively, be compatible arcs traversed forward and backward during the search, not necessarily during the same search step. Then $(u, x)$ and $(y, z)$ are unrelated before the addition of $(v, w)$ but are related after the addition.
\end{lemma}
\begin{proof} Since adding $(v, w)$ does not create a cycle, $(u, x)$ and $(y, z)$ must be distinct. Suppose $(u, x)$ and $(y, z)$ were related before the addition of $(v, w)$. Let $P$ be a path containing both. The definition of compatibility is $u < z$. But $u < z$ implies that $(u, x)$ precedes $(y, z)$ on $P$. Since $u$ is forward and $z$ is backward, the addition of $(v, w)$ creates a cycle, consisting of a path from $w$ to $u$, the part of $P$ from $u$ to $z$, a path from $z$ to $v$, and the arc $(v, w)$. This contradicts the hypothesis of the lemma. Thus $(u, x)$ and $(y, z)$ are unrelated before the addition of $(v, w)$.
After the addition of $(v, w)$, there is a path containing both $(u, x)$ and $(y, z)$, consisting of $(y, z)$, a path from $z$ to $v$, the arc $(v, w)$, a path from $w$ to $u$, and the arc $(u, x)$. Thus $(u, x)$ and $(y, z)$ are related after the addition. \qed
\end{proof}
\begin{theorem} \label{thm:2way-search-arcs} Over $m$ arc additions, two-way compatible search does at most $4m^{3/2} + m+1$ arc traversals.
\end{theorem}
\begin{proof} Only the last arc addition can create a cycle; the corresponding search does at most $m + 1$ arc traversals. (One arc may be traversed twice.) Consider any search other than the last. Let $A$ be the set of arcs traversed forward during the search. Let $k$ be the number of arcs in $A$. Each arc $(u, x)$ in $A$ has a distinct {\em twin} $(y, z)$ that was traversed backward during the search step that traversed $(u, x)$. These twins are compatible; that is, $u < z$. Order the arcs $(u, x)$ in $A$ in non-decreasing order on $u$. Each arc $(u, x)$ in $A$ is compatible not only with its own twin but also with the twin of each arc $(q, r)$ following $(u, x)$ in the order within $A$, because if $(y, z)$ is the twin of $(q, r)$, $u \le q < z$. Thus if $(u, x)$ is $i^{\text{th}}$ in the order within $A$, $(u, x)$ is compatible with at least $k - i + 1$ twins of arcs in $A$. By Lemma~\ref{lem:2way-search-rel}, each such compatible pair is unrelated before the addition of $(v, w)$ but is related after the addition. Summing over all arcs in $A$, we find that the addition of $(v, w)$ increases the number of related arc pairs by at least $k(k + 1)/2$.
Call a search other than the last one {\em small} if it does no more than $2m^{1/2}$ arc traversals and {\em big} otherwise. Since there are at most $m$ small searches, together they do at most $2m^{3/2}$ arc traversals. A big search that does $2k$ arc traversals is triggered by an arc addition that increases the number of related arc pairs by at least $k(k + 1)/2 > km^{1/2}/2$. Since there are at most ${m \choose 2} < m^2/2$ related arc pairs, the total number of arc traversals during big searches is at most $2m^{3/2}$. \qed
\end{proof}
The example in Figure~\ref{fig:comp-search} illustrates the argument in the proof of Theorem~\ref{thm:2way-search-arcs}. The arcs traversed forward, arranged in non-decreasing order by first vertex, are $(w, h)$ with twin $(d, v)$, $(w, c)$ with twin $(g, v)$, $(c, f)$ with twin $(a, d)$, and $(f, h)$ with twin $(e, g)$. Arc $(w, h)$ is compatible with the twins of all arcs in $A$, $(w, c)$ is compatible with its own twin and those of $(c, f)$ and $(f, h)$, $(c, f)$ is compatible with its own twin and that of $(f, h)$, and $(f, h)$ is compatible with its own twin. There can be other compatible pairs, and indeed there are in this example, but the proof does not use them.
Our goal now is to implement two-way compatible search so that the time per arc addition is $\mathrm{O}(1)$ plus $\mathrm{O}(1)$ per arc traversal. By Theorem~\ref{thm:2way-search-arcs}, this would give a time bound of $\mathrm{O}(m^{3/2})$ for $m$ arc additions. First we discuss the graph representation, then the maintenance of the topological order, and finally (in this and the next section) the detailed implementation of the search algorithm.
We represent the graph using forward and backward incidence lists: each vertex has a list of its outgoing arcs and a list of its incoming arcs, which we call the {\em outgoing list} and {\em incoming list}, respectively. Singly linked lists suffice. We denote by $\mathit{first\mbox{-}out}(x)$ and $\mathit{first\mbox{-}in}(x)$ the first arc on the outgoing list and the first arc on the incoming list of vertex $x$, respectively. We denote by $\mathit{next\mbox{-}out}((x, y))$ and $\mathit{next\mbox{-}in}((x, y))$ the arcs after $(x, y)$ on the outgoing list of $x$ and the incoming list of $y$, respectively. In each case, if there is no such arc, the value is null. Adding a new arc $(v, w)$ to this representation takes $\mathrm{O}(1)$ time. If the addition of an arc $(v, w)$ triggers a search, we can update the graph representation either before or after the search: arc $(v, w)$ will never be added to either $A_F$ or $A_B$.
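For concreteness, this representation can be rendered in Python as follows (an illustrative sketch; the class and field names are ours):
\begin{verbatim}
class Arc:
    __slots__ = ("tail", "head", "next_out", "next_in")
    def __init__(self, tail, head):
        self.tail, self.head = tail, head
        self.next_out = None   # next arc on the outgoing list of tail
        self.next_in = None    # next arc on the incoming list of head

class Graph:
    def __init__(self, n):
        # vertices are 1..n; None plays the role of null
        self.first_out = [None] * (n + 1)
        self.first_in = [None] * (n + 1)

    def add_arc(self, v, w):
        # O(1): push the new arc onto the front of both singly
        # linked incidence lists
        a = Arc(v, w)
        a.next_out, self.first_out[v] = self.first_out[v], a
        a.next_in, self.first_in[w] = self.first_in[w], a
        return a
\end{verbatim}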
We represent the topological order by a dynamic ordered list. (See Section~\ref{sec:lim-search}.) If adding $(v, w)$ leaves the graph acyclic but triggers a search, we reorder the vertices after the search as follows. Determine $t$. Determine the sets $F_<$ and $B_>$. Determine the subgraphs induced by the vertices in $F_<$ and $B_>$. Topologically sort these subgraphs using either of the two linear-time static methods (repeated deletion of sources or depth-first search). Move the vertices in $F_<$ and $B_>$ to their new positions using dynamic ordered list deletions and insertions. The number of vertices in $F \cup B$ is at most two plus the number of arcs traversed by the search. Furthermore, all arcs out of $F_<$ and all arcs into $B_>$ are traversed by the search. It follows that the time for the topological sort and reordering is at most linear in one plus the number of arcs traversed, not including the time to determine $t$. We discuss how to determine $t$ after presenting some of the details of the search implementation.
We want the time of a search to be $\mathrm{O}(1)$ plus $\mathrm{O}(1)$ per arc traversal. There are three tasks that are hard to implement in $\mathrm{O}(1)$ time: (1) adding arcs to $A_F$ and $A_B$ (the number of arcs added as the result of an arc traversal may not be $\mathrm{O}(1)$), (2) testing whether to continue the search, and (3) finding a compatible pair of arcs to traverse.
By making the search vertex-guided instead of arc-guided, we simplify all of these tasks, as well as the determination of $t$. We do not maintain $A_F$ and $A_B$ explicitly. Instead we partition $F$ and $B$ into {\em live} and {\em dead} vertices. A vertex in $F$ is live if it has at least one outgoing untraversed arc; a vertex in $B$ is live if it has at least one incoming untraversed arc; all vertices in $F \cup B$ that are not live are dead. For each vertex $x$ in $F$ we maintain a {\em forward pointer} $\mathit{out}(x)$ to the first untraversed arc on its outgoing list, and for each vertex $y$ in $B$ we maintain a {\em backward pointer} $\mathit{in}(y)$ to the first untraversed arc on its incoming list; each such pointer is null if there are no untraversed arcs. We also maintain the sets $F_L$ and $B_L$ of live vertices in $F$ and $B$, respectively. When choosing arcs to traverse, we always choose a forward arc indicated by a forward pointer and a backward arc indicated by a backward pointer. The test whether to continue the search becomes ``$\min F_L < \max B_L$.''
When a new arc $(v,w)$ has $v > w$, do the search by calling\hspace{4pt} \vertexguidedsearch{$v$,$w$}, where the function \vertexguidedsearch is defined in Figure~\ref{alg:vertex-search}. It uses an auxiliary macro \searchstep, defined in Figure~\ref{alg:search-step}, intended to be expanded in-line; each return from \searchstep returns from \vertexguidedsearch as well. If
\vertexguidedsearch{$v$,$w$} returns null, let $t = \min(\{v\} \cup \{x \in F \mid \mathit{out}(x) \ne \textit{null}\})$ and reorder the vertices in $F_<$ and $B_>$ as discussed above.
\begin{figure}
\caption{Implementation of vertex-guided search.}
\label{alg:vertex-search}
\end{figure}
\begin{figure}
\caption{Implementation of a search step.}
\label{alg:search-step}
\end{figure}
If we represent $F$ and $B$ by singly linked lists and $F_L$ and $B_L$ by doubly linked lists (so that deletion takes $\mathrm{O}(1)$ time), plus flag bits for each vertex indicating whether it is in $F$ and/or $B$, then the time for a search step is $\mathrm{O}(1)$. The time to determine $t$ and to reorder the vertices is at most $\mathrm{O}(1)$ plus $\mathrm{O}(1)$ per arc traversal.
It remains to implement tasks (2) and (3): testing whether to continue the search and finding a compatible pair of arcs to traverse. In vertex-guided search these tasks are related: it suffices to test whether $\min F_L < \max B_L$; and, if so, to find $u \in F_L$ and $z \in B_L$ with $u < z$. The historical solution is to store $F_L$ and $B_L$ in heaps (priority queues), $F_L$ in a min-heap and $B_L$ in a max-heap, and in each iteration of the while loop to choose $u = \min F_L$ and $z = \max B_L$. This guarantees that $u < z$, since otherwise the continuation test for the search would have failed. With an appropriate heap implementation, the test $\min F_L < \max B_L$ takes $\mathrm{O}(1)$ time, as does choosing $u$ and $z$. Each insertion into a heap takes $\mathrm{O}(1)$ time as well, but each deletion from a heap takes $\mathrm{O}(\log n)$ time, resulting in an $\mathrm{O}(\log n)$ time bound per search step and an $\mathrm{O}(m^{3/2}\log n)$ time bound for $m$ arc additions.
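As an illustration of this heap-based approach (not of the algorithm developed in the next section), the continuation test and the choice of $u$ and $z$ might be sketched with Python's \texttt{heapq}, using lazy deletion of dead entries in place of a heap that supports explicit deletion:
\begin{verbatim}
import heapq

def choose_pair(FL_heap, BL_heap, pos, live_F, live_B):
    # FL_heap: min-heap of (pos[x], x) over vertices inserted into F_L;
    # BL_heap: max-heap of the same form for B_L, stored as (-pos[z], z).
    # Stale entries (vertices no longer live) are discarded lazily.
    while FL_heap and FL_heap[0][1] not in live_F:
        heapq.heappop(FL_heap)
    while BL_heap and BL_heap[0][1] not in live_B:
        heapq.heappop(BL_heap)
    if not FL_heap or not BL_heap:
        return None                       # a live set is empty: stop
    u, z = FL_heap[0][1], BL_heap[0][1]
    if pos[u] >= pos[z]:                  # min F_L >= max B_L: stop
        return None
    return u, z
\end{verbatim}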
This method is in essence the algorithm of Alpern et al.~\cite{Alpern1990}, although their algorithm does not strictly alternate forward and backward arc traversals, and they did not obtain a good total time bound. Using heaps but relaxing the alternation of forward and backward arc traversals gives methods with slightly better time bounds~\cite{Alpern1990,Katriel2006,Kavitha2007}, the best bound to date being $\mathrm{O}(m^{3/2} + nm^{1/2}\log n)$~\cite{Kavitha2007}. One can further reduce the running time by using a faster heap implementation, such as those of van Emde Boas~\cite{Boas1977b,Boas1977}, Thorup~\cite{Thorup2004}, and Han and Thorup~\cite{Han2002}. Our goal is more ambitious: to reduce the overall running time to $\mathrm{O}(m^{3/2})$ by eliminating the use of heaps. This we do in the next section.
\section{Soft-Threshold Search} \label{sec:soft-search}
\SetKwFunction{softthresholdsearch}{Soft-Threshold-Search}
To obtain a faster implementation of vertex-guided search, we exploit the flexibility inherent in the algorithm by using a {\em soft threshold} $s$ to help choose $u$ and $z$ in each search step. Vertex $s$ is a forward or backward vertex, initially $v$. We partition the sets $F_L$ and $B_L$ into {\em active} and {\em passive} vertices. Active vertices are candidates for the current search step, passive vertices are candidates for future search steps. We maintain the sets $F_A$ and $F_P$, and $B_A$ and $B_P$, of active and passive vertices in $F_L$ and $B_L$, respectively. All vertices in $F_P$ are greater than $s$; all vertices in $B_P$ are less than $s$; vertices in $F_A \cup B_A$ can be on either side of $s$. Searching continues while $F_A \ne \{\}$ and $B_A \ne \{\}$. The algorithm chooses $u$ from $F_A$ and $z$ from $B_A$ arbitrarily. If $u < z$, the algorithm traverses an arc out of $u$ and an arc into $z$ and makes each newly live vertex active. If $u > z$, the algorithm traverses no arcs. Instead, it makes $u$ passive if $u > s$ and makes $z$ passive if $z < s$; $u > z$ implies that at least one of $u$ and $z$ becomes passive. When $F_A$ or $B_A$ becomes empty, the algorithm updates $s$ and the vertex partitions, as follows. Suppose $F_A$ is empty; the updating is symmetric if $B_A$ is empty. The algorithm makes all vertices in $B_P$ dead, makes $s$ dead if it is live, chooses a new $s$ from $F_P$, and makes active all vertices $x \in F_P$ such that $x \le s$.
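As an illustration, the following Python sketch (ours) shows the update performed when $F_A$ becomes empty; the case of $B_A$ empty is symmetric. Sets stand in for the list structures of the actual implementation, and the rule for choosing the new $s$ from $F_P$ is left as a parameter; the choice is discussed below.
\begin{verbatim}
def refill_forward(F_A, F_P, B_A, B_P, pos, s, choose_new_s):
    # F_A is empty: every vertex in B_P dies, the old s dies if it is
    # still live, a new s is chosen from F_P, and every x in F_P with
    # x <= s becomes active.  Dead vertices are simply dropped.
    B_P.clear()
    B_A.discard(s)
    F_P.discard(s)
    if not F_P:               # nothing to activate: the search will
        return None           #   stop at the loop test
    s = choose_new_s(F_P)     # new threshold, taken from F_P
    newly_active = {x for x in F_P if pos[x] <= pos[s]}
    F_P -= newly_active
    F_A |= newly_active
    return s
\end{verbatim}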
Here are the details of this method, which we call {\em soft-threshold search}. When a new arc $(v, w)$ has $v > w$, do the search by calling \softthresholdsearch{$v$,$w$}, where the function \softthresholdsearch is defined in Figure~\ref{alg:soft-search}, and procedure \searchstep is defined as in Figure~\ref{alg:search-step}, but with $F_A$ and $B_A$ replacing $F_L$ and $B_L$, respectively. If \softthresholdsearch{$v$,$w$} returns null, let $t =
\min(\{v\} \cup \{x \in F \mid \mathit{out}(x) \ne \textit{null}\})$ and reorder the vertices in $F_<$ and $B_>$ as discussed above. Figure~\ref{fig:soft-search} illustrates soft-threshold search.
\begin{figure}
\caption{Implementation of soft-threshold search.}
\label{alg:soft-search}
\end{figure}
\begin{figure}
\caption{ Soft-threshold search of the graph in Figure~\ref{fig:lim-search}. Arc traversal order is the same as in Figure~\ref{fig:comp-search}. Initially $s = v$, $F_A = \{w\}$, $B_A = \{v\}$. (a) Choosing $u = w$, $z = v$ twice causes traversal of compatible pair $(w, h)$, $(d, v)$ followed by traversal of $(w, c)$, $(g, v)$. Now $F_A = \{h, c\}$, $B_A = \{d, g\}$. Choice of $u = h$, $z = d$ moves $d$ to $B_P$. Choice of $u = h$, $z = g$ moves $g$ to $B_P$, making $B_A$ empty. (b) New $s$ is $d$. Now $F_A = \{h, c\}$, $F_P = \{\}$, $B_A = \{d, g\}$, $B_P = \{\}$. Choice of $u = h$, $z = d$ moves $h$ to $F_P$. Choice of $u = c$, $z = d$ causes traversal of $(c, f)$, $(a, d)$, adding $f$ to $F_A$ and deleting $c$ from $F_A$. (Vertex $a$ has no incoming arc, so it is not added to $B_A$.) Choice of $u = f$, $z = d$ moves $f$ to $F_P$, making $F_A$ empty. (c) New $s$ is $f$. Now $F_A = \{f\}$, $F_P = \{h\}$, $B_A = \{g\}$, $B_P = \{\}$. Choice of $u = f$, $z = g$ causes traversal of $(f, h)$, $(e, g)$, deleting $g$ from $B_A$ and making $B_A$ empty. Since $B_P$ is also empty, the search ends.
Reordering is the same as in Figure~\ref{fig:comp-search}. }
\label{fig:soft-search}
\end{figure}
Soft-threshold search is an implementation of vertex-guided search except that it makes additional vertices dead, not just those with no incident arcs left to traverse. Once dead, a vertex stays dead. We need to prove that this does not affect the search outcome. First we prove that soft-threshold search terminates.
\begin{theorem} \label{thm:soft-search-steps} A soft-threshold search terminates after at most $n^2 + m + n$ iterations of the while loop.
\end{theorem}
\begin{proof} Each iteration either traverses one or two arcs or makes one or two vertices passive. The number of times a vertex can become passive is at most the number of times it becomes active. Vertices become active only when they are visited (once per vertex) or when $s$ changes. Each time $s$ changes, the old $s$ becomes dead if it was not dead already. Thus $s$ changes at most $n$ times. The number of times vertices become active is thus at most $n + n^2$ (once per vertex visit plus once per vertex per change in $s$). \qed
\end{proof}
To prove correctness, we need two lemmas.
\begin{lemma} \label{lem:soft-search-passive} If $x$ is a passive vertex, $x > s$ if $x$ is in $F_P$, $x < s$ if $x$ is in $B_P$.
\end{lemma}
\begin{proof} If $x$ is a passive vertex, $x$ satisfies the lemma when it becomes passive, and it continues to satisfy the lemma until $s$ changes. Suppose $x$ is forward; the argument is symmetric if $x$ is backward. If $s$ changes because $F_A$ is empty, $x$ becomes active unless it is greater than the new $s$. If $s$ changes because $B_A$ is empty, $x$ becomes dead. The lemma follows by induction on the number of search steps. \qed
\end{proof}
\begin{lemma} \label{lem:soft-search-live} Let $A_F$ be the set of untraversed arcs out of vertices in $F$, let $A_B$
be the set of untraversed arcs into vertices in $B$, let $q = \min\{u| \exists (u, x) \in A_F\}$, and let $r = \max\{z| \exists (y, z) \in A_B\}$. Then $q$ and $r$ remain live vertices until $q > r$.
\end{lemma}
\begin{proof} If $q$ and $r$ remain live vertices until $q = \infty$ or $r = -\infty$, the lemma holds. Thus suppose $q$ dies before $r$ and before either $q = \infty$ or $r = -\infty$. When $q$ dies, $q = s$ or $q$ is passive, and $B_A = \{\}$. Since $r$ is still live, $r$ is passive. By Lemma~\ref{lem:soft-search-passive}, $q \ge s > r$. The argument is symmetric if $r$ dies before $q$ and before either $q = \infty$ or $r = -\infty$. \qed
\end{proof}
\begin{theorem} Soft-threshold search is correct.
\end{theorem}
\begin{proof} Let $q$ and $r$ be defined as in Lemma~\ref{lem:soft-search-live}. By that lemma, the search will continue until a cycle is detected or $q > r$. While the search continues, it traverses arcs in exactly the same way as vertex-guided search. Once $q > r$, the continuation test for vertex-guided search fails. If the graph is still acyclic, the continuation test for soft-threshold search may not fail immediately, but no additional arcs can be traversed; any additional iterations of the while loop merely change the state (active, passive, or dead) of various vertices. Such changes do not affect the outcome of the search. \qed
\end{proof}
To implement soft-threshold search, we maintain $F_A$, $F_P$, $B_A$, and $B_P$ as doubly-linked lists. The time per search step is $\mathrm{O}(1)$, not counting the computations associated with a change in $s$ (the two code blocks at the end of the while loop that are executed if $F_A$ or $B_A$ is empty, respectively).
The remaining freedom in the algorithm is the choice of $s$. The following observation guides this choice. Suppose $s$ changes because $F_A$ is empty. The algorithm chooses a new $s$ from $F_P$ and makes active all vertices in $F_P$ that are no greater than $s$. Consider the next change in $s$. If this change occurs because $F_A$ is again empty, then all the vertices that were made active by the first change of $s$, including $s$, are dead, and hence can never become active again. If, on the other hand, this change occurs because $B_A$ is empty, then all the forward vertices that remained passive after the first change in $s$ become dead, and $s$ becomes dead if it is not dead already. That is, either the vertices in $F_P$ no greater than the new $s$, or the vertices in $F_P$ no less than the new $s$, are dead after the next change in $s$. Symmetrically, if $s$ changes because $B_A$ is empty, then either all the vertices in $B_P$ no less than the new $s$, or all the vertices in $B_P$ no greater than the new $s$, are dead after the next change in $s$. To minimize the worst case, we always select $s$ to be the median of the set of choices. This takes time linear in the number of choices~\cite{Blum1973,Schonhage1976,DorZ1999}.
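In a sketch, the median rule reads as follows; a linear-time selection algorithm replaces the sort in order to meet the bound claimed below.
\begin{verbatim}
def choose_median(candidates, pos):
    # median of the candidates by topological position; sorting is
    # O(k log k), linear-time selection gives the bound in the text
    ordered = sorted(candidates, key=lambda x: pos[x])
    return ordered[len(ordered) // 2]
\end{verbatim}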
\begin{theorem} \label{thm:soft-search-time} If each new $s$ is selected to be the median of the set of choices, soft-threshold search takes $\mathrm{O}(m^{3/2})$ time over $m$ arc additions.
\end{theorem}
\begin{proof} Consider a soft-threshold search. For each increase in $s$ we charge an amount equal to the number of vertices in $F_P$ when the change occurs; for each decrease in $s$ we charge an amount equal to the number of vertices in $B_P$ when the change occurs. The charge covers the time spent in the code block associated with the change, including the time to find the new $s$ (the median) and the time to make vertices passive or dead, all of which is linear in the charge. The charge also covers any time spent later to make passive any vertices that became active as a result of the change; this time is $\mathrm{O}(1)$ for each such vertex. The remainder of the search time is $\mathrm{O}(1)$ for initialization plus $\mathrm{O}(1)$ per arc traversal. We claim that the total charge is $\mathrm{O}(1)$ per arc traversal. The theorem follows from the claim and Theorem~\ref{thm:2way-search-arcs}.
The number of vertices in $F \cup B$ is at most the number of arc traversals. We divide the total charge among these vertices, at most two units per vertex. The claim follows.
Consider a change in $s$ other than the last. Suppose this is an increase. Let $k$ be the number of vertices in $F_P$ when this change occurs; $k$ is also the charge for the change. Since $s$ is selected to be the median of $F_P$, at least $\lceil k/2 \rceil$ vertices in $F_P$ are no greater than $s$, and at least $\lceil k/2 \rceil$ vertices in $F_P$ are no less than $s$. If the next change in $s$ is an increase, all the vertices in $F_P$ no greater than $s$ must be dead by the time of the next change. If the next change in $s$ is a decrease, all the vertices in $F_P$ no less than $s$ will be made dead by the next change, including $s$ if it is not dead already. In either case we associate the charge of $k$ with the at least $\lceil k/2 \rceil$ vertices that become dead after the change in $s$ but before or during the next change in $s$.
A symmetric argument applies if $s$ decreases. The charge for the last change in $s$ we associate with the remaining live vertices, at most one unit per vertex. \qed
\end{proof}
Theorem~\ref{thm:soft-search-time} holds (with a bigger constant factor) if each new $s$ is an approximate median of the set of choices; that is, if $s$ is larger than at least $\epsilon k$ of the $k$ choices and smaller than at least $\epsilon k$ of them, for some fixed $\epsilon > 0$. An alternative randomized method is to select each new $s$ uniformly at random from among the choices.
\begin{theorem} \label{thm:soft-search-rtime} If each new $s$ is chosen uniformly at random from among the set of choices, soft-threshold search takes $\mathrm{O}(m^{3/2})$ expected time over $m$ arc additions.
\end{theorem}
\begin{proof} Each selection of $s$ takes time linear in the number of choices. We charge for the changes in $s$ exactly as in the proof of Theorem~\ref{thm:soft-search-time}. The search time is then $\mathrm{O}(1)$ plus $\mathrm{O}(1)$ per arc traversal plus $\mathrm{O}(1)$ per unit of charge. We shall show that the expected total charge for a search is at most linear in the number of vertices in $F \cup B$, which in turn is at most the number of arc traversals. The theorem follows from the bound on expected total charge and Theorem~\ref{thm:2way-search-arcs}.
The analysis of the expected total charge is much like the analysis~\cite{Knuth1972} of Hoare's ``quick select'' algorithm~\cite{Hoare1961}. We construct an appropriate recurrence and prove a linear bound by induction. Consider the situation just before some search step. Let $\mathrm{E}(k)$ be the maximum expected total future charge, given that at most $k$ distinct vertices are candidates for $s$ during future changes of $s$. (A vertex can be a candidate more than once, but we only count it once.) The maximum is over the possible current states of all the data structures; the expectation is over future choices of $s$. We prove by induction on $k$ that $\mathrm{E}(k) \le 4k$.
If $s$ does not change in the future, or if the next change in $s$ is the last one, then the total future charge is at most $k$. Suppose the next change of $s$ is not the last, and the next choice of $s$ is from among $j$ candidates. Each of these $j$ candidates is selected with probability $1/j$. If the new $s$ is the $i^{\text{th}}$ smallest among the candidates, then at least $\min\{i, j - i + 1\}$ of these candidates cannot be future candidates. The charge for this change in $s$ is $j$. The maximum expected future charge, including that for this change in $s$, is at most $j + \sum_{i=1}^{j/2} (2\mathrm{E}(k - i)/j)$ if $j$ is even, at most $j + \mathrm{E}(k - \lceil j/2 \rceil)/j + \sum_{i=1}^{\lfloor j/2 \rfloor} (2\mathrm{E}(k - i)/j)$ if $j$ is odd. Using the induction hypothesis $\mathrm{E}(k') \le 4k'$ for $k' < k$, we find that the maximum expected future charge is at most $j + \sum_{i=1}^{j/2} (8(k - i)/j) = 4k + j - \sum_{i=1}^{j/2} (8i/j) = 4k + j - (4/j)(j/2)(j/2+1) = 4k - 2$ if $j$ is even, at most $j + 4(k - \lceil j/2 \rceil)/j + \sum_{i=1}^{\lfloor j/2 \rfloor} (8(k - i)/j) = 4k + j - 4\lceil j/2 \rceil / j - \sum_{i=1}^{\lfloor j/2 \rfloor} (8i/j) = 4k + j - (4/j)(j/2 + 1/2 + (j/2-1/2)(j/2+1/2)) < 4k + j - (4/j)(j/2)^2 = 4k$ if $j$ is odd. By induction $\mathrm{E}(k) \le 4k$ for all $k$.
Over the entire search, there are at most $|F \cup B|$ candidates for $s$. It follows that the expected total charge over the entire search is at most $4|F
\cup B|$, which is at most four times the number of arcs traversed during the search. \qed
\end{proof}
Soft-threshold search with either method of choosing $s$ uses $\mathrm{O}(n + m)$ space, as do all the algorithms we have discussed so far. Katriel and Bodlaender~\cite{Katriel2006} give a set of examples on which soft-threshold search takes $\mathrm{\Omega}(m^{3/2})$ time no matter how $s$ is chosen, so the bounds in Theorems~\ref{thm:soft-search-time} and~\ref{thm:soft-search-rtime} are tight.
It is natural to ask whether there is a faster algorithm. To address this question, we consider algorithms that maintain (at least) an explicit list of the vertices in topological order and that do any needed reordering by moving one vertex at a time to a new position in this list. All known algorithms do this or can be modified to do so with at most a constant-factor increase in running time. We further restrict our attention to {\em local} algorithms, those that update the order after an arc $(v, w)$ with $v > w$ is added by reordering only affected vertices (defined in Section~\ref{sec:lim-search}: those vertices between $w$ and $v$, inclusive). These vertices form an interval in the old order and must form an interval in the new order; within the interval, any permutation is allowed as long as it restores topological order. Our algorithms, as well as all previous ones except for those of Shmueli~\cite{Shmueli1983} and Bender et al.~\cite{Bender2009}, are local. The following theorem gives a lower bound of $\mathrm{\Omega}(n\sqrt{m})$ on the worst-case number of vertices that must be moved by any local algorithm. Thus for sparse graphs ($m/n = \mathrm{O}(1)$), soft-threshold search is as fast as possible among local algorithms.
\begin{theorem} \label{thm:local-lb} Any local algorithm must reorder $\mathrm{\Omega}(n\sqrt{m})$ vertices, and hence must take $\mathrm{\Omega}(n\sqrt{m})$ time.
\end{theorem}
\begin{proof} Let $p$ and $k$ be arbitrary positive integers such that $p \le k$. We shall give an example with $n = p(k + 1)$ vertices and $m = n - k - 1 + k(k + 1)/2$ arcs that requires at least $pk(k + 1)/2 = nk/2$ vertex movements. Since $p \le k$, $k(k + 1)/2 \le m \le 3k(k + 1)/2$, so $\sqrt{m} = \mathrm{\Theta}(k)$. The example is such that, after $n - k - 1$ initial arc additions, each subsequent arc addition forces at least $p$ vertices to be moved in the topological order, assuming the algorithm is local. The total number of vertex movements is thus at least $pk(k+1)/ 2 = \mathrm{\Omega}(n\sqrt{m})$. Given any target number of vertices $n'$ and target number of arcs $m'$, we can choose $p$ and $k$ so that $n = \mathrm{\Theta}(n')$ and $m = \mathrm{\Theta}(m')$, which gives the theorem.
The construction is quite simple. Let the $n$ vertices be numbered 1 through $n$ in their original topological order. Add $n - k - 1$ arcs so that each interval of $p$ consecutive vertices ending in an integer multiple of $p$ forms a path of the vertices in increasing order (so that vertices 1 through $p$ form a path from 1 to $p$, $p + 1$ through $2p$ form a path from $p + 1$ to $2p$, and so on). Now there are $k + 1$ paths, each containing $p$ vertices. Call these paths $P_1, P_2,\ldots,P_{k + 1}$, in increasing order by first (and last) vertex. Add an arc from the last vertex of $P_2$ (vertex $2p$) to the first vertex of $P_1$ (vertex 1). This forms a path from $p + 1$ through $p + 2, p + 3,\ldots$ to $2p$, then through $1, 2, \ldots$ to $p$. The affected vertices are the vertices 1 through $2p$, and the only way to rearrange them to restore topological order is to move $p + 1$ through $2p$ before 1 through $p$, which takes at least $p$ individual vertex moves. The effect is to swap $P_1$ and $P_2$ in the topological order. Now add an arc from the last vertex of $P_3$ to the first vertex of $P_1$. This forces $P_1$ to swap places with $P_3$, again requiring at least $p$ vertex moves. Continue adding one arc at a time in this way, forcing $P_1$ to swap places with $P_4, P_5,\ldots, P_{k + 1}$. After $k$ additions of arcs from the last vertex of $P_2, P_3,\ldots, P_{k + 1}$ to the first vertex of $P_1$, path $P_1$ has been forced all the way to the top end of the topological order. Now ignore $P_1$ and repeat the construction with $P_2$, forcing it to move past $P_3, P_4,\ldots, P_{k + 1}$ by adding arcs $(3p, p + 1), (4p, p + 1),\ldots, ((k + 1)p, p + 1)$. Do the same with $P_3, P_4,\ldots, P_k$. The total number of arcs added that force vertex moves is $k(k + 1)/2$. Each of these added arcs forces at least $p$ vertex moves. Figure~\ref{fig:soft-lb} gives an example of the construction. \qed
\end{proof}
\begin{figure}
\caption{The $\mathrm{\Omega}(nm^{1/2})$ vertex reordering construction for $p = 3$ and $k = 3$, yielding an example with $n = 12$ vertices and $m = 14$ arcs. (a)
Insertion of arc $(6, 1)$ moves 1, 2, 3 past 4, 5, 6. Insertion of $(9, 1)$
moves 1, 2, 3 past 7, 8, 9. Insertion of $(12, 1)$ moves 1, 2, 3 past 10, 11,
12. (b) Insertion of $(9, 4)$ moves 4, 5, 6 past 7, 8, 9. Insertion of $(12,
4)$ moves 4, 5, 6 past 10, 11, 12. (c) Insertion of $(12, 7)$ moves 7, 8, 9
past 10, 11, 12. (d) Final order. }
\label{fig:soft-lb}
\end{figure}
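The arc sequence of this construction is easy to generate; the following Python sketch (ours) lists the additions for given $p \le k$ in the order used in the proof. For $p = 3$ and $k = 3$ it reproduces the arc sequence of Figure~\ref{fig:soft-lb}.
\begin{verbatim}
def lower_bound_arcs(p, k):
    # vertices are 1..n = p*(k+1), initially in increasing order
    arcs = []
    for b in range(k + 1):          # path arcs inside block P_{b+1}
        for i in range(1, p):
            arcs.append((b * p + i, b * p + i + 1))
    for r in range(1, k + 1):       # force P_r past P_{r+1},...,P_{k+1}
        for q in range(r + 1, k + 2):
            arcs.append((q * p, (r - 1) * p + 1))
    return arcs
\end{verbatim}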
The $\mathrm{\Omega}(n\sqrt{m})$ bound on vertex reorderings is tight. An algorithm that achieves this bound is a two-way search that does not alternate forward and backward arc traversals but instead does forward arc traversals until visiting an unvisited vertex less than $v$, then does backward arc traversals until visiting an unvisited vertex greater than $w$, and repeats. Each forward traversal is along an arc $(u, x)$ with $u$ minimum; each backward traversal is along an arc $(y, z)$ with $z$ maximum. Searching continues until a cycle is detected or there is no compatible pair of untraversed arcs. If the search stops without detecting a cycle, the algorithm reorders the vertices in the same way as in two-way compatible search. One can prove that this method reorders $\mathrm{O}(n\sqrt{m})$ vertices over $m$ arc additions by counting related vertex pairs (as defined in the next section: two vertices are related if one path contains both). Unfortunately we do not know an implementation of this algorithm with an overall time bound approaching the bound on vertex reorderings.
For algorithms that reorder one vertex at a time but are allowed to move unaffected vertices, the only lower bound known is the much weaker one of Ramalingam and Reps~\cite{Ramalingam1994}. They showed that $n - 1$ arc additions can force any algorithm, local or not, to do $\mathrm{\Omega}(n\log n)$ vertex moves.
\section{Topological Search} \label{sec:top-search}
\SetKwFunction{topologicalsearch}{Topological-Search} \SetKwFunction{reorder}{Reorder}
Soft-threshold search is efficient on sparse graphs but becomes less and less efficient as the graph becomes denser; indeed, if $m = \mathrm{\Omega}(n^2)$ the time bound is $\mathrm{O}(n^3)$, the same as that of one-way limited search (Section \ref{sec:lim-search}). In this section we give an alternative algorithm that is efficient for dense graphs. The algorithm uses two-way search, but differs in three ways from the methods discussed in Sections \ref{sec:2way-search} and \ref{sec:soft-search}: it balances vertices visited instead of arcs traversed (as in the method sketched at the end of Section~\ref{sec:soft-search}); it searches the topological order instead of the graph; and it uses a different reordering method, which has the side benefit of making it a topological sorting algorithm. We call the algorithm {\em topological search}.
We represent the topological order by an explicit mapping between the vertices and the integers from 1 to $n$. We denote by $\mathit{position}(v)$ the number of vertex $v$ and by $\mathit{vertex}(i)$ the vertex with number $i$. We implement $\mathit{vertex}$ as an array. The initial numbering is arbitrary; it is topological since there are no arcs initially. If $v$ and $w$ are vertices, we test $v < w$ by comparing $\mathit{position}(v)$ to $\mathit{position}(w)$. We represent the graph by an adjacency matrix $A: A(v, w) = 1$ if $(v, w)$ is an arc, $A(v, w) = 0$ if not. Testing whether $(v, w)$ is an arc takes $\mathrm{O}(1)$ time, as does adding an arc. Direct representation of $A$ uses $\mathrm{O}(n^2)$ bits of space; representation of $A$ by a hash table reduces the space to $\mathrm{O}(n + m)$ but makes the algorithm randomized.
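A direct rendering of this representation in Python (an illustration only; a hash table of arc pairs would replace the matrix in the $\mathrm{O}(n + m)$-space variant) is the following:
\begin{verbatim}
class DenseGraph:
    def __init__(self, n):
        # 1-indexed arrays; the initial numbering is the identity
        self.vertex = [None] + list(range(1, n + 1))    # vertex(i)
        self.position = [None] + list(range(1, n + 1))  # position(v)
        self.A = [[0] * (n + 1) for _ in range(n + 1)]  # adjacency matrix

    def before(self, v, w):         # is v < w in topological order?
        return self.position[v] < self.position[w]

    def add_arc(self, v, w):        # O(1) insertion
        self.A[v][w] = 1

    def has_arc(self, v, w):        # O(1) membership test
        return self.A[v][w] == 1
\end{verbatim}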
To simplify the running time analysis and the extension to strong component maintenance (Section \ref{sec:strong}), we test for cycles after the search. Thus the algorithm consists of three parts: the search, the cycle test, and the vertex reordering. Let $(v, w)$ be a new arc with $v > w$. The search examines every affected vertex (those between $w$ and $v$ in the order, inclusive). It builds a queue $F$ of vertices reachable from $w$ by searching forward from $w$, using a current position $i$, initially $\mathit{position}(w)$. Concurrently, it builds a queue $B$ of vertices from which $v$ is reachable by searching backward from $v$, using a current position $j$, initially $\mathit{position}(v)$. It alternates between adding a vertex to $F$ and adding a vertex to $B$ until the forward and backward searches meet. When adding a vertex $z$ to $F$ or $B$, the method sets $\mathit{vertex}(\mathit{position}(z)) = \textit{null}$.
In giving the details of this method, we use the following notation for queue operations: $[\,]$ denotes an empty queue; $\mathit{inject}(x, Q)$ adds element $x$ to the back of queue $Q$; $\mathit{pop}(Q)$ deletes the front element $x$ from queue $Q$ and returns $x$; if $Q$ is empty, $\mathit{pop}(Q)$ leaves $Q$ empty and returns null. Do the search by calling \topologicalsearch{$v$,$w$}, where procedure \topologicalsearch is defined in Figure~\ref{alg:top-search}.
\begin{figure}
\caption{Implementation of topological search.}
\label{alg:top-search}
\end{figure}
Once the search finishes, test for a cycle by checking whether there is an arc $(u, z)$ with $u$ in $F$ and $z$ in $B$. If there is no such arc, reorder the vertices as follows. Let $F$ and $B$ be the queues at the end of the search, and let $k$ be the common value of $i$ and $j$ at the end of the search. Then $\mathit{vertex}(k) = \textit{null}$. If the search stopped after incrementing $i$, then $\mathit{vertex}(k)$ was added to $B$, and $F$ and $B$ contain the same number of vertices. Otherwise, the search stopped after decrementing $j$, $\mathit{vertex}(k)$ was added to $F$, and $F$ contains one more vertex than $B$. In either case, the number of positions
$g \ge k$ such that $\mathit{vertex}(g) = \textit{null}$ is $|F|$, and the number of positions $g
< k$ such that $\mathit{vertex}(g) = \textit{null}$ is $|B|$. Reinsert the vertices in $F \cup B$ into the vertex array, moving additional vertices as necessary, by calling \reorder, using as the initial values of $F$, $B$, $i$, $j$ their values at the end of the search, where procedure \reorder is defined in Figure~\ref{alg:reorder}.
\begin{figure}
\caption{Implementation of reordering.}
\label{alg:reorder}
\end{figure}
The reordering process consists of two almost-symmetric while loops. The first loop reinserts the vertices in $F$ into positions $k$ and higher. Variable $i$ is the current position. If $\mathit{vertex}(i)$ is a vertex $q$ with an arc from a vertex currently in $F$, vertex $q$ is added to the back of $F$ and $\mathit{vertex}(i)$ becomes null: vertex $q$ must be moved to a higher position. If $\mathit{vertex}(i)$ becomes null, or if $\mathit{vertex}(i)$ was already null, the front vertex in $F$ is deleted from $F$ and becomes $\mathit{vertex}(i)$. The second loop reinserts the vertices in $B$ into positions $k - 1$ and lower in symmetric fashion. The only difference between the loops is that the forward loop increments $i$ last, whereas the backward loop decrements $j$ first, to avoid examining $\mathit{vertex}(k)$. The forward and backward loops are completely independent and can be executed in parallel. (This is not true of the forward and backward searches.) Figure~\ref{fig:top-search} gives an example of topological search and reordering.
\begin{figure}
\caption{ Topological search and reordering of the graph in Figure~\ref{fig:lim-search}. (a) Initially positions 3 and 11, of $w$ and $v$, respectively, become empty, $F = [w]$, $B = [v]$. The search adds $c$ to $F$, $g$ to $B$, $f$ to $F$, and stops with $i = j = 7$. (b) Forward reordering begins from position 7. Vertex $w$ drops into position 7, $c$ drops into position 8. Vertex $h$ in position 9 has an arc from $f$, still in $F$: $h$ is added to $F$, $f$ drops into position 9. Vertex $i$ in position 10 has no arc from any vertex still in $F$. Vertex $h$ drops into position 11. (c) Backward reordering begins from position 6. Vertex $e$ has an arc to $g$; $e$ is added to $B$ and replaced by $v$. Vertices $g$ and $e$ drop into positions 4 and 3, respectively. (d) Final order. Forward and backward reordering are independent and can be done in either order or concurrently. }
\label{fig:top-search}
\end{figure}
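The description of the two reordering loops translates almost line for line into the following Python sketch (ours), built on the \texttt{DenseGraph} sketch above with \texttt{collections.deque} standing in for the queues; $F$, $B$, $i$, and $j$ have their values from the end of the search. Each evaluation of \texttt{has\_arc} corresponds to one of the arc tests counted in the running-time analysis below, and on the example of Figure~\ref{fig:top-search} the sketch reproduces the vertex moves described in the caption.
\begin{verbatim}
from collections import deque

def reorder(g, F, B, i, j):
    F, B = deque(F), deque(B)
    while F:                    # reinsert F into positions k and higher
        q = g.vertex[i]
        if q is not None and any(g.has_arc(x, q) for x in F):
            F.append(q)         # q has an arc from F: it must move higher
            g.vertex[i] = None
        if g.vertex[i] is None:
            q = F.popleft()     # the front of F drops into position i
            g.vertex[i] = q
            g.position[q] = i
        i += 1                  # the forward loop increments i last
    while B:                    # reinsert B into positions k - 1 and lower
        j -= 1                  # the backward loop decrements j first
        q = g.vertex[j]
        if q is not None and any(g.has_arc(q, y) for y in B):
            B.append(q)         # q has an arc to B: it must move lower
            g.vertex[j] = None
        if g.vertex[j] is None:
            q = B.popleft()
            g.vertex[j] = q
            g.position[q] = j
\end{verbatim}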
\begin{theorem} \label{thm:top-corr} Topological search is correct.
\end{theorem}
\begin{proof} Let $(v, w)$ be a new arc such that $v > w$. The search maintains the invariant that every vertex in $F$ is reachable from $w$ and $v$ is reachable from every vertex in $B$. Thus if there is an arc $(u, z)$ with $u$ in $F$ and $z$ in $B$, there is a cycle. Suppose the addition of $(v, w)$ creates a cycle. The cycle consists of $(v, w)$ and a path $P$ from $w$ to $v$ of vertices in increasing order. Let $u$ be the largest vertex on $P$ that is in $F$ at the end of the search. Since $u \ne v$, there is an arc $(u, z)$ on $P$. Vertex $z$ must be in $B$, or the search would not have stopped. We conclude that the algorithm reports a cycle if and only if the addition of $(v, w)$ creates one.
Suppose the addition of $(v, w)$ does not create a cycle. When the search stops, the number of positions $g \ge i$ such that $\mathit{vertex}(g) = \textit{null}$ is $|F|$. The forward reordering loop maintains this invariant as it updates $F$. It also maintains the invariant that once position $i$ is processed, every position from $k$ to $i$, inclusive, is non-null. Thus if $i = n + 1$, $F$ must be empty, and the loop terminates. Symmetrically, the backward reordering loop terminates before $j$ can become 0. Thus all vertices in $F \cup B$ at the end of the search are successfully reordered; some other vertices may also be reordered. Let $\overline{F}$ and $\overline{B}$ be the sets of vertices added to $F$ and to $B$, respectively, during the search and reordering. Vertices in $\overline{F}$ move to higher positions, vertices in $\overline{B}$ move to lower positions, and no other vertices move.
All vertices in $\overline{F}$ are reachable from $w$, and $v$ is reachable from all vertices in $\overline{B}$. We show by case analysis that after the reordering every arc $(x, y)$ has $x < y$. There are five cases, of which two pairs are symmetric. Suppose $x$ and $y$ are both in $\overline{F} \cup \overline{B}$. Since there is no cycle, it cannot be the case that $x$ is in $\overline{F}$ and $y$ is in $\overline{B}$. The reordering moves all vertices in $\overline{F}$ after all vertices in $\overline{B}$ without changing the order of vertices in $\overline{F}$ and without changing the order of vertices in $\overline{B}$. It follows that $x < y$ after the reordering. This includes the case $(x, y) = (v, w)$, since $w$ is in $\overline{F}$ and $v$ is in $\overline{B}$. Suppose $y$ is in $\overline{F}$ and $x$ is not in $\overline{F} \cup \overline{B}$. The reordering does not move $x$ and moves $y$ higher in the order, so $x < y$ after the reordering. The case of $x$ in $\overline{B}$ and $y$ not in $\overline{F} \cup \overline{B}$ is symmetric. Suppose $x$ is in $\overline{F}$ and $y$ is not in $\overline{F} \cup \overline{B}$. Since $x < y$ before the reordering, the first loop of the reordering must reinsert $x$ before it reaches the position of $y$; otherwise $y$ would be in $\overline{F}$. Thus $x < y$ after the reordering. The case $y$ in $\overline{B}$ and $x$ not in $\overline{F} \cup \overline{B}$ is symmetric. \qed
\end{proof}
To bound the running time of topological search, we extend the concept of relatedness to vertex pairs. We say two vertices are {\em related} if they are on a common path. Relatedness is symmetric; order on the path does not matter.
\begin{lemma} \label{lem:top-cycle-time} Over $m$ arc additions, topological search spends $\mathrm{O}(n^2)$ time testing for cycles.
\end{lemma}
\begin{proof} Suppose addition of an arc $(v, w)$ triggers a search. Let $F$ and $B$ be the values of the corresponding variables at the end of the search. The test for cycles takes
$\mathrm{O}(|F||B|)$ time. If this is the last arc addition, the test takes $\mathrm{O}(n^2)$ time. Each earlier addition does not create a cycle; for such an addition, each pair $x$ in $F$ and $y$ in $B$ is related after the addition but not before: before the reordering $x < y$, so if $x$ and $y$ were related there would be a path from $x$ to $y$, and the addition of $(v, w)$ would create a cycle, consisting of a path from $w$ to $x$, the path from $x$ to $y$, a path from $y$ to $v$, and arc $(v,w)$. Since there are at most ${n \choose 2}$ related vertex pairs, the time for all cycle tests other than the last is $\mathrm{O}(n^2)$. \qed
\end{proof}
For each move of a vertex during reordering, we define the {\em distance} of the move to be the absolute value of the difference between the positions of the vertex in the old and new orders.
\begin{lemma} \label{lem:sum-distances} Over all arc additions, except the last one if it creates a cycle, the time spent by topological search doing search and reordering is at most a constant times the sum of the distances of all the vertex moves.
\end{lemma}
\begin{proof} Consider an arc addition that triggers a search and reordering. Consider a vertex $q$ that is moved to a higher position; that is, it is added to $F$ during either the search or the reordering and eventually placed in a new position during the reordering. Let $i_1$ be its position before the reordering and $i_2$ its position after the reordering. When $q$ is added to $F$, $i = i_1$; when $q$ is removed from $F$, $i = i_2$. For each value of $i$ greater than $i_1$ and no greater than $i_2$, there may be a test for the existence of an arc $(q, \mathit{vertex}(i))$: such a test can occur during forward search or forward reordering but not both. The number of such tests is thus at most $i_2 - i_1$, which is the distance $q$ moves. A symmetric argument applies to a vertex moved to a lower position. Every test for an arc is covered by one of these two cases. Thus the number of arc tests is at most the sum of the distances of vertex moves. The total time spent in search and reordering is $\mathrm{O}(1)$ per increment of $i$, per decrement of $j$, and per arc test. For each increment of $i$ or decrement of $j$ there is either an arc test or an insertion of a vertex into its final position. The number of such insertions is at most one per vertex moved. The lemma follows. \qed
\end{proof}
It remains to analyze the sum of the distances of the vertex moves. To simplify the analysis, we decompose the moves into pairwise swaps of vertices. Consider sorting a permutation of $1$ through $n$ by doing a sequence of pairwise swaps of out-of-order elements. The {\em distance} of a swap is twice the absolute value of the difference between the positions of the swapped elements; the factor of two accounts for the two elements that move. The sequence of swaps is {\em proper} if, once a pair is swapped, no later swap reverses its order.
Consider the behavior of topological search over a sequence of arc additions, excluding the last one if it creates a cycle. Identify the vertices with their final positions. Then the topological order is a permutation, and the final permutation is sorted.
\begin{lemma} \label{lem:proper-swaps} There is a proper sequence of vertex swaps whose total distance equals the sum of the distances of all the reordering moves.
\end{lemma}
\begin{proof} Consider an arc addition that triggers a search and reordering. As in the proof of Theorem~\ref{thm:top-corr}, let $\overline{F}$ and $\overline{B}$ be the sets of vertices added to $F$ and to $B$, respectively, during the search and reordering. Consider the positions of the vertices in $\overline{F} \cup \overline{B}$ before and after the reordering. After the reordering, these positions from lowest to highest are occupied by the vertices in $\overline{B}$ in their original order, followed by the vertices in $\overline{F}$ in their original order. We describe a sequence of swaps that moves the vertices in $\overline{F} \cup \overline{B}$ from their positions before the reordering to their positions after the reordering. Given the outcome of the swaps so far, the next swap is of any two vertices $x$ in $\overline{F}$ and $y$ in $\overline{B}$ such that $x$ is in a smaller position than $y$ and no vertex in $\overline{F} \cup \overline{B}$ is in a position between that of $x$ and that of $y$. The swap of $x$ and $y$ moves $x$ higher, moves $y$ lower, and preserves the order of the vertices in $\overline{F}$ as well as the order of the vertices in $\overline{B}$. If no swap is possible, all vertices in $\overline{F}$ must follow all vertices in $\overline{B}$, and since swaps preserve the order within $\overline{F}$ and within $\overline{B}$ the vertices are now in their positions after the reordering. Only a finite number of swaps can occur, since each vertex can only move a finite distance (higher for a vertex in $\overline{F}$, lower for a vertex in $\overline{B}$). The total distance of the moves of the vertices in $\overline{F}$ is exactly half the distance of the swaps, as is the total distance of the moves of the vertices in $\overline{B}$. Any particular pair of vertices is swapped at most once. Repeat this construction for each arc addition. If an arc addition causes a swap of $x$ and $y$, with $x$ moving higher and $y$ moving lower, then the arc addition creates a path from $y$ to $x$, and no later arc addition can cause a swap of $x$ and $y$. Thus the swap sequence is proper. \qed
\end{proof}
The following lemma was proved by Ajwani et al. \cite{Ajwani2006} as part of the analysis of their $\mathrm{O}(n^{11/4})$-time algorithm. Their proof uses a linear program. We give a combinatorial argument.
\begin{lemma} \label{lem:swap-distance} {\em \cite{Ajwani2006}} Given an initial permutation of $1$ through $n$, any proper sequence of swaps has total distance $\mathrm{O}(n^{5/2})$.
\end{lemma}
\begin{proof} If $\mathrm{\Pi}$ is a permutation of 1 to $n$, we denote by $\mathrm{\Pi}(i)$ the $i^{\text{th}}$ element of $\mathrm{\Pi}$. We define the {\em potential} of $\mathrm{\Pi}$ to be $\sum_{i<j} (\mathrm{\Pi}(i) - \mathrm{\Pi}(j))$. The potential is always between $-n^3$ and $n^3$. We compute the change in potential caused by a swap in a proper swap sequence. Let $\mathrm{\Pi}$ be the permutation before the swap, and let $i < j$ be the positions in $\mathrm{\Pi}$ of the pair of elements ($\mathrm{\Pi}(i)$ and $\mathrm{\Pi}(j)$) that are swapped. The distance $d$ of the swap is $2(j - i)$. Since the swap sequence is proper, $\mathrm{\Pi}(i) > \mathrm{\Pi}(j)$. Swapping $\mathrm{\Pi}(i)$ and $\mathrm{\Pi}(j)$ reduces the contribution to the potential of the pair $i,j$ by $2(\mathrm{\Pi}(i) - \mathrm{\Pi}(j))$. The swap also changes the contributions to the potential of pairs other than $i,j$, specifically those pairs exactly one of whose elements is $i$ or $j$. We consider three cases for the other element of the pair, say $k$. If $k < i$, the swap increases the contribution of $k,i$ by $\mathrm{\Pi}(i) - \mathrm{\Pi}(j)$ and decreases the contribution of $k,j$ by $\mathrm{\Pi}(i) - \mathrm{\Pi}(j)$, for a net change of zero. Similarly, if $j < k$, the swap decreases the contribution of $i,k$ by $\mathrm{\Pi}(i) - \mathrm{\Pi}(j)$ and increases the contribution of $j,k$ by $\mathrm{\Pi}(i) - \mathrm{\Pi}(j)$, for a net change of zero. More interesting is what happens if $i < k < j$. In this case the swap decreases the contribution of both $i,k$ and $k,j$ by $\mathrm{\Pi}(i) - \mathrm{\Pi}(j)$. There are $j - i - 1$ such values of $k$. Summing over all pairs, we find that the swap decreases the potential of the permutation by $2(\mathrm{\Pi}(i) - \mathrm{\Pi}(j))( 1 + j - i - 1) = d(\mathrm{\Pi}(i) - \mathrm{\Pi}(j))$.
Call a swap of $\mathrm{\Pi}(i)$ and $\mathrm{\Pi}(j)$ {\em small} if $\mathrm{\Pi}(i) - \mathrm{\Pi}(j) < \sqrt{n}$ and {\em big} otherwise. Because the swap sequence is proper, a given pair can be swapped at most once. Thus there are $\mathrm{O}(n^{3/2})$ small swaps. Each has distance at most $2(n - 1)$, so the sum of the distances of all small swaps is $\mathrm{O}(n^{5/2})$. A big swap of distance $d$ reduces the potential by at least $d\sqrt{n}$. Since the sum of the potential decreases over all swaps is $\mathrm{O}(n^3)$, the sum of the distances of all big swaps is $\mathrm{O}(n^{5/2})$. \qed
\end{proof}
The proof of Lemma \ref{lem:swap-distance} does not require that the swap sequence be proper; it suffices that every swap is of an out-of-order pair and no pair of elements is swapped more than once. The lemma may even be true if all swaps are of out-of-order pairs with some pairs swapped repeatedly, but our proof fails in this case, because our bound on the distance of the small swaps requires that there be $\mathrm{O}(n^{3/2})$ of them.
\begin{theorem} \label{thm:top-search-time} Over $m$ arc additions, topological search spends $\mathrm{O}(n^{5/2})$ time.
\end{theorem}
\begin{proof} Topological search spends $\mathrm{O}(n^2)$ time on the last arc addition. By Lemmas~\ref{lem:top-cycle-time}--\ref{lem:swap-distance}, it spends $\mathrm{O}(n^{5/2})$ time on all the rest. \qed
\end{proof}
The bound of Theorem~\ref{thm:top-search-time} may be far from tight. In the remainder of this section we discuss lower bounds on the running time of topological search, and we speculate on improving the upper bound.
Katriel~\cite{Katriel2004} showed that any topological sorting algorithm that is local (as defined in Section~\ref{sec:lim-search}: the algorithm reorders only affected vertices) must do $\mathrm{\Omega}(n^2)$ vertex renumberings on a sequence of arc additions that form a path. This bound is $\mathrm{\Omega}(n)$ amortized per arc on a graph of $\mathrm{O}(n)$ arcs. She also proved that the topological sorting algorithm of Pearce and Kelly~\cite{Pearce2006} does $\mathrm{O}(n^2)$ vertex renumberings. Since topological search is a local topological sorting algorithm, her lower bound applies to this algorithm. Her lower bound on vertex reorderings is tight for topological search, since a proper sequence of swaps contains at most ${n \choose 2}$ swaps, and each pair of reorderings corresponds to at least one swap.
To get a bigger lower bound, we must bound the total distance of vertex moves, not their number. Ajwani~\cite{AjwaniThesis} gave a result for a related problem that implies the following: on a sequence of arc additions that form a path, topological search can take $\mathrm{\Omega}(n^2\log n)$ time. We proved this result independently in our conference paper~\cite{Haeupler2008b}; our proof uses the same construction as Ajwani's proof. This bound is $\mathrm{\Omega}(n\log n)$ amortized per arc on a graph of $\mathrm{O}(n)$ arcs.
We do not know if Ajwani's bound is tight for graphs with $\mathrm{O}(n)$ arcs, but it is not tight for denser graphs. There is an interesting connection between the running time of topological search and the notorious $k$-levels problem of computational geometry. Uri Zwick~(private communication, 2009) pointed this out to us. The $k$-levels problem is the following: Consider the intersections of $n$ lines in the plane in general position: each intersection is of only two lines, and the intersections have distinct $x$-coordinates. An intersection is a {\em $k$-intersection} if there are exactly $k$ lines below it (and $n - k - 2$ lines above it). What is the maximum number of $k$-intersections as a function of $n$ and $k$? For our purposes it suffices to consider $n$ even and $k = n/2 - 1$. We call an intersection with $n/2 - 1$ lines below it a {\em halving intersection}. The current best upper and lower bounds on the maximum number of halving intersections are $\mathrm{O}(n^{4/3})$~\cite{Dey1998} and $\mathrm{\Omega}(n2^{\sqrt{2\lg n}}/\sqrt{\lg n})$ (~\cite{Nivasch2008}; see also~\cite{Toth2001}).
The relationship between the $k$-levels problem and our problem does not require that the lines be straight; it only requires that each pair intersect only once. Thus instead of a set of lines we consider a set of {\em pseudolines}, arbitrary continuous functions from the real numbers to the real numbers, each pair of which intersect at most once. Such a set is in {\em general position} if no point is common to three or more pseudolines, no two intersections of pseudolines have the same $x$-coordinate, and each intersection is a crossing intersection: if pseudolines $P$ and $Q$ intersect and $P$ is above $Q$ to the left of the intersection, then $Q$ is above $P$ to the right of the intersection. The best bounds on the number of halving intersections of $2n$ pseudolines in general position are $\mathrm{O}(n^{4/3})$ (~\cite{TamakiT2003}; see also~\cite{SharirS2003}) (the same as for lines) and $\mathrm{\Omega}(n2^{\sqrt{2\lg n}})$~\cite{Zwick2005}. The latter gives a lower bound of $\mathrm{\Omega}(n^2 2^{\sqrt{2\lg n}})$ on the worst-case running time of topological search, as we now show.
\begin{theorem} \label{thm:halving-lb} Let $n$ be even. On a graph of $3n/2$ vertices, topological search can spend $\mathrm{\Omega}(n)$ time per arc addition for at least $H(n)$ arc additions, where $H(n)$ is the maximum number of halving intersections of $n$ pseudolines in the plane in general position.
\end{theorem}
\begin{proof} Given a set of $n$ pseudolines with $H(n)$ halving intersections, we construct a sequence of $H(n)$ arc additions on a graph of $3n/2$ vertices on which topological search spends $\mathrm{\Omega}(n)$ time on each arc addition. Given such a set of pseudolines, choose a value $x_0$ of the $x$-coordinate sufficiently small that all the halving intersections have $x$-coordinates larger than $x_0$. Number the pseudolines from 1 to $n$ from highest to lowest $y$-coordinate at $x_0$, so that the pseudoline with the highest $y$-coordinate gets number 1 and the one with the lowest gets number $n$. Construct a graph with $3n/2$ vertices and an initial (arbitrary) topological order. Number the first $n/2$ vertices in order from $n$ down to $n/2 + 1$, and number the last $n/2$ vertices in order from $n/2$ down to 1, so that the first vertex gets number $n$, the $(n/2)^{\text{th}}$ gets number $n/2 + 1$, the middle $n/2$ get no number, the $(n + 1)^{\text{st}}$ gets number $n/2$, and the last gets number 1. These numbers are permanent and are a function only of the initial order. Identify vertices by their number. Process the halving intersections in order by $x$-coordinate. If the $k^{\text{th}}$ halving intersection is of pseudolines $i$ and $j$ with $i < j$, add an arc $(i, j)$ to the graph. To the left of the intersection, pseudoline $i$ is above pseudoline $j$; to the right of the intersection, pseudoline $j$ is above pseudoline $i$. Figure~\ref{fig:halving-lb} illustrates this construction.
\begin{figure}
\caption{(a) A set of $n = 8$ pseudolines with $H(n) = 7$ halving intersections. Although the pseudolines are straight in this example, in general they need not be. (b) The corresponding sequence of arc additions on a graph of $3n/2 = 12$ vertices on which topological search takes $\Omega(nH(n))$ time. The arc additions correspond to the halving intersections processed in increasing order by $x$-coordinate; only the first four arc additions are shown.}
\label{fig:halving-lb}
\end{figure}
Since each arc $(i, j)$ has $i < j$, the graph remains acyclic. Since two pseudolines have only one intersection, a given arc is added only once. Consider running topological search on this set of arc additions. We claim that each arc addition moves exactly one vertex from the last third of the topological order to the first third and vice-versa; the vertices in the middle third are never reordered. Each such arc addition takes $\mathrm{\Omega}(n)$ time, giving the theorem.
To verify the claim, we prove the following invariant by induction on the number of arc additions: the vertices in the first and last thirds of the topological order have the same numbers as the bottom and top halves of the pseudolines, respectively. In particular, a halving intersection of two pseudolines $i, j$ with $i < j$ corresponds to a swap of vertex $i$ in the last third with vertex $j$ in the first third, giving the claim.
Intersections that are not halving intersections preserve the invariant. Suppose the invariant is true just to the left of a halving intersection of pseudolines $i$ and $j$ with $i < j$. Just to the left of the intersection, pseudolines $i$ and $j$ are the $n/2$ and $n/2 + 1$ highest pseudolines, respectively. By the induction hypothesis, just before the addition of $(i, j)$ vertex $i$ is in the last third of the topological order and vertex $j$ is in the first third. Suppose that just before the addition of $(i, j)$ there is an arc $(j, k)$ with $k$ in the first third. Then $j < k$, but pseudoline $k$ is in the bottom half and hence must be below pseudoline $j$. This is impossible, since the existence of the arc $(j, k)$ implies that pseudoline $k$ crossed above pseudoline $j$ to the left of the intersection of $i$ and $j$. Thus there can be no such arc $(j, k)$. Symmetrically, there can be no arc $(k, i)$ with $k$ in the last third. It follows that the topological search triggered by the addition of $(i, j)$ will compute a set $F$ all of whose vertices except $j$ are in the last third and a set $B$ all of whose vertices except $i$ are in the first third. The subsequent reordering will move $i$ to the first third, move $j$ to the last third, and possibly reorder other vertices within the first and last thirds. Thus the invariant remains true after the addition of $(i, j)$. By induction the invariant holds, giving the claim, and the theorem. \qed
\end{proof}
\begin{corollary} There is a constant $c > 0$ such that, for all $n$, there is a sequence of arc additions on which topological search takes at least $cn^2 2^{\sqrt{2\lg n}}$ time.
\end{corollary}
Unfortunately the reduction in the proof of Theorem~\ref{thm:halving-lb} goes only one way. We have been unable to construct a reduction in the other direction, nor are we able to derive a better upper bound for topological search via the methods used to derive upper bounds on the number of halving intersections.
\section{Strong Components} \label{sec:strong}
All the known topological ordering algorithms can be extended to the maintenance of strong components with at most a constant-factor increase in running time. Pearce \cite{Pearce2005} and Pearce and Kelly \cite{Pearce2003b} sketch how to extend their algorithm and that of Marchetti-Spaccamela et al.~\cite{Marchetti1996} to strong component maintenance. Here we describe how to extend soft-threshold search and topological search. The techniques differ slightly for the two algorithms, since one algorithm is designed for the sparse case and the other for the dense case.
We formulate the problem as follows: Maintain the partition of the vertices defined by the strong components. For each strong component, maintain a {\em canonical vertex}. The canonical vertex represents the component; the algorithm is free to choose any vertex in the component to be the canonical vertex. Support the query $\mathit{find}(v)$, which returns the canonical vertex of the component containing vertex $v$. Maintain a list of the canonical vertices in a topological order of the corresponding components.
To represent the vertex partition, we use a disjoint set data structure~\cite{Tarjan1975,Tarjan1984}. This structure begins with the partition consisting of singletons and supports find queries and the operation $\mathit{unite}(x, y)$, which, given canonical vertices $x$ and $y$, forms the union of the sets containing $x$ and $y$ and makes $x$ the canonical vertex of the new set. If the sets are represented by trees, the finds are done using path compression, and the unites are done using union by rank, the amortized time per find is $\mathrm{O}(1)$ provided that a total of $\mathrm{O}(n\log n)$ time is charged to the unites~\cite{Tarjan1984}. (In fact, the time charged to the unites can be made much smaller, but this weak bound suffices for us.) Since searching and reordering take much more than $n\log n$ time, we can treat the set operations as taking $\mathrm{O}(1)$ amortized time each.
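{\em Remark}: For concreteness, the following Python sketch (the class and member names are ours, not part of the algorithm) shows one standard realization of this structure using union by rank and path compression; since the tree root is chosen by rank rather than by name, the canonical vertex requested by $\mathit{unite}(x, y)$ is simply recorded in a label stored at the root.
\begin{verbatim}
class DisjointSets:
    """Disjoint set union supporting find(v) and unite(x, y),
    where unite makes x the canonical vertex of the combined set."""
    def __init__(self, vertices):
        self.parent = {v: v for v in vertices}
        self.rank = {v: 0 for v in vertices}
        self.label = {v: v for v in vertices}  # canonical vertex stored at each root

    def _root(self, v):
        root = v
        while self.parent[root] != root:
            root = self.parent[root]
        while self.parent[v] != root:          # path compression
            self.parent[v], v = root, self.parent[v]
        return root

    def find(self, v):
        """Return the canonical vertex of the set containing v."""
        return self.label[self._root(v)]

    def unite(self, x, y):
        """x and y must be canonical vertices of distinct sets;
        x becomes the canonical vertex of their union."""
        rx, ry = self._root(x), self._root(y)
        if self.rank[rx] < self.rank[ry]:      # union by rank
            rx, ry = ry, rx
        self.parent[ry] = rx
        if self.rank[rx] == self.rank[ry]:
            self.rank[rx] += 1
        self.label[rx] = x
\end{verbatim}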
To maintain strong components using soft-threshold search, we represent the graph by storing, for each canonical vertex $x$, a list of arcs out of its component, namely those arcs $(y, z)$ with $\mathit{find}(y) = x$, and a list of arcs into its component, namely those arcs $(y, z)$ with $find(z) = x$. This represents the graph of strong components, except that there may be multiple arcs between the same pair of strong components, and there may be loops, arcs whose ends are in the same component. When doing a search, we delete loops instead of traversing them. When the addition of an arc $(v, w)$ combines several components into one, we form the incoming list and the outgoing list of the new component by combining the incoming lists and outgoing lists, respectively, of the old components. This takes $\mathrm{O}(1)$ time per old component, if the incoming and outgoing lists are circular. Deletion of a loop takes $\mathrm{O}(1)$ time if the arc lists are doubly linked.
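{\em Remark}: The constant-time list manipulations used above amount to splicing together, and unlinking nodes of, circular doubly linked lists. A Python sketch of these two primitives (names ours) follows.
\begin{verbatim}
class ArcNode:
    """A node of a circular doubly linked arc list."""
    def __init__(self, arc):
        self.arc = arc
        self.prev = self
        self.next = self

def splice(a, b):
    """Combine two circular lists, given one node of each
    (or None if a list is empty), in O(1) time."""
    if a is None:
        return b
    if b is None:
        return a
    a_next, b_next = a.next, b.next
    a.next, b_next.prev = b_next, a
    b.next, a_next.prev = a_next, b
    return a

def unlink(node):
    """Delete a node (e.g. a loop arc) from its list in O(1) time;
    returns some remaining node, or None if the list becomes empty."""
    if node.next is node:
        return None
    node.prev.next = node.next
    node.next.prev = node.prev
    return node.next
\end{verbatim}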
Henceforth we identify each strong component with its canonical vertex, and we abbreviate $\mathit{find}(x)$ by $f(x)$. If a new arc $(v, w)$ has $f(v) > f(w)$, do a soft-threshold search forward from $f(w)$ and backward from $f(v)$. During the search, do not stop when a forward arc traversal reaches a component in $B$ or when a backward arc traversal reaches a component in $F$. Instead, allow components to be in both $F$ and $B$. Once the search stops, form the new component, if any. Then reorder the canonical vertices and delete from the order those that are no longer canonical. Here are the details. When a new arc $(v, w)$ has $f(v) > f(w)$, do the search by calling \softthresholdsearch{$f(v)$,$f(w)$}, where \softthresholdsearch is defined as in Section~\ref{sec:soft-search} but with the macro \searchstep redefined as in Figure~\ref{alg:search-step-strong}. The new version of \searchstep is just like the old one except that it visits canonical vertices instead of all vertices, it uses circular instead of linear arc lists, and it does not do cycle detection: \softthresholdsearch terminates only when $F_A$ or $B_A$ is empty, and it always returns null.
\begin{figure}
\caption{Redefinition of \searchstep to find strong components using soft-threshold search.}
\label{alg:search-step-strong}
\end{figure}
Once the search finishes, let $t = \min(\{f(v)\} \cup \{x \in F| \mathit{out}(x) \ne \textit{null}\})$. Compute the sets $F_<$ and $B_>$. Find the new component, if any, by running a static linear-time strong components algorithm on the subgraph of the graph of strong components whose vertex set is $X = F_< \cup \{t\} \cup B_>$ and whose arc set is $Y = \{(f(u), f(x))|(u, x) \text{ is an arc with } f(u) \in F_< \text{ and } f(u) \ne f(x)\} \cup \{(f(y), f(z))|(y, z) \text{ is an arc with } f(z) \in B_> \text{ and } f(y) \ne f(z)\}$. If a new component is found, combine the old components it contains into a new component with canonical vertex $f(v)$.
Reorder the list of vertices in topological order by moving the vertices in $X - \{t\}$ as in Section~\ref{sec:2way-search}. Then delete from the list all vertices that are no longer canonical, namely the canonical vertices other than $f(v)$ of the old components contained in the new component.
{\em Remark}: Since the addition of $(v, w)$ can only form a single new component, running a strong components algorithm to find this component is overkill. A simpler alternative is to unmark all vertices in $X$ and then run a forward depth-first search from $f(w)$, traversing arcs in $Y$. During the search, mark vertices as follows: Mark $f(v)$ if it is reached. When retreating along an arc $(f(u), f(x))$, mark $f(u)$ if $f(x)$ is marked. At the end of the search, the marked vertices are the canonical vertices contained in the new component.
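{\em Remark}: As an illustration, the marking search just described might be coded as follows (the function and variable names are ours); note that the old graph of strong components is acyclic, so when the search retreats along an arc, the mark of the head of the arc is already final.
\begin{verbatim}
def marked_vertices(Y, fw, fv):
    """Forward depth-first search from fw over the arc set Y (given as an
    adjacency map on the canonical vertices in X).  Marks fv when reached,
    and marks u on retreat along (u, x) whenever x is marked.  Returns the
    set of canonical vertices contained in the new component."""
    marked, visited = set(), set()

    def dfs(u):
        visited.add(u)
        if u == fv:
            marked.add(u)
        for x in Y.get(u, ()):
            if x not in visited:
                dfs(x)
            if x in marked:        # retreating along (u, x)
                marked.add(u)

    dfs(fw)
    return marked
\end{verbatim}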
\begin{theorem} \label{thm:strong-soft-corr} Maintaining strong components via soft-threshold search is correct.
\end{theorem}
\begin{proof} By induction on the number of arc additions. Consider the graph of strong components just before an arc $(v, w)$ is added. This addition forms a new component if and only if $f(v) > f(w)$ and there is a path from $f(w)$ to $f(v)$. Furthermore the old components contained in the new component are exactly the components on paths from $f(w)$ to $f(v)$. The components on such a path are in increasing order, so the path consists of a sequence of one or more components in $F_<$, possibly $t$, and a sequence of one or more components in $B_>$. Each arc on such a path is in $Y$. It follows that the algorithm correctly forms the new component. If there is no new component, the reordering is exactly the same as in Section~\ref{sec:2way-search}, so it correctly restores topological order. Suppose there is a new component. Then certain old components are combined into one, and their canonical vertices other than $f(v)$ are deleted from the list of canonical vertices in topological order. We must show that the new order is topological. The argument in the proof of Theorem~\ref{thm:2way-corr} applies, except that there are some new possibilities. Consider an arc $(x, y)$ other than $(v, w)$. One of the cases in the proof of Theorem~\ref{thm:2way-corr} applies unless at least one of $x$ and $y$ is in the new component. If both are in the new component, then $(x, y)$ becomes a loop. Suppose just one, say $y$, is in the new component. Then $f(x)$ cannot be forward, or it would be in the new component. Either $f(x)$ is in $B_>$ or $f(x)$ is not in $X$; in either case, $f(x)$ precedes $f(v)$ after the reordering. The argument is symmetric if $x$ but not $y$ is in the new component. \qed
\end{proof}
To bound the running time of the strong components algorithm we need to extend Lemma~\ref{lem:2way-search-rel} and Theorem~\ref{thm:2way-search-arcs}.
\begin{lemma} \label{lem:strong-search-rel} Suppose the addition of $(v, w)$ triggers a search. Let $(u, x)$ and $(y, z)$, respectively, be arcs traversed forward and backward during the search, not necessarily during the same search step, such that $f(u) < f(z)$. Then either $(u, x)$ and $(y, z)$ are unrelated before the addition of $(v, w)$ but related afterward, or they are related before the addition and the addition makes them into loops.
\end{lemma}
\begin{proof} After $(v, w)$ is added, there is a path containing both of them, so they are related after the addition. If they were related before the addition, then there must be a path containing $(u, x)$ followed by $(y, z)$. After the addition there is a path from $z$ to $u$, so $u$, $x$, $y$, and $z$ are in the new component, and both $(u, x)$ and $(y, z)$ become loops. \qed
\end{proof}
\begin{theorem} \label{thm:strong-arcs} Over $m$ arc additions, the strong components algorithm does $\mathrm{O}(m^{3/2})$ arc traversals.
\end{theorem}
\begin{proof} Divide the arc traversals during a search into those of arcs that become loops as a result of the arc addition that triggered the search, and those that do not. Over all searches, there are at most $2m$ traversals of arcs that become loops: each such arc can be traversed both forward and backward. By Lemma~\ref{lem:strong-search-rel} and the proof of Theorem~\ref{thm:2way-search-arcs}, there are at most $4m^{3/2}$ traversals of arcs that do not become loops. \qed
\end{proof}
\begin{theorem} Maintaining strong components via soft-threshold search takes \linebreak $\mathrm{O}(m^{3/2})$ time over $m$ arc additions, worst-case if $s$ is always a median or approximate median of the set of choices, expected if $s$ is always chosen uniformly at random.
\end{theorem}
\begin{proof} Consider the addition of an arc $(v, w)$ such that $f(v) > f(w)$. Each search step either traverses two arcs or deletes one or two loops. An arc can only become a loop once and be deleted once, so the extra time for such events is $\mathrm{O}(m)$ over all arc additions. The arcs in $Y$ were traversed by the search, so the time to form the new component and to reorder the vertices is $\mathrm{O}(1)$ per arc traversal. The theorem follows from Theorem~\ref{thm:strong-arcs} and the proofs of Theorems~\ref{thm:soft-search-time} and~\ref{thm:soft-search-rtime}. \qed
\end{proof}
To maintain strong components via topological search, we represent the graph of strong components by an adjacency matrix $A$ with one row and one column per canonical vertex. If $x$ and $y$ are canonical vertices, $A(x, y) = 1$ if $x \ne y$ and there is an arc $(q, r)$ with $f(q) = x$ and $f(r) = y$; otherwise, $A(x, y) = 0$. We represent the topological order of components by an explicit numbering of the canonical vertices using consecutive integers starting from one. We also store the inverse of the numbering. If $x$ is a canonical vertex, $\mathit{position}(x)$ is its number; if $i$ is a vertex number, $\mathit{vertex}(i)$ is the canonical vertex with number $i$. Note that the matrix $A$ is indexed by vertex, {\em not} by vertex number; the numbers change too often to allow indexing by number.
To maintain strong components via topological search, initialize all entries of $A$ to zero. Add a new arc $(v, w)$ by setting $A(f(v), f(w)) = 1$. If $f(v) > f(w)$, search forward from $f(w)$ and backward from $f(v)$ by executing
\topologicalsearch{$f(v)$, $f(w)$} where \topologicalsearch is defined as in Section~\ref{sec:top-search}. Let $k$ be the common value of $i$ and $j$ when the search stops. After the search, find the vertex set of the new component, if any, by running a linear-time static strong components algorithm on the graph whose vertex set is $X = F \cup B$ and whose arc set is $Y = \{(x, y)|x \text{ and } y \text{ are in } F \cup B \text{ and } A(x, y) = 1\}$. Whether or not there is a new component, reorder the old canonical vertices exactly as in Section~\ref{sec:top-search}. Finally, if there is a new component, do the following: form its vertex set by combining the vertex sets of the old components contained in it. Let the canonical vertex of the new component be $\mathit{vertex}(k)$. Form a row and column of $A$ representing the arcs out of and into the new component by combining those of the old components contained in it. Delete from the topological order all the vertices that are no longer canonical. Number the remaining canonical vertices consecutively from 1.
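{\em Remark}: The bookkeeping in the last two steps is routine. A Python sketch (with $A$ stored as a dictionary of dictionaries over the vertices, and with names of our choosing) follows; it combines rows and columns in $\mathrm{O}(n)$ time per old component and then renumbers the surviving canonical vertices.
\begin{verbatim}
def absorb_components(A, position, old_canonicals, new_canonical):
    """Combine the rows and columns of A belonging to the old canonical
    vertices into the row and column of new_canonical, and drop the old
    canonical vertices from the topological order."""
    for x in old_canonicals:
        if x == new_canonical:
            continue
        for y in A:
            A[new_canonical][y] |= A[x][y]   # arcs out of the new component
            A[y][new_canonical] |= A[y][x]   # arcs into the new component
        del position[x]                      # x is no longer canonical
    A[new_canonical][new_canonical] = 0      # loops are not recorded

def renumber(position):
    """Number the remaining canonical vertices consecutively from 1,
    preserving their relative order; rebuild the inverse map."""
    order = sorted(position, key=position.get)
    vertex = {}
    for i, x in enumerate(order, start=1):
        position[x] = i
        vertex[i] = x
    return vertex
\end{verbatim}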
{\em Remark}: As in soft-threshold search, using a static strong components algorithm to find the new component is overkill; a better method is the one described in the remark before Theorem~\ref{thm:strong-soft-corr}: run a forward depth-first search from $f(w)$, marking vertices when they are found to be in the new component.
\begin{theorem} \label{thm:strong-top-corr} Maintaining strong components via topological search is correct.
\end{theorem}
\begin{proof} By induction on the number of arc additions. Consider the addition of an arc $(v, w)$. Let $f$ and $f'$ be the canonical vertex function just before and just after this addition, respectively. The addition creates a new component if and only if $f(v) > f(w)$ and there is a path from $f(w)$ to $f(v)$. Suppose $f(v) > f(w)$ and let $F$ and $B$ be the values of the corresponding variables just after the search stops. Any path from $f(w)$ to $f(v)$ consists of a sequence of one or more vertices in $F$ followed by a sequence of one or more vertices in $B$. Each arc on such a path is in $Y$. It follows that the algorithm correctly finds the new component. If there is no new component, the algorithm reorders the vertices exactly as in Section~\ref{sec:top-search} and thus restores topological order. Suppose there is a new component. Let $k$ be the common value of $i$ and $j$ when the search stops. The reordering sets $\mathit{vertex}(k) = f(w)$. This vertex is the canonical vertex of the new component. Let $(x, y)$ be an arc. The same argument as in the proof of Theorem~\ref{thm:top-corr} shows that $f'(x) = f(x) < f(y) = f'(y)$ after the reordering unless $f(x)$ or $f(y)$ or both are in the new component. If both are in the new component, then $(x, y)$ is a loop after the addition of $(v, w)$. Suppose $f(x)$ but not $f(y)$ is in the new component. Then $f(x) \in F \cup B$. If $f(x) \in F$, then $\mathit{position}(f(y)) > k$ after the reordering but before the renumbering, so $f'(x) < f'(y)$ after the reordering and renumbering. If $f(x) \in B$, then $f(y) \notin B$, since otherwise $f(y)$ is in the component. It follows that $\mathit{position}(f(y)) > k$ before the reordering, and also after the reordering but before the renumbering, so $f'(x) < f'(y)$ after the reordering and renumbering. A symmetric argument applies if $f(y)$ but not $f(x)$ is in the new component. \qed
\end{proof}
\begin{theorem} Maintaining strong components via topological search takes $\mathrm{O}(n^{5/2})$ time over all arc additions.
\end{theorem}
\begin{proof} The time spent combining rows and columns of $A$ and renumbering vertices after deletion of non-canonical vertices is $\mathrm{O}(n)$ per deleted vertex, totaling
$\mathrm{O}(n^2)$ time over all arc additions. The time spent to find the new component after a search is $\mathrm{O}((|F| + |B|)^2) = \mathrm{O}(|F||B|)$ since $|B| \le |F| \le |B| + 1$, where $F$ and $B$ are the values of the respective variables at the end of the search. If $x$ is in $F$ and $y$ is in $B$, then either $x$ and $y$ are unrelated before the arc addition that triggered the search but related after it (and possibly in the same component), or they are related and in different components before the arc addition but in the same component after it. A given pair of vertices can become related at most once and can be combined into one component at most once. There are ${n \choose 2}$ vertex pairs. Combining these facts, we find that the total time spent to find new components is $\mathrm{O}(n^2)$.
To bound the rest of the computation time, we apply Theorem~\ref{thm:top-search-time}. To do this, we modify the strong components algorithm so that it does not delete non-canonical vertices from the topological order but leaves them in place. Such vertices have no incident arcs and are never moved again. This only makes the search and reordering time longer, since the revised algorithm examines non-canonical vertices during search and reordering, whereas the original algorithm does not. The proof of Theorem~\ref{thm:top-search-time} applies to the revised algorithm, giving a bound of $\mathrm{O}(n^{5/2})$ on the time for search and reordering. \qed
\end{proof}
\section{Remarks} \label{sec:remarks}
We are far from a complete understanding of the incremental topological ordering problem. Indeed, we do not even have a tight bound on the running time of topological search. Given the connection between this running time and the $k$-levels problem (see Section~\ref{sec:top-search}), getting a tighter bound seems a challenging problem. As mentioned in the introduction, Bender et al.~\cite{Bender2009} have proposed a completely different algorithm with a running time of $\mathrm{\Theta}(n^2\log n)$.
A more general problem is to find an algorithm that is efficient for any graph density. Our lower bound on the number of vertex reorderings is $\mathrm{\Omega}(nm^{1/2})$ for any local algorithm (see the end of Section \ref{sec:soft-search}); we conjecture that there is an algorithm with a matching running time, to within a polylogarithmic factor. For sparse graphs, soft-threshold search achieves this bound to within a constant factor. For dense graphs, the algorithm of Bender, Fineman, and Gilbert achieves it to within a logarithmic factor. For graphs of intermediate density, nothing interesting is known.
We have used total running time to measure efficiency. An alternative is to use an incremental competitive model \cite{Ramalingam1991}, in which the time spent to handle an arc addition is compared to the minimum work that must be done by any algorithm, given the same topological order and the same arc addition. The minimum work that must be done is the minimum number of vertices that must be reordered, which is the measure that Ramalingam and Reps used in their lower bound. (See the end of Section \ref{sec:soft-search}.) But no existing algorithm handles an arc addition in time polynomial in the minimum number of vertices that must be reordered. To obtain positive results, researchers have compared the performance of their algorithms to the minimum sum of degrees of reordered vertices \cite{Alpern1990}, or to a more-refined measure that counts out-degrees of forward vertices and in-degrees of backward vertices \cite{Pearce2006}. For these models, appropriately balanced forms of ordered search are competitive to within a logarithmic factor \cite{Alpern1990,Pearce2006}. In such a model, semi-ordered search is competitive to within a constant factor. We think, though, that these models are misleading: they ignore the possibility that different algorithms may maintain different topological orders, they do not account for the correlated effects of multiple arc additions, and good bounds have only been obtained for models that overcharge the adversary.
Alpern et al. \cite{Alpern1990} and Pearce and Kelly \cite{Pearce2007} studied batched arc additions as well as single ones. Pearce and Kelly give an algorithm that handles an addition of a batch of arcs in $\mathrm{O}(m')$ time, where $m'$ is the total number of arcs after the addition, and such that the total time for all arc additions is $\mathrm{O}(nm)$. Thus on each batch the algorithm has the same time bound as a static algorithm, and the overall time bound is that of the incremental algorithm of Marchetti-Spaccamela et al. \cite{Marchetti1996}.
This result is not surprising, because {\em any} incremental topological ordering algorithm can be modified so that each batch of arc additions takes $\mathrm{O}(m')$ time but the overall running time increases by at most a constant factor. The idea is to run a static algorithm concurrently with the incremental algorithm, each maintaining its own topological order. Here are the details. The incremental algorithm maintains a set of added arcs that have not yet been processed. Initially this set is empty. To handle a new batch of arcs, add them to the graph and to the set of arcs to be processed. Then start running a static algorithm; concurrently, resume the incremental algorithm on the expanded set of new arcs. The incremental algorithm deletes an arc at a time from this set and does the appropriate processing. Allocate time in equal amounts to the two algorithms. If the static algorithm stops before the incremental algorithm processes all the arcs, suspend the incremental algorithm and use the topological order computed by the static algorithm as the current order. If the incremental algorithm processes all the arcs, stop the static algorithm and use the topological order computed by the incremental algorithm as the current order. This algorithm runs a constant factor slower than the incremental algorithm and spends $\mathrm{O}(m')$ time on each batch of arcs.
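{\em Remark}: One way to organize the interleaving is to run the two computations as coroutines that each yield after a constant amount of work; the following Python sketch (names ours) allocates steps to them alternately and returns the order computed by whichever finishes first.
\begin{verbatim}
def race(static_steps, incremental_steps):
    """static_steps and incremental_steps are generators that yield after
    each constant amount of work and return a topological order when done.
    Steps are allocated to the two computations in equal amounts."""
    runners = [static_steps, incremental_steps]
    while True:
        for g in runners:
            try:
                next(g)
            except StopIteration as finished:
                return finished.value   # order computed by the first to finish
\end{verbatim}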
For the special case of soft-threshold search, this method can be improved to maintain a single topological order, and to restart the incremental algorithm each time the static algorithm completes first. The time bound remains the same. If the static algorithm stops first, replace the topological order maintained by the incremental algorithm by the new one computed by the static algorithm, and empty the set of new arcs. These arcs do not need to be processed by the incremental algorithm. This works because the running time analysis of soft-threshold search does not use the current topological order, only the current graph, specifically the number of related arc pairs. Whether something similar works for topological search is open. Much more interesting would be an overall time bound based on the size and number of batches that is an improvement for cases other than one batch of $m$ arcs and $m$ batches of single arcs.
Alpern et al. \cite{Alpern1990} also allowed unrelated vertices to share a position in the order. More precisely, their algorithm maintains a numbering of the vertices such that if $(v, w)$ is an arc, $v$ has a lower number than $w$, but unrelated vertices may have the same number. This idea is exploited by Bender, Fineman, and Gilbert in their new algorithm.
\begin{acks}
{\normalsize The last author thanks Deepak Ajwani, whose presentation at the 2007 Data Structures Workshop in Bertinoro motivated the work described in Section \ref{sec:soft-search}, and Uri Zwick, who pointed out the connection between the running time of topological search and the $k$-levels problem. All the authors thank the anonymous referees for their insightful suggestions that considerably improved the presentation. }
\end{acks}
\end{document} | arXiv |
\begin{document}
\title{Euler cycles and Mennicke symbols} \section{Introduction} Let $R$ be a commutative Noetherian ring of (Krull) dimension $d\geq 2$. The group $E_{d+1}(R)$ (the subgroup of $SL_{d+1}(R)$ generated by the elementary matrices) acts on $Um_{d+1}(R)$, the set of unimodular rows of length $d+1$ over $R$. When $d=2$, Vaserstein \cite[Section 5]{sv} showed that the orbit space $Um_3(R)/E_3(R)$ carries the structure of an abelian group. Later, van der Kallen \cite{vk1} extended this result to show that $Um_{d+1}(R)/E_{d+1}(R)$ has an abelian group structure for all $d\geq 2$. This group structure is closely related with the higher Mennicke symbols of Suslin (see \cite{vk1} for an elaboration).
The group $Um_{d+1}(R)/E_{d+1}(R)$ is intimately related to the $d$\textsuperscript{\,th} Euler class group $E^d(R)$ studied by Bhatwadekar-Sridharan (see \cite{brs3,dz,vk3,vk4} for details on the connection between these two groups). The idea of the Euler class group was envisioned by Nori in order to detect the obstruction for a projective $R$-module of rank $d$ to split off a free summand of rank one. Although this \emph{``splitting problem"} was settled by Bhatwadekar-Sridharan quite some time ago in \cite{brs1,brs3}, surprisingly, the Euler class group has not yet lost its relevance. Very recently, in \cite{dtz2}, the current authors have succeeded in computing the structure of $Um_{d+1}(R)/E_{d+1}(R)$ for smooth affine ${\mathbb R}$-algebras by comparing this group with the Euler class group, and appealing to the structure theorems for $E^d(R)$ available in \cite{brs2} for such rings. To facilitate such a comparison, a set-theoretic map $\delta_R:E^d(R)\longrightarrow Um_{d+1}(R)/E_{d+1}(R)$ was defined in \cite{dtz2}, based on the formalism developed in \cite{dtz1}, when $R$ is a smooth affine domain of dimension $d$ over an infinite perfect field $k$ of characteristic unequal to $2$. If $k={\mathbb R}$, it was proved in \cite{dtz2} that $\delta_R$ is a morphism of groups, but at that time it was not clear whether $\delta_R$ is a morphism in general. In this article we prove that $\delta_R$ is indeed a morphism of groups. We believe this morphism will lead to a better understanding of these two groups, as it did in \cite{dtz2} when $k={\mathbb R}$. We must remark that in this article we (re)define $\delta_R$ in a much simpler manner than \cite{dtz2} (see Section \ref{maps} for details, and in particular, Remark \ref{original}).
In Sections \ref{ECG} and \ref{homotopy} we recall the definitions of the objects involved in this paper. In Section \ref{maps} we define the map $\delta_R:E^d(R)\longrightarrow Um_{d+1}(R)/E_{d+1}(R)$. In Section \ref{menn} we treat the special case when the group law in $Um_{d+1}(R)/E_{d+1}(R)$ is Mennicke-like (as this case is simpler and the treatment is entirely different) and prove that $\delta_R$ is a morphism. In Section \ref{general} we treat the general case.
\section{Generalities I: The Euler class group}\label{ECG}
\paragraph{\bf Notation} We shall write an ideal generated by $f_1,\cdots,f_{d}$ as $\langle f_1,\cdots,f_{d}\rangle$.
Let $R$ be a smooth affine domain of dimension $d\geq 2$ over an infinite perfect field $k$. Let $B$ be the set of pairs $(m,\omega_m)$ where $m$ is a maximal ideal of $R$ and $\omega_m :(R/m)^d\ra\!\!\!\ra m/m^2$. Let $G$ be the free abelian group generated by $B$. Let $J=m_{1}\cap \cdots \cap m_r$, where $m_i$ are distinct maximal ideals of $R$. Any $\omega_{J}:(R/J)^d\ra\!\!\!\ra J/J^2$ induces surjections $\omega_i :(R/m_i)^d\ra\!\!\!\ra m_i/m_i^2$ for each $i$. We associate $(J,\omega_J):= \sum_{1}^{r}(m_i,\omega_i)\in G$. Now,
let $S$ be the set of elements $(J,\omega_J)$ of $G$ for which $\omega_J$ has a lift to a surjection $\theta: R^d\ra\!\!\!\ra J$, and let $H$ be the subgroup of $G$ generated by $S$. The Euler class group $E^d(R)$ is defined as $E^d(R):=G/H$.
\refstepcounter{theorem}\paragraph{{\bf Remark} \thetheorem} The above definition appears to be slightly different from the one given in \cite{brs1}. However, note that if $(J,\omega_J)\in S$ and if $\overline\sigma\in E_d(R/J)$, then the element $(J,\omega_J\overline\sigma)$ is also in $S$. For details, see \cite[Proposition 2.2]{dz}.
\begin{theorem}\label{zero}\cite[4.11]{brs1} Let $R$ be a smooth affine domain of dimension $d\geq 2$ over an infinite perfect field $k$. Let $J\subset R$ be a reduced ideal of height $d$ and $\omega_{J}:(R/J)^d\ra\!\!\!\ra J/J^2$ be a surjection. Then, the following are equivalent: \begin{enumerate} \item The image of $(J,\omega_J)$ in $E^d(R)$ is $0$. \item $\omega_J$ can be lifted to a surjection $\theta: R^d\twoheadrightarrow J$. \end{enumerate} \end{theorem}
\refstepcounter{theorem}\paragraph{{\bf Remark} \thetheorem} We shall refer to the elements of the Euler class group as \emph{Euler cycles}. An arbitrary element of $E^d(R)$ can be represented by a single Euler cycle $(J,\omega_J)$, where $J$ is a reduced ideal of height $d$ and $\omega_J:(R/J)^d\twoheadrightarrow J/J^2$ is a surjection (see \cite[Remark 4.14]{brs1}).
The following notation will be used in the rest of this article.
\begin{notation} Let $\text{dim}(R)=d$. Let $(J,\omega_J)\in E^d(R)$ and $u\in R$ be a unit modulo $J$. Let $\sigma$ be any diagonal matrix in $GL_d(R/J)$ with determinant $\overline{u}$ (bar means modulo $J$). We shall denote the composite surjection $$(R/J)^d\stackrel{\sigma}\by \sim (R/J)^d\stackrel{\omega_J}\twoheadrightarrow J/J^2$$ by $\overline{u}\omega_J$. It is easy to check that the element $(J,\overline{u}\omega_J)\in E^d(R)$ is independent of $\sigma$ (the key fact used here is that $SL_{d}(R/J)=E_d(R/J)$ as $\text{dim}(R/J)=0$). \end{notation}
\section{Generalities II: Homotopy orbits}\label{homotopy} In this article, by \emph{`homotopy'} we shall mean \emph{`naive homotopy'}, as defined below.
\begin{definition} Let $F$ be a functor from the category of rings to the category of sets. For a given ring $R$, two elements $F(u_0),F(u_1)\in F(R)$ are said to be homotopic if there is an element $F(u(T))\in F(R[T])$ such that $F(u(0))=F(u_0)$ and $F(u(1))=F(u_1)$. \end{definition}
\begin{definition} Let $F$ be a functor from the category of rings to the category of sets. Let $R$ be a ring. Consider the equivalence relation on $F(R)$ \emph{generated by} homotopies (the relation is easily seen to be reflexive and symmetric but is not transitive in general). The set of equivalence classes will be denoted by $\pi_0(F(R))$. \end{definition}
\refstepcounter{theorem}\paragraph{{\bf Example} \thetheorem} Let $R$ be a ring. Two matrices $\sigma,\tau\in GL_n(R)$ are \emph{homotopic} if there is a matrix $\theta(T)\in GL_n(R[T])$ such that $\theta(0)=\sigma$ and $\theta(1)=\tau$. Of particular interest are the matrices in $GL_n(R)$ which are \emph{homotopic to identity}.
\begin{definition} Recall that $E_n(R)$ is the subgroup of $GL_n(R)$ generated by all elementary matrices $E_{ij}(\lambda)$ with $i\not = j$ and $\lambda\in R$ (the matrix whose diagonal entries are all $1$, whose $(i,j)$-th entry is $\lambda$, and whose remaining entries are $0$). \end{definition}
\refstepcounter{theorem}\paragraph{{\bf Remark} \thetheorem}\label{elem} Any $\theta\in E_n(R)$ is homotopic to identity. To see this, let $\theta=\prod E_{ij}(\lambda_{ij})$. Define $\Theta(T):= \prod E_{ij}(T\lambda_{ij})$. Then, clearly $\Theta(T)\in E_n(R[T])$ and we observe that $\Theta(1)=\theta$, $\Theta(0)=I_n$.
In this context, we record below a remarkable result of Vorst.
\begin{theorem}\label{vorst}\cite[Theorem 3.3]{v} Let $R$ be a regular ring which is essentially of finite type over a field $k$. Let $n\geq 3$ and $\theta(T)\in GL_n(R[T])$ be such that $\theta(0)=I_n$ ($\theta$ is thus a homotopy between $I_n$ and $\theta(1)\in GL_n(R)$). Then $\theta(T)\in E_n(R[T])$. \end{theorem}
\subsection{Homotopy orbits of unimodular rows}
For a ring $R$, consider the set
$$Um_{n+1}(R):=\{(a_1,\cdots,a_{n+1})\in R^{n+1}\,|\, \sum_{i=1}^{n+1} a_i b_i=1 \text{ for some } b_1,\cdots,b_{n+1}\in R\}$$ of \emph{unimodular rows} of length $n+1$ in $R$.
Two unimodular rows $(a_1,\cdots,a_{n+1})$ and $(a'_1,\cdots,a'_{n+1})$ are homotopic if there is $(f_1(T),\cdots,f_{n+1}(T))\in Um_{n+1}(R[T])$ such that $f_i(0)=a_i$ and $f_i(1)=a'_i$ for $i=1,\cdots,n+1$. The set of equivalence classes with respect to the equivalence relation generated by homotopies will be denoted by $\pi_0(Um_{n+1}(R))$.
We shall need the following theorem from \cite{dtz2} very soon. See also \cite[Theorem 2.1]{f1} for a more general version.
\begin{theorem}\label{uni}\cite[2.3]{dtz2} Let $R$ be a regular ring which is essentially of finite type over a field $k$. Then, for any $n\geq 2$ there is a bijection $\eta_R:\pi_0(Um_{n+1}(R))\stackrel{\sim}\longrightarrow Um_{n+1}(R)/E_{n+1}(R)$. \end{theorem}
\paragraph{\bf Notation} Let $v=(a_1,\cdots,a_{d+1})\in Um_{d+1}(R)$. The orbit of $v$ in $ Um_{d+1}(R)/E_{d+1}(R)$ will be written as $[v]=[a_1,\cdots,a_{d+1}]$.
\subsection{The pointed set $Q_{2n}(R)$ and its homotopy orbits:} Let $R$ be any commutative Noetherian ring. Let $n\geq 2$ and we recall the following set, which appeared in \cite{f}:
$$Q_{2n}(R)=\{(x_1,\cdots x_n,y_1,\cdots,y_n,z)\in R^{2n+1}\,|\,\sum_{i=1}^{n} x_i y_i=z-z^2\}.$$
By definition, elements $(x_1,\cdots x_n,y_1,\cdots,y_n,z)$ and $(x'_1,\cdots x'_n,y'_1,\cdots,y'_n,z')$ of $Q_{2n}(R)$ are homotopic if there is $(f_1,\cdots,f_n,g_1,\cdots,g_n,h)$ in $Q_{2n}(R[T])$ such that $f_i(0)=x_i$, $g_i(0)=y_i$, $h(0)=z$, and $f_i(1)=x'_i$, $g_i(1)=y'_i$, $h(1)=z'$. Consider the equivalence relation generated by homotopies on $Q_{2n}(R)$. The set of equivalence classes will be denoted by $\pi_0(Q_{2n}(R))$.
\section{Generalities III: The maps}\label{maps} Let $R$ be a smooth affine domain of dimension $d\geq 2$ over an infinite perfect field $k$. The purpose of this section is to define a set-theoretic map $\delta_R:E^d(R)\to Um_{d+1}(R)/E_{d+1}(R)$. This involves several steps.
\subsection{[The map $\theta_R:E^d(R)\longrightarrow\pi_{0}(Q_{2d}(R))$].} We first recall the definition of a set-theoretic map from the Euler class group $E^d(R)$ to $\pi_0(Q_{2d}(R))$ from \cite{dtz1}. By \cite[Remark 4.14]{brs1} we know that an arbitrary element of $E^d(R)$ can be represented by a single Euler cycle $(J,\omega_J)$, where $J$ is a reduced ideal of height $d$. Now $\omega_J:(R/J)^d\twoheadrightarrow J/J^2$ is given by $J=\langle a_1,\cdots,a_d\rangle+J^2$, for some $a_1,\cdots,a_d\in J$. Applying the Nakayama Lemma one obtains $s\in J^2$ such that $J=\langle a_1,\cdots,a_d,s\rangle$
with $s-s^2=a_1b_1+\cdots +a_d b_d$ for some $b_1,\cdots,b_d\in R$ (see \cite{mo1} for a proof). We associate to $(J,\omega_J)$ the homotopy class $[(a_1,\cdots,a_d,b_1,\cdots,b_d,s)]$ in $\pi_0(Q_{2d}(R))$.
In \cite[Proposition 4.2]{dtz1} we proved the following result. The reader may also consult \cite{af,mm} for a similar result proved using different methods than \cite{dtz1}.
\begin{proposition}\label{map1} Let $R$ be a regular domain of dimension $d\geq 2$ which is essentially of finite type over an infinite perfect field $k$. The association $(J,\omega_J)\mapsto [(a_1,\cdots,a_d,b_1,\cdots,b_d,s)]$ is well defined and gives rise to a set-theoretic map $\theta_d: E^d(R)\to \pi_0(Q_{2d}(R))$. The map $\theta_d$ takes the trivial Euler cycle to the homotopy orbit of the base point $(0,\cdots,0)$ of $Q_{2d}(R)$. \end{proposition}
\subsection{[The map $\zeta_R:\pi_{0}(Q_{2d}(R))\longrightarrow Um_{d+1}(R)/E_{d+1}(R)]$.} The map we are about to define will again be a set-theoretic map. For a homotopy orbit $[(x_1,\cdots,x_d,y_1,\cdots,y_d,z)]\in \pi_{0}(Q_{2d}(R))$, we assign $$\zeta_R([(x_1,\cdots,x_d,y_1,\cdots,y_d,z)]):=[x_1,\cdots,x_d,1-2z]$$
\begin{proposition} $\zeta_R:\pi_{0}(Q_{2d}(R))\longrightarrow Um_{d+1}(R)/E_{d+1}(R)$ is well-defined. \end{proposition} \paragraph{Proof} We first note that $(x_1,\cdots,x_d,1-2z)\in Um_{d+1}(R)$, since $(1-2z)^2+4\sum_{i=1}^{d} x_iy_i=1$.
By Theorem \ref{uni}, $\pi_0(Um_{d+1}(R))=Um_{d+1}(R)/E_{d+1}(R)$. Although homotopy is not an equivalence relation on $Q_{2d}(R)$, to check that $\zeta_R$ is well-defined, it is enough to show that if $(x'_1,\cdots,x'_d,y'_1,\cdots,y'_d,z')\in Q_{2d}(R)$ is homotopic to $(x_1,\cdots,x_d,y_1,\cdots,y_d,z)$, then the unimodular rows $(x_1,\cdots,x_d,1-2z)$ and $(x'_1,\cdots,x'_d,1-2z')$ are homotopic. Let $(f_1,\cdots,f_d,g_1,\cdots,g_d,h)\in Q_{2d}(R[T])$ be such that $f_i(0)=x_i$, $g_i(0)=y_i$, $h(0)=z$, and $f_i(1)=x'_i$, $g_i(1)=y'_i$, $h(1)=z'$ ($1\leq i\leq d$). Clearly, $(f_1,\cdots,f_d,1-2h)\in Um_{d+1}(R[T])$ gives the desired homotopy between the unimodular rows $(x_1,\cdots,x_d,1-2z)$ and $(x'_1,\cdots,x'_d,1-2z')$. \qed
\subsection{[The map $\delta_R:E^d(R)\longrightarrow Um_{d+1}(R)/E_{d+1}(R)$].} Finally, the map $\delta_R$ is simply defined to be the composite: $$E^d(R)\stackrel{\theta_R}{\longrightarrow} \pi_0(Q_{2d}(R))\stackrel{\zeta_R}{\longrightarrow}Um_{d+1}(R)/E_{d+1}(R)$$
Let us summarize the description of $\delta_R$. Let $(J,\omega_J)\in E^d(R)$, where $J$ is a reduced ideal of height $d$. Now $\omega_J:(R/J)^d\twoheadrightarrow J/J^2$ is given by $J=\langle\,a_1,\cdots,a_d\rangle+J^2$, for some $a_1,\cdots,a_d\in J$. Applying the Nakayama Lemma one obtains $s\in J^2$ such that $J=\langle\,a_1,\cdots,a_d,s\rangle$
with $s-s^2=a_1b_1+\cdots +a_d b_d$ for some $b_1,\cdots,b_d\in R$. $\delta_R$
takes $(J,\omega_J)$ to the orbit $[a_1,\cdots a_d,1-2s]\in Um_{d+1}(R)/E_{d+1}(R)$.
A series of remarks is in order.
\refstepcounter{theorem}\paragraph{{\bf Remark} \thetheorem} If the characteristic of $k$ is $2$, then clearly $\delta_R$ turns out to be the trivial map.
\refstepcounter{theorem}\paragraph{{\bf Remark} \thetheorem}\label{original} In our original definition in \cite{dtz2}, we defined $\delta_R((J,\omega_J))=[2a_1,\cdots,2a_d,1-2s]$. Note that, if $\sqrt{2}\in R$, then the two definitions coincide (apply \cite[Lemma 3.5 (ii)]{vk2}). In particular, they do so when $k=\mathbb R$ and the results in \cite{dtz2} go through with the above definition of $\delta_R$.
\refstepcounter{theorem}\paragraph{{\bf Remark} \thetheorem} Note that $(1-2s)^2\equiv 1$ modulo the ideal $\langle\,a_1,\cdots,a_d\rangle$, and therefore the image of $\delta_R$ consists of orbits of unimodular rows of a rather special type. Conversely, let an orbit $[v]=[x_1,\cdots,x_d,z]\in Um_{d+1}(R)/E_{d+1}(R)$ be such that the ideal $\langle\,x_1,\cdots,x_d\rangle$ is reduced of height $d$, and $z^2\equiv 1$ modulo $\langle\,x_1,\cdots,x_d\rangle$. If $\frac{1}{2}\in R$, then $[v]$ is in the image of $\delta_R$.
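The hypothesis $\frac{1}{2}\in R$ is what produces the required idempotent: setting $s=\frac{1-z}{2}$, one has $1-2s=z$ and $$s-s^{2}\,=\,\frac{1-z}{2}-\frac{(1-z)^{2}}{4}\,=\,\frac{1-z^{2}}{4}\,\in\, \langle\,x_1,\cdots,x_d\rangle.$$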
\section{Special case: Mennicke-like group structure}\label{menn} We will say that the group structure on $Um_{d+1}(R)/E_{d+1}(R)$ is \emph{Mennicke-like}\footnote{In the literature it has been described as a \emph{nice group structure}. Ravi Rao suggested that we use the term \emph{Mennicke-like}.} if for two orbits $[a_1,\cdots,a_d,x],[a_1,\cdots,a_d,y]\in Um_{d+1}(R)/E_{d+1}(R)$ we have the \emph{coordinate-wise product}: $$[a_1,\cdots,a_d,x][a_1,\cdots,a_d,y]=[a_1,\cdots,a_d,xy].$$
Throughout this section, let $R$ be a smooth affine domain of dimension $d\geq 2$ over an infinite perfect field $k$.
\begin{lemma}\label{torsion} Let the group structure on $Um_{d+1}(R)/E_{d+1}(R)$ be Mennicke-like. Let $(J,\omega_J)\in E^{d}(R)$ be any element. Then $\delta_R((J,\omega_J))$ is $2$-torsion. \end{lemma}
\paragraph{Proof} If $Char(k)=2$, then $\delta_R$ is trivial and we are done. Therefore, we assume that $Char(k)\neq 2$. Let $\omega_J$ be induced by $J=\langle\,a_1,\cdots,a_d\rangle+J^2$. Then, there exists $s\in J^2$ such that $J=\langle\,a_1,\cdots,a_d,s\rangle$ with $s-s^2\in \langle\,a_1,\cdots,a_d\rangle$. By definition, $\delta_R((J,\omega_J))=[a_1,\cdots,a_d,1-2s]$. As the group law is Mennicke-like, $$[a_1,\cdots,a_d,1-2s]^2=[a_1,\cdots,a_d,(1-2s)^2]=[a_1,\cdots,a_d,1].\qed$$
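The last equality above uses only the relation $s-s^2\in \langle\,a_1,\cdots,a_d\rangle$: indeed, $$(1-2s)^2\,=\,1-4(s-s^2)\,\equiv\, 1 \ \ \text{modulo} \ \langle\,a_1,\cdots,a_d\rangle,$$ so the last coordinate of the row can be brought to $1$ by elementary operations.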
\begin{theorem}\label{mennicke} Let the group structure on $Um_{d+1}(R)/E_{d+1}(R)$ be Mennicke-like. Then $\delta_R$ is a morphism of groups. \end{theorem}
\paragraph{Proof} As in the above lemma, we may assume that $Char(k)\neq 2$. Let $(J,\omega_J), (K,\omega_K)\in E^d(R)$ be such that $J+K=R$, where $J,K$ are both reduced ideals of height $d$. Then $(J,\omega_J)+ (K,\omega_K)=(J\cap K,\omega_{J\cap K})$, where $\omega_{J\cap K}$ is induced by $\omega_J$ and $\omega_K$. To prove the theorem, it is enough to show that $$\delta_R((J,\omega_J))\,\ast\,\delta_R((K,\omega_K))=\delta_R ((J\cap K,\omega_{J\cap K})),$$ where $\ast$ denotes the product in $Um_{d+1}(R)/E_{d+1}(R)$
Let $\omega_{J\cap K}$ be induced by $J\cap K=\langle\,a_1,\cdots,a_d\rangle+(J\cap K)^2$. Then $J=\langle\,a_1,\cdots,a_d\rangle+J^2$ and $K=\langle\,a_1,\cdots,a_d\rangle+K^2$. Let $J=\langle\,a_1,\cdots,a_d,s\rangle$ with $s-s^2\in \langle\,a_1,\cdots,a_d\rangle$ and $K=\langle\,a_1,\cdots,a_d,t\rangle$ with $t-t^2\in \langle\,a_1,\cdots,a_d\rangle$, as usual. Then it follows that $J\cap K=\langle\,a_1,\cdots,a_d,st\rangle$ and $st-s^2t^2\in \langle\,a_1,\cdots,a_d\rangle$.
By the definition of the map $\delta_R$, we have: \begin{enumerate} \item
$\delta_R ((J,\omega_J))=[a_1,\cdots,a_d,1-2s]$, \item $\delta_R ((K,\omega_K))=[a_1,\cdots,a_d,1-2t]$, \item $\delta_R((J\cap K,\omega_{J\cap K}))=[a_1,\cdots,a_d,1-2st]$. \end{enumerate} As the group law in $Um_{d+1}(R)/E_{d+1}(R)$ is Mennicke-like, we have $$[a_1,\cdots,a_d,1-2s][a_1,\cdots,a_d,1-2t]=[a_1,\cdots,a_d,1-2s-2t+4st].$$ Let us try to locate a pre-image of the element on the right hand side of the above equation. To this end, we consider the following ideal $$L=\langle\,a_1,\cdots,a_d,s+t-2st\rangle=\langle\,a_1,\cdots,a_d,s^2+t^2-2st\rangle=\langle\,a_1,\cdots,a_d,(s-t)^2\rangle$$ in $R$ and note that $L+J\cap K=R$ (as $s-t$ is a unit modulo $J\cap K$). Let `bar' denote modulo $\langle a_1,\cdots,a_d\rangle$. Then, $$\overline{L} \cap \overline{J\cap K}=\langle\,\overline{s}\overline{t}\rangle\langle\,\overline{s}+\overline{t}-\overline{2st}\rangle=\langle\,\overline{s^2t}+\overline{st^2}-2\overline{s}\overline{t}\rangle= \langle\,\overline{s}\overline{t}+\overline{s}\overline{t}-2\overline{s}\overline{t}\rangle=\langle\,\overline{0}\rangle,$$ and we have $L\cap (J\cap K)=\langle\,a_1,\cdots,a_d\rangle$. Therefore, $(L,\omega_L)+(J\cap K,\omega_{J\cap K})=0$, where $\omega_L$ is induced by the images of $a_1,\cdots,a_d$ in $L/L^2$. It is easy to see that $\delta_R((L,\omega_L))= [ a_1,\cdots,a_d,1-2s-2t+4st]$. Finally, we conclude (using (\ref{torsion})) that $$\delta_R ((J,\omega_J))\,\ast\,\delta_R((K,\omega_K))=\delta_R((L,\omega_L))=\delta_R((J\cap K,\omega_{J\cap K}))^{-1}=\delta_R((J\cap K,\omega_{J\cap K})).\qed$$
\section{The general case}\label{general} In this section we treat the general case. Our line of argument (Theorem \ref{main} aided by Proposition \ref{prep}) may be termed \emph{``Mennicke-Newman for ideals"}. For the Mennicke-Newman Lemma for elementary orbits of unimodular rows, see \cite[Lemma 3.2]{vk3}.
\begin{lemma} Let $I_1,I_2$ be two comaximal ideals in a ring $R$ such that $I_1\neq I_1^2$ and $I_2\neq I_2^2$. Then we can find $x\in I_1\smallsetminus I_1^2$ and $y\in I_2\smallsetminus I_2^2$ such that $x+y=1$. \end{lemma}
\paragraph{Proof} As $I_1^2+I_2^2=R$, we can find $a\in I_1^2$, $b\in I_2^2$ such that $a+b=1$.
Claim: $I_1\cap I_2\not\subseteq I_1^2$. To see this note that $I_1^2+I_2=R$, and we have $$I_1=I_1\cap R=I_1\cap (I_1^2+I_2)=I_1^2+I_1\cap I_2.$$ If $I_1\cap I_2\subseteq I_1^2$, then $I_1=I_1^2$, contrary to the hypothesis. Similarly, $I_1\cap I_2\not\subseteq I_2^2$.
Therefore, we can choose $\alpha\in I_1\cap I_2\smallsetminus (I_1^2\cup I_2^2)$. Take $x=a-\alpha$ and $y=b+\alpha$ to conclude. \qed
\begin{proposition}\label{prep} Let $R$ be a ring of dimension $d\geq 2$. Let $J=\mm_1\cap \cdots \cap\mm_r$ and $K=\mm_{r+1}\cap \cdots \cap \mm_s$ be two ideals, each of height $d$, where $\mm_i$ are all distinct maximal ideals for $i=1,\cdots ,s$. Then, there exist $x\in J$ and $y\in K$ such that: \begin{enumerate} \item $x+y=1$, \item $x\not\in \mm_1^2\cup \cdots \cup \mm_s^2$ and $y\not\in \mm_1^2\cup \cdots \cup \mm_s^2$.
\end{enumerate} \end{proposition}
\paragraph{Proof} As $J^2+K^2=R$, we can find $a\in J^2$ and $b\in K^2$ such that $a+b=1$. We claim that there exists $c\in J\cap K$ such that $c\not\in \mm_1^2\cup \cdots \cup \mm_s^2$. If we can prove the claim, we will take $x=a-c$ and $y=b+c$ to prove the proposition.
\noindent \emph{Proof of the claim.} We have $\mm_1^2+\mm_2^2\cdots \mm_s^2=R$. Choose $f\in \mm_1^2$ and $g\in \mm_2^2\cdots \mm_s^2$ so that $f+g=1$.
Observe that $\mm_1\cap (\mm_2^2\cdots \mm_s^2)\not\subseteq \mm_1^2$ (to see this, use the above lemma to obtain $z\in \mm_1\smallsetminus \mm_1^2$ and $w\in \mm_2^2\cdots \mm_s^2$ so that $z+w=1$. Assume, if possible, that $\mm_1\cap (\mm_2^2\cdots \mm_s^2)\subseteq \mm_1^2$. As $z=z^2+wz$ and $wz\in \mm_1\cap (\mm_2^2\cdots \mm_s^2)$ it would follow that $z\in \mm_1^2$. Contradiction.)
Choose $\alpha \in \mm_1\cap (\mm_2^2\cdots \mm_s^2)\smallsetminus \mm_1^2$ and take $c_1=f-\alpha$, $c_1'=g+\alpha$. Then, we have: (1) $c_1+c_1'=1$, (2) $c_1\in \mm_1\smallsetminus \mm_1^2$, (3) $c_1\equiv 1$ modulo $\mm_i^2$ for all $i\neq 1$.
Following a similar method, for each $i=1,\cdots ,s$, choose $c_i\in \mm_i\smallsetminus \mm_i^2$ so that $c_i\equiv 1$ modulo $\mm_1^2\cdots \mm_{i-1}^2\mm_{i+1}^2\cdots \mm_s^2$. Take $c=\prod_{i=1}^{s}c_i$. Then $c\in \mm_1\cdots \mm_s$ and it is easy to check that $c\not\in \mm_i^2$ for any $i$. This completes the proof of the claim. \qed
\begin{theorem}\label{main} Let $R$ be a smooth affine domain of dimension $d\geq 2$ over an infinite perfect field $k$. Then $\delta_R:E^d(R)\longrightarrow Um_{d+1}(R)/E_{d+1}(R)$ is a morphism of groups. \end{theorem}
\paragraph{Proof} If $Char(k)=2$, then $\delta_R$ is trivial and we are done. Therefore, we assume that $Char(k)\neq 2$.
Let $(J,\omega_J), (K,\omega_K)\in E^d(R)$ be such that $J+K=R$, where $J,K$ are both reduced ideals of height $d$. Then $(J,\omega_J)+ (K,\omega_K)=(J\cap K,\omega_{J\cap K})$, where $\omega_{J\cap K}$ is induced by $\omega_J$ and $\omega_K$. To prove the theorem, it is enough to show that $$\delta_R((J,\omega_J))\,\ast\,\delta_R((K,\omega_K))=\delta_R ((J\cap K,\omega_{J\cap K})),$$ where $\ast$ denotes the product in $Um_{d+1}(R)/E_{d+1}(R)$.
Let $J=\mathfrak{m}_1\cap \cdots \cap \mathfrak{m}_r$ and $K=\mathfrak{m}_{r+1}\cap \cdots \cap \mathfrak{m}_s$. Applying the above proposition, choose $x\in J$ and $y\in K$ such that
$x+y=1$ and $x\not\in \mm_1^2\cup \cdots \cup \mm_s^2$ and $y\not\in \mm_1^2\cup \cdots \cup \mm_s^2$. Then $xy\in (J\cap K)\smallsetminus (J\cap K)^2$. As $x+y=1$, it is easy to check that for each $i$, the image of $xy$ in $\mathfrak{m}_i/\mathfrak{m}_i^2$ is not trivial. Therefore, $xy$ is a part of a basis of $\mathfrak{m}_i/\mathfrak{m}_i^2$, for each $i$, $1\leq i\leq s$. Consequently, $xy$ is part of a set of generators of $(J\cap K)/(J\cap K)^2$.
Similarly, $x$ is part of a set of generators of $J/J^2$ and $y$ is part of a set of generators of $K/K^2$.
Let $J\cap K=\langle\,xy,a_1,\cdots,a_{d-1}\rangle+(J\cap K)^2$ for some $a_1,\cdots,a_{d-1}\in J\cap K$. Let $\omega'_{J\cap K}:(R/(J\cap K))^d\twoheadrightarrow (J\cap K)/(J\cap K)^2$ denote the corresponding surjection.
By \cite[2.2 and 5.0]{brs3} there is a unit $u$ modulo $J\cap K$ such that
$(J\cap K,\omega_{J\cap K})=(J\cap K,u\omega'_{J\cap K})$ in $E^d(R)$. Therefore, $(J\cap K,\omega_{J\cap K})$ is given by $J\cap K=\langle\,xy,ua_1,a_2,\cdots,a_{d-1}\rangle+(J\cap K)^2$. Similarly, $J=\langle\,x,ua_1,a_2,\cdots,a_{d-1}\rangle+J^2$ gives $(J,\omega_J)$ and $K=\langle\,y,ua_1,a_2,\cdots,a_{d-1}\rangle+K^2$ gives $(K,\omega_K)$.
We can choose $s\in J\cap K$ such that $s-s^2\in \langle\,xy,ua_1,a_2,\cdots,a_{d-1}\rangle$ and $J\cap K=\langle\,xy,ua_1,a_2,\cdots,a_{d-1},s\rangle$. As $s-s^2\in \langle\,x,ua_1,a_2,\cdots,a_{d-1}\rangle$ and $s-s^2\in \langle\,y,ua_1,a_2,\cdots,a_{d-1}\rangle$ as well, it follows that $(J,\omega_J)$ corresponds to $J=\langle\,x,ua_1,a_2,\cdots,a_{d-1},s\rangle$ and $(K,\omega_K)$ corresponds to $K=\langle\,y,ua_1,a_2,\cdots,a_{d-1},s\rangle$ (to check this, use $x+y=1$).
We then have, \begin{enumerate} \item $\delta_R((J,\omega_J))=[x,ua_1,a_2,\cdots,a_{d-1},1-2s],$ \item $\delta_R((K,\omega_K))=[y,ua_1,a_2,\cdots,a_{d-1},1-2s],$ \item $\delta_R ((J\cap K,\omega_{J\cap K}))=[xy,ua_1,a_2,\cdots,a_{d-1},1-2s].$ \end{enumerate}
It follows that $\delta_R((J,\omega_J))\,\ast\,\delta_R((K,\omega_K))=\delta_R ((J\cap K,\omega_{J\cap K}))$, as $x+y=1$. \qed
\section{A few remarks} Let $R$ be a smooth affine domain of dimension $d\geq 2$ over an infinite perfect field $k$. We now recall the definition of a group homomorphism $\phi_R:Um_{d+1}(R)/E_{d+1}(R)\to E^d(R)$. When $d$ is even, $\phi_R$ has been defined in \cite{brs3}. The extension to general $d$ is available in \cite{dz,vk4}. We urge the reader to look at \cite[Section 4]{dz} for the details.
\refstepcounter{theorem}\paragraph{{\bf Definition} \thetheorem}\label{phi} Let $v=(a_1,\cdots,a_{d+1})\in Um_{d+1}(R)$. Applying elementary transformations if necessary, we may assume that the height of the ideal $\langle a_1,\cdots,a_d\rangle$ is $d$. Write $J=\langle a_1,\cdots,a_d\rangle$ and let $\omega_J:R^d\twoheadrightarrow J$ be the surjection induced by $a_1,\cdots,a_d$. As $a_{d+1}$ is a unit modulo $J$, we have $J=\langle a_1,\cdots,a_{d-1},a_da_{d+1}\rangle+J^2$ and the corresponding element in $E^d(R)$ is $(J,\overline{a_{d+1}}\omega_J)$. Let $[v]$ denote the orbit of $v$ in $Um_{d+1}(R)/E_{d+1}(R)$. Define $\phi_R([v])=(J,\overline{a_{d+1}}\omega_J)$. It is proved in \cite{dz,vk4} that $\phi_R$ is a morphism.
In passing, we record some observations on the composite maps $\delta_R\phi_R$ and $\phi_R\delta_R$ (the latter played a crucial role in \cite{dtz2}).
\begin{theorem}\label{comp1} Let $R$ be a smooth affine domain of dimension $d\geq 2$ over an infinite perfect field $k$. For any $(J,\omega_J)\in E^d(R)$, we have $$\phi_R\delta_R((J,\omega_J))=(J,\omega_J)-(J,-\omega_J).$$ \end{theorem}
\paragraph{Proof} Essentially the same proof as \cite[Theorem 2.10]{dtz2}. \qed
\begin{theorem}\label{comp2} Let $R$ be a smooth affine domain of dimension $d\geq 2$ over an infinite perfect field $k$. Assume further that $\sqrt{2}\in R$. Let $v=(a_1,\cdots,a_{d+1})\in Um_{d+1}(R)$
and let $v^{\ast}$ denote the ``antipodal" vector $(-a_1,\cdots,a_{d+1})\in Um_{d+1}(R)$. Then we have $$\delta_R\phi_R([v])=[v][{v^{\ast}}]^{-1}.$$ \end{theorem}
\paragraph{Proof} Let $v=(a_1,\cdots,a_{d+1})\in Um_{d+1}(R)$. Recall from (\ref{phi}) that $\phi_R$ takes $[v]$ to $(J,\omega_J)$ where $J=\langle a_1,\cdots,a_{d}\rangle$ and $\omega_J$ is induced by $J=\langle a_1,\cdots,a_{d-1},a_da_{d+1}\rangle +J^2.$
Now as $v\in Um_{d+1}(R)$, there exist $b_1,\cdots,b_{d+1}\in R$ such that $a_1b_1+\cdots+a_{d+1}b_{d+1}=1$. Multiplying both sides by $a_{d+1}b_{d+1}$ we get $$a_1b_1'+\cdots+a_{d-1}b_{d-1}'+a_da_{d+1}b_d'+(a_{d+1}b_{d+1})^2=a_{d+1}b_{d+1}$$ for suitable $b_1',\cdots,b_d'\in R$, implying that $a_1b_1'+\cdots+a_{d-1}b_{d-1}'+a_da_{d+1}b_d'=a_{d+1}b_{d+1}-(a_{d+1}b_{d+1})^2.$ Note that $1-a_{d+1}b_{d+1}=a_1b_1+\cdots+a_{d}b_{d}\in J$. Therefore, $$\delta_R((J,\omega_J))=[a_1,\cdots,a_{d-1},a_da_{d+1},1-2(1-a_{d+1}b_{d+1})]$$ $$= [a_1,\cdots,a_{d-1},a_da_{d+1},2a_{d+1}b_{d+1}-1] =\delta_R\phi_R([v]).$$
Now consider $v^*=(a_1,\cdots,a_d,-a_{d+1})\in Um_{d+1}(R)$, the \emph{antipodal} of $v$. Then by \cite[Lemma 3.5(iii)]{vk2}, $[a_1,\cdots,a_d,-a_{d+1}]^{-1}=[a_1,\cdots,a_d,b_{d+1}] \text{ in } Um_{d+1}(R)/E_{d+1}(R).$
We now compute: \begin{eqnarray*} \delta_R\phi_R([v])=[a_1,\cdots,a_{d-1},a_da_{d+1},2a_{d+1}b_{d+1}-1]\\ =[a_1,\cdots,a_{d-1},2a_da_{d+1},2a_{d+1}b_{d+1}-1] \text{ (as 2 is a square) }\\ =[a_1,\cdots,a_{d+1}][a_1,\cdots,a_d,b_{d+1}] \text{ by \cite[3.5 (i)]{vk2} }\\ =[a_1,\cdots,a_{d+1}][a_1,\cdots,a_d,-a_{d+1}]^{-1} =[v][{v^{\ast}}]^{-1}. \end{eqnarray*} The proof is therefore complete. \qed
Let $X=\text{Spec}(R)$ be a smooth affine variety of dimension $d\geq 2$ over ${\mathbb R}$. Let $X({\mathbb R})$ denote the set of real points of $X$. Assume that $X({\mathbb R})\neq \emptyset$. Then $X({\mathbb R})$ is a smooth real manifold. Let ${\mathbb R}(X)$ denote the ring obtained from $R$ by inverting all the functions which do not have any real zeros.
We can apply (\ref{mennicke}) to obtain the following result.
\begin{theorem} Let $X=\text{Spec}(R)$ be a smooth affine variety of dimension $d\geq 2$ over ${\mathbb R}$ such that $X({\mathbb R})$ is orientable, and the number of compact connected components of $X({\mathbb R})$ is at least one. Then the group structure on $Um_{d+1}(R)/E_{d+1}(R)$ can never be Mennicke-like. \end{theorem}
\paragraph{Proof} In our earlier paper \cite{dtz2}, we proved the following assertions: \begin{enumerate} \item $Um_{d+1}(R)/E_{d+1}(R)= Um_{d+1}({\mathbb R}(X))/E_{d+1}({\mathbb R}(X))\bigoplus K$, \item $\delta_{{\mathbb R}(X)}:E^d({\mathbb R}(X))\longrightarrow Um_{d+1}({\mathbb R}(X))/E_{d+1}({\mathbb R}(X))$ is an isomorphism, \item $Um_{d+1}({\mathbb R}(X))/E_{d+1}({\mathbb R}(X))\by \sim \bigoplus_{t} \bz$, where $t$ is the number of compact connected components of $X({\mathbb R})$. \end{enumerate}
Now, assume that the group structure on $Um_{d+1}({\mathbb R}(X))/E_{d+1}({\mathbb R}(X))$ is Mennicke-like. Then, by (\ref{mennicke}), we shall find non-trivial orbits which are $2$-torsion. But as $t\geq 1$, the group $Um_{d+1}({\mathbb R}(X))/E_{d+1}({\mathbb R}(X))$ is non-trivial and is free abelian. Thus we arrive at a contradiction. As $Um_{d+1}({\mathbb R}(X))/E_{d+1}({\mathbb R}(X))$ is a subgroup of $Um_{d+1}(R)/E_{d+1}(R)$, the theorem follows. \qed
\refstepcounter{theorem}\paragraph{{\bf Remark} \thetheorem} In \cite{dtz2} we also computed the universal Mennicke symbol $MS_{d+1}(R)$, where $R$ is as in the above theorem. It follows from there as well that the group structure on $Um_{d+1}(R)/E_{d+1}(R)$ can never be Mennicke-like. The argument given above merely avoids the computation of $MS_{d+1}(R)$.
We now comment on the case when $X({\mathbb R})$ is non-orientable.
\begin{theorem} Let $X=\text{Spec}(R)$ be a smooth affine variety of dimension $d\geq 2$ over ${\mathbb R}$ such that $X({\mathbb R})$ is non-orientable. Then $\delta_{{\mathbb R}(X)}:E^d({\mathbb R}(X))\longrightarrow Um_{d+1}({\mathbb R}(X))/E_{d+1}({\mathbb R}(X))$ is a surjective morphism. As a consequence, $Um_{d+1}({\mathbb R}(X))/E_{d+1}({\mathbb R}(X))$ is an $\bz/2\bz$-vector space of dimension $\leq t$, where $t$ is the number of compact connected components of $X({\mathbb R})$. \end{theorem}
\paragraph{Proof} It has already been proved in \cite[Theorem 3.2]{dtz2} that $\delta_{{\mathbb R}(X)}$ is surjective. In this article we proved that $\delta_{{\mathbb R}(X)}$ is a morphism. As $E^d({\mathbb R}(X))=\bigoplus_t \bz/2\bz$, the result follows. \qed
\end{document} | arXiv |
A high capacity reversible data hiding through multi-directional gradient prediction, non-linear regression analysis and embedding selection
Kuo-Ming Hung1,
Chi-Hsiao Yih2,
Cheng-Hsiang Yeh2 &
Li-Ming Chen2
EURASIP Journal on Image and Video Processing volume 2020, Article number: 8 (2020)
The technique of reversible data hiding enables an original image to be restored from a stego-image with no loss of host information; such a method is known as a reversible data hiding (RDH) algorithm. Our goal is to design a method that predicts pixels effectively, because the more accurate the prediction, the more concentrated the histogram, which minimizes shifting and avoids distortion. In this paper, we propose a new multi-directional gradient prediction method that generates more accurate prediction results. In the embedding stage, according to the payload size, we generate the best decision based on non-linear regression analysis, which differentiates between embedding and non-embedding regions to reduce needless shifting. Finally, we utilize an automatic embedding range decision: with sorting by the amount of regional variance, regions that are easier to predict are embedded first, improving the quality of the image after embedding. To evaluate the proposed reversible hiding scheme, we compared it with other methods on different pictures. Results show that the proposed scheme can embed much more data with less distortion.
Reversible data hiding (RDH) has been applied to sensitive and crucial content such as medical images, military pictures, crime-scene photographs, and digital files of rare artworks, where no distortion of the original is allowed. Over the past decade, many reversible data hiding techniques have been proposed. They can be divided into spatial-domain [1–18] and frequency-domain [19, 20] approaches. In the spatial domain, the techniques fall into three classes: lossless compression, expansion based (EB), and histogram shifting (HS). Early RDH schemes were based on lossless compression [1–4]; these methods free storage space for embedding. [3] proposed a least significant bit (LSB) method that improves compression efficiency by changing the least significant bits to carry auxiliary information. However, lossless-compression-based methods cannot achieve a satisfying performance, because as the embedding capacity increases, more bits must be changed and the distortion becomes more severe. Difference expansion (DE) was first proposed by [5]; this method divides an image into pixel pairs and uses each pair to hide one message bit. [6, 7] extended the concept to DE of triplets and tetrads. [8] first proposed histogram shifting (HS) in 2006. This method computes the histogram of the image and uses the vacancies between the zero point and the peak point for shifting; its capacity depends heavily on the count of the most frequent pixel value. As a result, although HS yields good image quality, its hiding capacity is generally lower than that of DE. Kim et al. [9] exploited the correlation between neighboring pixels by dividing an image into N sub-sampled images to improve the zero-bin performance of the embedding histogram. Luo et al. [10] preserved the median of each block and used it as a reference pixel to generate an improved histogram; as a result, their RDH system outperforms Kim et al.'s. Zhao et al. [11] proposed an inverse "s" scan method to increase the concentration of the histogram. Sachnev et al. [12] proposed another prediction structure that uses a rhombus pattern to split the image into two groups. Furthermore, Rad et al. [13] used the two-stage checkerboard prediction proposed in their earlier work [14]. The authors assumed that the hidden data must be embedded in small-capacity bins and that bins − 2, − 1, 0, 1, and 2 should be kept unchanged, since all bins necessarily have to be shifted before embedding. Therefore, both the embedding capacity and the image quality still leave much room for improvement.
Besides the works mentioned above [1–14], many other RDH algorithms are also based on histogram shifting with different techniques. Qin et al. [15] proposed a prediction-based reversible steganographic scheme based on image inpainting. It uses a partial differential equation (PDE) inpainting model (CDD) to predict the structure and geometric information of the original image from selected reference pixels. However, the thresholds for the reference pixels are selected by an experimental rule, and the scheme suffers from a large computational complexity. [16] proposed a general framework in which suitably designed shifting and embedding functions reduce the embedding distortion. Their method works well when the amount of embedded data is small, but as the payload grows the distortion becomes relatively high. A least-square predictor was proposed by [17] to overcome the limitation of fixed predictors; it applies the least absolute shrinkage and selection operator on top of an ordinary least-square predictor within a rhombus-shaped two-stage embedding scheme. Wang et al. [18] proposed a reversible data hiding method based on multiple histogram shifting with rate and distortion optimization. Traditional schemes determine the number and values of the optimal peak and zero bin pairs by experimental rules; to solve this problem, a genetic algorithm (GA) is applied to search for nearly optimal zero and peak bins, increasing capacity and reducing distortion.
On the other hand, in recent years privacy requirements have led image owners to encrypt the original content before transferring it to a data manager. The data manager may want to embed additional messages in the encrypted image for authentication, even though the content of the original image is unknown to him. Image encryption thus protects the contents of the original images for their owners. Generally, there are three roles in reversible data hiding on encrypted images: content owners, data hiders, and receivers. Existing embedding mechanisms for encrypted-image RDH can be divided into two processes: reserving room before encryption (RRBE) [21–23] and vacating room after encryption (VRAE) [24–28]. In general, RRBE schemes have better hiding capacity and recovery ability than VRAE schemes, but VRAE schemes do not need additional pre-processing before image encryption, which reduces the computational burden for content owners [28]. [21] applied patch-level sparse representation to hide the secret data; since the encoding is sparse, a large vacant space is obtained and more secret messages can be embedded in the encrypted image. [27] proposed a reversible data hiding scheme for encrypted images based on an adaptive embedding method, which applies two different embedding strategies for larger hiding capacity and progressive decryption to obtain better quality of the decrypted image. [28] proposed a separable reversible data hiding scheme for encrypted images via an adaptive embedding strategy with block selection. The encrypted blocks are separated into two sets corresponding to smooth and complex regions of the original image, and the data-hiding key is used to vacate room for additional bits by compressing the LSBs of the blocks corresponding to the smooth region.
After studying the algorithms developed by previous RDH researchers, we find that the embedding capacity and the image quality after embedding depend heavily on the prediction method of the RDH algorithm. We also find a common problem of previous embedding algorithms: in order to satisfy the reversibility requirement, even positions that cannot carry data are shifted, which distorts the image and lowers its quality after embedding.
In this paper, we propose a new multi-directional gradient prediction that generates more accurate prediction results; prediction accuracy is the most critical factor affecting the performance of an RDH algorithm. We also design a decision method based on non-linear regression analysis and self-block standard deviation statistics to differentiate between embedding and non-embedding regions, which reduces the shifting of non-embedding regions and yields the best image quality. Finally, an automatic embedding range decision with sorting by the amount of regional variance is proposed; it gives priority to regions that are easy to predict and thereby improves the quality of the image after embedding.
Experimental results demonstrate that our proposed method effectively reduces the distortion of the image after embedding. Using six images from the USC-SIPI standard testing database [29] and 1000 images that we collected for the performance evaluations, the results indicate that the proposed RDH algorithm outperforms five existing RDH algorithms: Kim et al. (2009) [9], Sachnev et al. (2009) [12], Luo et al. (2011) [10], Zhao et al. (2011) [11], and Rad et al. (2016) [13].
The main contributions of this paper are summarized as follows:
We develop a new multi-directional gradient prediction, which can generate more accurate prediction results.
We design a new method that can get the best decision method based on non-linear regression analysis and self-block standard deviation statistics to generate the best quality of image.
Automatic embedding range decision is proposed, which can prioritize the region which can be predicted easily to improve the quality of the image after embedding.
The remainder of the paper is organized as follows. Section 2 briefly reviews the embedding procedure of two-stage embedding scheme using rhombus pattern [12]. The proposed reversible data hiding scheme is described in Section 3. Experimental results and discussions are generally described in Section 4. Section 5 concludes this paper, highlighting the main conclusions and future works.
In this section, we take Sachnev et al. [12] as an example to introduce the details of the two-stage embedding scheme using the rhombus pattern. Suppose the host image I is an m×n gray-scale image. The data embedding, extracting, and reversing processes can be described in the following steps.
The host image is divided into two sets: "cross" set and "circle" set. The cross set is used for embedding data and circle set for computing predictors.
Difference computation and histogram construction
The cross set is predicted from the average of its four neighboring pixels in the circle set. Suppose the center pixel M(i,j) of a block can be predicted from its four neighboring pixels M(i,j−1), M(i+1,j), M(i,j+1), and M(i−1,j). The predicted value P(i,j) is computed as follows:
$$\begin{array}{*{20}l} &{} P(i,j)\\ &{} =\left \lfloor \frac{M(i,j-1)+M(i+1,j)+M(i,j+1)+M(i-1,j)}{4} \right \rfloor \end{array} $$
The prediction error e(i,j) is computed from the predicted value P(i,j) and the original value I(i,j) as
$$ e(i,j)=I(i,j)-P(i,j) $$
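The two formulas above can be summarized in a short sketch. The following Python fragment is illustrative only (it is not code from [12]); in particular, the parity rule used to mark the cross set is an assumption of the sketch.

import numpy as np

def rhombus_prediction_errors(img):
    # Predict each interior "cross" pixel from its four rhombus neighbours,
    # Eq. (1), and form the prediction error of Eq. (2).
    img = img.astype(np.int64)
    m, n = img.shape
    pred = img.copy()
    err = np.zeros_like(img)
    for i in range(1, m - 1):
        for j in range(1, n - 1):
            if (i + j) % 2 == 0:  # assumed checkerboard split into cross/circle sets
                p = (img[i, j - 1] + img[i + 1, j]
                     + img[i, j + 1] + img[i - 1, j]) // 4  # floor of the average
                pred[i, j] = p
                err[i, j] = img[i, j] - p
    return pred, err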
Use of sorting
Using sorted prediction errors can embed more data into the image with less distortion. Note that the cross and circle sets of the rhombus scheme are independent of each other, because sorting is possible only when the cells are independent. Therefore, the blocks can be rearranged by sorting according to the correlation of neighboring pixels. The local variance u(i,j) for each block can be computed from the neighboring pixels M(i,j−1), M(i+1,j), M(i,j+1), and M(i−1,j) as follows
$$\begin{array}{@{}rcl@{}} {}u(i,j)&=&\frac{1}{4}\sum_{k=1}^{4}(\Delta v_k-\Delta \bar{v})^{2} \\ {}{\text{where}} \\ \Delta v_1&=&\left | M(i,j-1)-M(i-1,j) \right | \\ \Delta v_2&=&\left | M(i-1,j)-M(i,j+1) \right | \\ \Delta v_3&=&\left | M(i,j+1)-M(i+1,j) \right | \\ \Delta v_4&=&\left | M(i+1,j)-M(i,j-1) \right | \\ \Delta \bar{v}&=&(\Delta v_1+\Delta v_2+\Delta v_3+\Delta v_4)/4 \end{array} $$
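For completeness, the local variance of Eq. (3) can be computed as in the following sketch (illustrative only; the neighbour ordering follows the formula above):

import numpy as np

def local_variance(img, i, j):
    # u(i, j) of Eq. (3): variance of the four absolute neighbour differences.
    v = np.array([
        abs(int(img[i, j - 1]) - int(img[i - 1, j])),
        abs(int(img[i - 1, j]) - int(img[i, j + 1])),
        abs(int(img[i, j + 1]) - int(img[i + 1, j])),
        abs(int(img[i + 1, j]) - int(img[i, j - 1])),
    ], dtype=float)
    return float(np.mean((v - v.mean()) ** 2))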
Embedding method
After the blocks are rearranged by sorting local variances, the hidden message h can be embed by modifying the histogram shift scheme, where h∈{0,1}. Two threshold values T1 and T2 are used, where T1 is the positive threshold value and T2 is the negative threshold value. The message embedding can be formulated as follows:
$$ e'(i,j)= \left\{\begin{array}{ccc} e(i,j)+T1+1 & if & e(i,j)>T1\ {\text{and}}\ T1\geq0 \\ e(i,j)+T2 & if & e(i,j)<T2\ {\text{and}}\ T2<0 \\ 2e(i,j)+h & if & T2\leq e(i,j)\leq T1 \end{array}\right. $$
where h∈{0,1} is the current scanned hidden bit. After embedding the hidden data, the stego-image S is obtained as
$$ S(i,j)=e'(i,j)+P(i,j) $$
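A compact sketch of the shifting-and-embedding rule of Eqs. (4)–(5) is given below. It is illustrative only: it operates on a flat list of prediction errors and assumes the decoder knows the payload length, so expandable positions scanned after the payload is exhausted simply carry a 0 bit.

def embed_errors(errors, bits, T1, T2):
    # Apply Eq. (4) to every prediction error; the stego pixel of Eq. (5)
    # is then the predicted value plus the shifted error.
    shifted, k = [], 0
    for e in errors:
        if e > T1 and T1 >= 0:
            shifted.append(e + T1 + 1)        # shift the right tail
        elif e < T2 and T2 < 0:
            shifted.append(e + T2)            # shift the left tail
        else:                                 # T2 <= e <= T1: expandable bin
            b = bits[k] if k < len(bits) else 0
            shifted.append(2 * e + b)
            k += 1
    return shifted, k                         # k = number of expandable positions used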
Extracting and reversing methods
Prediction error e′(i,j) can be obtained by
$$ e'(i,j)=S(i,j)-P(i,j) $$
Hidden bit h can be extracted by
$$ h=e'(i,j)\ {\text{mod}}\ 2 \quad if \quad 2\times T2 \leq e'(i,j) \leq 2\times T1 + 1 $$
Original prediction error e(i,j) can be generated as follows:
$$ {}e(i,j)= \left\{\begin{array}{ccc} e'(i,j)-T1-1 & if & e'(i,j)>2\times T1+1 \\ e'(i,j)-T2 & if & e'(i,j)<2\times T2 \\ \lfloor e'(i,j)/2 \rfloor & if & 2\!\times\! T2 \!\leq\! e'(i,j)\! \leq\! 2\times T1 \,+\, 1 \end{array}\right. $$
Recovery of the value of the original image I(i,j) is as follows:
$$ I(i,j)=e(i,j)+P(i,j) $$
In addition, the embedding, extracting, and reversing methods for the "circle" set are the same.
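The inverse mapping of Eqs. (6)–(9) can be sketched as follows (illustrative only; Python's floor division and modulo match the floor and mod operations of the equations, also for negative errors):

def extract_and_recover(shifted_errors, T1, T2, n_bits):
    # Invert Eq. (4): read the hidden bits (Eq. (7)) and restore the original
    # prediction errors (Eq. (8)); the pixels then follow from Eq. (9).
    bits, restored = [], []
    for ep in shifted_errors:
        if 2 * T2 <= ep <= 2 * T1 + 1:
            if len(bits) < n_bits:
                bits.append(ep % 2)
            restored.append(ep // 2)          # floor division
        elif ep > 2 * T1 + 1:
            restored.append(ep - T1 - 1)
        else:
            restored.append(ep - T2)
    return bits, restored

# Round trip: embed_errors followed by extract_and_recover returns the
# embedded bits and the original prediction errors.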
Compared to existing reversible data hiding methods, the proposed method can embed much more data with less distortion. The proposed framework is mainly based on [12] and is divided into six sub-sections: prediction via a multi-directional gradient scheme, the embedding algorithm, embedding selection by non-linear regression analysis and self-block standard deviation statistics, the automatic embedding range decision, the extracting and reversing algorithm, and the overflow and underflow problem.
Prediction via multi-directional gradient scheme
The accuracy of the prediction method determines the embedding capacity of an RDH system as well as the image quality after embedding. In this paper, we propose a multi-directional gradient prediction method. The original image is divided into four sets (square, cross, star, and circle), which are embedded in four passes. The block diagram of the embedding process is shown in Fig. 1, and the block diagram of the extracting and reversing process is shown in Fig. 2.
The block diagram of embedding process
The block diagram of extracting and reversing process
The prediction procedure is described as following:
Assume I is a 5×5 8 bit grayscale original image, where I(i,j) is one pixel of the image, as shown in Fig. 3a. First, all pixels of the image I are divided into four groups "square," "cross," "star," and "circle" as shown in Fig. 3b. We define the four groups as G1, G2, G3, and G4, respectively. With their independency to each other, we can utilize G2, G3, and G4 to predict G1. We only discuss G1 in this section since G2,G3, and G4 are the same cases.
The operation steps of proposed method. a Original image. b Image classification. c Mirroring image. d Missing image. e Prediction image. f Mirroring prediction image
Mirror the 5×5 original image I into 7×7 mirror image MI, as shown in Fig. 3c.
The G1 is hidden as missing image, and then the four neighboring pixels are utilized to predict central pixel by Eq. (10), where MI(i,j) is the position of the predicted central pixel, as shown in Fig. 3d. Image PI is the prediction result of the 7×7 MI, as shown in Fig. 3e.
$$ {}\begin{aligned} {\text{PI}}(i,j)&={\text{round}}\left(\left({\text{MI}}(i,j-1)+{\text{MI}}(i+1,j)+{\text{MI}}(i,j+1)\right.\right.\\ &\quad\left.\left.+{\text{MI}}(i-1,j)\right)/4\right) \end{aligned} $$
In order to calculate the gradient information of the pixels of the border, the prediction image PI is mirrored into a 9×9 mirroring prediction image MPI, as shown in Fig. 3f. Afterwards, the multi-directional gradient information is generated through four kinds of sobel masks as shown in Fig. 4. The four masks are defined as mx, my, mxy, and myx, where mx is the horizontal mask, my is the vertical mask, mxy is 45∘ mask, myx is 135∘ mask, respectively.
The sobel mask for four kinds of direction a is horizontal mask (mx); b is vertical mask (my); c is 45∘ mask (mxy); d is 135∘ mask (myx)
We use Eqs. (11)–(14) to calculate the gradient information of the vertical direction Δx, the gradient information of the horizontal direction Δy, the gradient information of the 45∘ direction Δxy, and the gradient information of the 135∘ direction Δyx.
$$ \Delta x = |mx\times {\text{MPI}}| $$
$$ \Delta y = |my\times {\text{MPI}}| $$
$$ \Delta xy = |mxy\times {\text{MPI}}| $$
$$ \Delta yx = |myx\times {\text{MPI}}| $$
In order to generate the estimated image EI, we calculate the missing image MI by four kinds of gradient information Δx, Δy, Δxy, and Δyx, as indicated in Eqs. (15)–(22). With Eqs. (15)–(22), we can generate eight weights of the eight neighboring positions, x_weight1, x_weight2, y_weight1, y_weight2, xy_weight1, xy_weight2, yx_weight1, and yx_weight2, as shown in Fig. 5.
$$ \begin{aligned} x\_\text{weight1}&=\text{Weight}/\left(\Delta x(i,j)+{\text{Coe}}\times \Delta x(i,j-1)\right.\\ &\quad\left.+\Delta x(i,j-2)+1\right) \end{aligned} $$
The positions of the eight weights
$$ \begin{aligned} x\_\text{weight2}&=\text{Weight}/\left(\Delta x(i,j)+{\text{Coe}}\times \Delta x(i,j+1)\right.\\ &\quad\left.+\Delta x(i,j+2)+1\right) \end{aligned} $$
$$ \begin{aligned} y\_\text{weight1}&=\text{Weight}/\left(\Delta y(i,j)+{\text{Coe}}\times \Delta y(i-1,j)\right.\\ &\quad\left.+\Delta y(i-2,j)+1\right) \end{aligned} $$
$$ \begin{aligned} y\_\text{weight2}&=\text{Weight}/\left(\Delta y(i,j)+{\text{Coe}}\times \Delta y(i+1,j)\right.\\ &\quad\left.+\Delta y(i+2,j)+1\right) \end{aligned} $$
$$ {}\begin{aligned} xy\_\text{weight1}&=\text{Weight}/\left(\Delta xy(i,j)+{\text{Coe}}\!\times\! \Delta xy(i\,-\,1,j\,-\,1)\right.\\ &\quad\left.+\Delta xy(i-2,j-2)+1\right) \end{aligned} $$
$$ {}\begin{aligned} xy\_\text{weight2}&\,=\,\text{Weight}/\!\left(\!\Delta xy(i,j)+{\text{Coe}}\times \Delta xy(i+1,j+1)\right.\\ &\quad\left.+\Delta xy(i+2,j+2)+1\right) \end{aligned} $$
$$ {}\begin{aligned} yx\_\text{weight1}&\,=\,\text{Weight}/\left(\Delta yx(i,j)+{\text{Coe}}\times \Delta yx(i\,-\,1,j+1)\right.\\ &\left.+\Delta y(i-2,j+2)+1\right) \end{aligned} $$
$$ {}\begin{aligned} yx\_\text{weight2}&\,=\,\text{Weight}/\!\left(\!\Delta yx(i,j)\,+\,{\text{Coe}}\!\times\! \Delta yx(i+1,j-1)\right.\\ &\left.+\Delta y(i+2,j-2)+1\right) \end{aligned} $$
Here Weight and Coe are two weighting parameters. In general, closer positions provide more information, as for the vertical and horizontal weights, whereas farther positions provide less, as for the 45∘ and 135∘ weights. Therefore, the two parameters Weight and Coe adjust the weighting rule. In this paper, we apply the PSO algorithm [30], a standard optimization method, to estimate the most appropriate parameter values; see Section 4.3. On the other hand, if a neighboring pixel carries a large amount of gradient information, it contributes less to the prediction of the central pixel; otherwise, it contributes more. Finally, the eight weights of the eight neighboring pixels are used in Eq. (23) to compute the 5×5 estimated image P, where MI(i,j) denotes one pixel of the missing image MI.
$$ {}\begin{aligned} P(i,j)&=\left\lfloor \left(x\_{\text{weight}}1\times {\text{MI}}(i,j-1)+x\_{\text{weight}}2\times {\text{MI}}(i,j+1)\right.\right. \\ &+y\_{{\text{weight}}1}\times {\text{MI}}(i-1,j)+y\_{{\text{weight}}2}\times {\text{MI}}(i+1,j) \\ &+xy\_{{\text{weight}}1}\times {\text{MI}}(i-1,j-1)\,+\,xy\_{{\text{weight}}2}\times {\text{MI}}(i+\!1,j\,+\,1) \\ &\left.+yx\_{{\text{weight}}1}\times {\text{MI}}(i-1,j+1)+y\_{{\text{weight}}2}\times {\text{MI}}(i+1,j\,-\,1)\right)\\ &\left./ {\text{weight}}\_{{\text{sum}}}\right\rfloor \\ {\text{where}} \\ {\text{weight}}\_{{\text{sum}}} &= x\_{{\text{weight}}1} + x\_{{\text{weight}}2} + y\_{{\text{weight}}1} + y\_{{\text{weight}}2} \\ &+ xy\_{{\text{weight}}1} + xy\_{{\text{weight}}2} + yx\_{{\text{weight}}1} + yx\_{{\text{weight}}2} \end{aligned} $$
Then, the 5×5 difference histogram e is generated from the difference values between the original image I and the estimated image P by Eq. (24):
$$ e(i,j)=I(i,j)-P(i,j) $$
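To make the weighting concrete, the following sketch outlines the gradient, weighting, and prediction steps above for one interior pixel. It is illustrative only: the diagonal Sobel coefficients and the boundary handling (scipy's mirror mode) are assumptions standing in for Fig. 4 and the explicit mirroring of the paper, and Weight = 10, Coe = 2 are placeholder values rather than the tuned ones of Table 1.

import numpy as np
from scipy.ndimage import convolve

SOBEL = {
    "x":  np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float),
    "y":  np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]], float),
    "xy": np.array([[0, 1, 2], [-1, 0, 1], [-2, -1, 0]], float),   # assumed 45-degree mask
    "yx": np.array([[-2, -1, 0], [-1, 0, 1], [0, 1, 2]], float),   # assumed 135-degree mask
}

def gradient_maps(pred_img):
    # |mask * image| for the four directions, Eqs. (11)-(14).
    return {k: np.abs(convolve(pred_img.astype(float), m, mode="mirror"))
            for k, m in SOBEL.items()}

def predict_pixel(mi, grads, i, j, Weight=10.0, Coe=2.0):
    # Eight directional weights, Eqs. (15)-(22), for an interior pixel
    # (i and j at least 2 away from the border of the gradient maps).
    dx, dy, dxy, dyx = grads["x"], grads["y"], grads["xy"], grads["yx"]
    w = {
        (0, -1):  Weight / (dx[i, j] + Coe * dx[i, j - 1] + dx[i, j - 2] + 1),
        (0,  1):  Weight / (dx[i, j] + Coe * dx[i, j + 1] + dx[i, j + 2] + 1),
        (-1, 0):  Weight / (dy[i, j] + Coe * dy[i - 1, j] + dy[i - 2, j] + 1),
        (1,  0):  Weight / (dy[i, j] + Coe * dy[i + 1, j] + dy[i + 2, j] + 1),
        (-1, -1): Weight / (dxy[i, j] + Coe * dxy[i - 1, j - 1] + dxy[i - 2, j - 2] + 1),
        (1,  1):  Weight / (dxy[i, j] + Coe * dxy[i + 1, j + 1] + dxy[i + 2, j + 2] + 1),
        (-1, 1):  Weight / (dyx[i, j] + Coe * dyx[i - 1, j + 1] + dyx[i - 2, j + 2] + 1),
        (1, -1):  Weight / (dyx[i, j] + Coe * dyx[i + 1, j - 1] + dyx[i + 2, j - 2] + 1),
    }
    num = sum(wk * float(mi[i + di, j + dj]) for (di, dj), wk in w.items())
    return int(np.floor(num / sum(w.values())))   # weighted floor average, Eq. (23)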
Embedding algorithm
Assume T1 and T2 are two thresholds with T1≥0 and T2<0. Before embedding, T1 and T2 are decided appropriately as in Section 3.4. Next, each embedding position is classified as allowing embedding or non-allowing embedding as in Section 3.3. If the position allows embedding, the message is embedded by Eq. (26); otherwise, the position is skipped. A pseudo-random binary sequence generated from the encryption key is used to encrypt the secret message w through an exclusive-or operation, producing the encrypted data h, as shown in Eq. (25).
$$ h=w \oplus {\text{key}} $$
The procedure of embedding message is described below:
$$ {}e'(i,j)= \left\{\begin{array}{ccc} e(i,j)+T1+1 & if & e(i,j)>T1\ {\text{and}}\ T1\geq0 \\ e(i,j)+T2 & if & e(i,j)<T2\ {\text{and}}\ T2<0 \\ 2e(i,j)+h & if & T2\leq e(i,j)\leq T1 \end{array}\right. $$
where h∈{0,1} is the current encrypted hidden bit.
After embedding the hidden data, the stego-image S is obtained as
Likewise, embed the three sets cross, star, and circle as above. Finally, the stego-image S and two threshold values, T1 and T2 are outputted.
Embedding selection by non-linear regression analysis and self-block standard deviation statistics
From Section 3.2 we know that the difference histogram e and the two thresholds T1, T2 are used to embed messages. In general, the difference histogram is concentrated at 0, so the 0 bin is embedded first; the embedding position is then moved to the left or right, as described in Section 3.4. When embedding, not all positions can carry data, but in order to comply with the reversibility requirement, all positions would have to be shifted, which degrades the image quality after embedding. Therefore, we design a rule that classifies positions into allowing-embedding and non-allowing-embedding positions, reducing unnecessary shifting and improving the image quality after embedding.
First, we choose 30 natural images. Each image is passed through stages 1 to 4 of Section 3.1 to generate the mirroring prediction image MPI, and the standard deviation value of the current position σi,j is calculated by Eq. (28)
$$\begin{array}{@{}rcl@{}} \begin{aligned} \sigma_{i,j}&=\left|\sqrt{\frac{1}{8}\sum_{(i,j)\in\omega}[{\text{MPI}}_{i,j}-\overline{{\text{MPI}}}_{i,j}]^{2}} \right| \\ {\text{where}} \\ \overline{{\text{MPI}}}_{i,j}&=\frac{1}{8}\sum_{(i,j)\in\omega}{\text{MPI}}_{i,j} \\ \omega &= \left\{(i-1,j-1), (i+1,j+1), (i-1,j+1),(i+1,j-1), \right.\\ & \left.(i-1,j), (i+1,j), (i,j-1), (i,j+1)\right\} \end{aligned} \end{array} $$
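The neighbourhood statistic of Eq. (28) is straightforward to compute; a minimal sketch for an interior position of MPI is given below (illustrative only):

import numpy as np

def block_sigma(mpi, i, j):
    # Population standard deviation of the eight neighbours of (i, j), Eq. (28).
    nb = np.array([mpi[i - 1, j - 1], mpi[i + 1, j + 1], mpi[i - 1, j + 1],
                   mpi[i + 1, j - 1], mpi[i - 1, j], mpi[i + 1, j],
                   mpi[i, j - 1], mpi[i, j + 1]], dtype=float)
    return float(np.sqrt(np.mean((nb - nb.mean()) ** 2)))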
Next, the difference histogram e is calculated by stage 5 of Section 3.1. With the two thresholds T1=4 and T2=− 4, we compute, for each standard deviation value σ, the probability that the difference e(i,j) at a position with that value satisfies T2≤e(i,j)≤T1; this is the embedding probability EP(σ), as shown in Eq. (29)
$$\begin{array}{@{}rcl@{}} {\text{EP}}(\sigma) &=& \frac{{\text{EC}}(\sigma)}{{\text{EC}}(\sigma)+{\text{NEC}}(\sigma)} \\ {\text{where}}\ 0\leq \sigma \leq 49 \end{array} $$
Where EC is the embedding capacity and NEC is the non-embedding capacity. Figure 6 shows the histogram of the embedding probability, where the x axis is the standard deviation value σ and the y axis is the embedding probability EP. We find that a lower standard deviation leads to a higher embedding probability, which indicates that the position lies in a smooth region and is therefore easy to predict; otherwise, the position lies in a complex area and is difficult to predict. We set a threshold th: when σ(i,j)≤th, the position is used to embed the message h by Eq. (26); otherwise, the position is skipped. During embedding, the embedding range T1=4 and T2=− 4 is applied. We also use the 30 natural images to gather embedding statistics for thresholds th = 2, 4, 6, 8, 12, 16, 20, and 24. For each fixed threshold th and embedding ranges T1=0∼4, T2=0∼− 4, we record the relation between the PSNR and the embedding capacity, as shown in Fig. 7, where the label origination denotes embedding without the threshold. Figure 7 shows that the best embedding threshold differs for different embedding capacities, and that using the embedding threshold indeed improves the image quality after embedding. We use non-linear regression analysis to fit a quadratic curve function for each embedding threshold th, as shown in Eq. (30), where th = 2, 4, 6, 8, 12, 16, 20, and 24 and x is the embedding capacity.
$$ y({\text{th}},x)=a_{{\text{th}}}+b_{{\text{th}}}x+c_{{\text{th}}}x^{2} $$
Histogram of the embedding probability
The PSNR versus the capacity curves of using the different thresholds
Figure 8 shows the relation between the embedding capacity and the image quality when the threshold th = 12. The solid line is the actually measured curve and the dotted line is the quadratic curve obtained by non-linear regression analysis. In this way, we generate eight quadratic curve functions.
The PSNR versus the capacity curves of using the 12 threshold and prediction function
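Fitting the quadratic model of Eq. (30) from measured (capacity, PSNR) pairs is a standard least-squares problem; a minimal sketch using NumPy is given below (illustrative only, with hypothetical measurement arrays):

import numpy as np

def fit_quadratic(capacities, psnrs):
    # Least-squares fit of y(th, x) = a + b*x + c*x^2, Eq. (30),
    # for one threshold th from measured (capacity, PSNR) pairs.
    c, b, a = np.polyfit(np.asarray(capacities, float),
                         np.asarray(psnrs, float), deg=2)
    return lambda x: a + b * x + c * x * x

# e.g. curve_for_th12 = fit_quadratic(capacity_samples, psnr_samples)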
Testing stage
Before embedding, if the image contains many edges, the number of positions with small standard deviation values (e.g., σ = 1, 2, or 3) is relatively small. This can cause problems: the embedding capacity may be limited, or the embedding range may have to be increased too much, which reduces the embedding quality. To avoid this special case, after generating the mirroring prediction image MPI, we count the occurrences of each standard deviation value σi,j in advance and then set the largest standard deviation value as the initial value init_th.
Next, employ the capacity of the embedding message x to find the best threshold best_th by Eq. (31).
$$ {\text{best\_{th}}}=\arg\max_{{\text{th}}} y({\text{th}},x) $$
If best_th < init_th, set best_th = init_th; otherwise best_th is unchanged. Finally, the best threshold best_th is used to make the decision that differentiates between embedding and non-embedding regions, thereby reducing needless shifting.
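The selection of the best threshold described above amounts to an argmax over the fitted curves, with init_th acting as a lower bound; a minimal sketch is:

def choose_best_th(curves, x, init_th):
    # curves: dict mapping each candidate th to its fitted function y(x), Eq. (30);
    # x: payload size in bits; init_th: initial threshold from the pre-count.
    best_th = max(curves, key=lambda th: curves[th](x))   # Eq. (31)
    return init_th if best_th < init_th else best_th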
Automatic embedding range decision
In Sections 3.1 and 3.3, our method needs to decide the embedding rage T1 and T2. Therefore, we propose a method that can utilize the size of the embedding messages to generate the best range T1 and T2 automatically to achieve the best quality of the embedding, as show in Figs. 9 and 10.
The flow diagram of generating initial embedding range
The flow diagram of generating the best embedding range
The proposed methodology is described in two stages:
Generating an initial embedding range
First, input the embedding message size x and the best_th generated in Section 3.3. Next, initialize F_T1=0, F_T2=0 and D=R, and embed messages with F_T1, F_T2 and th using the embedding method of Section 3.2. With embedding range F_T1 and F_T2, while the amount that can be embedded is less than the payload x, the embedding range is expanded to the left or to the right. Once the amount that can be embedded is greater than or equal to x, F_T1, F_T2 and D are output, where D records whether the next expansion would be to the left or to the right. The aim is that, when increasing the range, the expansion alternates between left and right around the center so as to achieve the best image quality, as shown in Fig. 9.
Generating the best embedding range
First, input F_T1, F_T2 and D generated in stage 1 and the best_th generated in Section 3.3. We use best_th to classify each embedding position as allowing or non-allowing embedding. Next, the range is again expanded from the center, alternating left and right, to generate another embedding range S_T1 and S_T2. We then compare the image quality obtained with the range F_T1, F_T2 against that obtained with the range S_T1, S_T2. Sometimes enlarging the embedding range increases the total number of positions with small standard deviation values, which increases the success rate of separating embedded from non-embedded positions and thus improves the image quality after embedding. However, enlarging the range may also introduce unnecessary shifting and reduce the image quality. Therefore, we compare the image quality after embedding under these two settings and output the embedding range that performs best, as shown in Fig. 10.
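Stage 1 can be sketched as the following loop. It is illustrative only: count_embeddable(T1, T2) is a hypothetical helper that returns how many bits the current range can carry under the selection rule, and the loop assumes the payload fits within the maximal range.

def initial_range(x, count_embeddable, max_T=255):
    # Balanced expansion of the range [F_T2, F_T1] until the payload x fits.
    T1, T2, expand_right = 0, 0, True        # D in Fig. 9: which side to grow next
    while count_embeddable(T1, T2) < x and T1 < max_T:
        if expand_right:
            T1 += 1
        else:
            T2 -= 1
        expand_right = not expand_right      # alternate sides around the centre
    return T1, T2, expand_right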
Figure 11 shows the example of embedding hiding messages into a difference image. Assume x is 20,000 bits, and we get the results of 8 quadratic curve functions by Eq. (31), as follows:
th2: y=53.5, th4: y=55.3, th6: y=53.8, th8: y=53, th12: y=52.2, th16: y=51.8, th20: y=53.6, th24: y=53.4;
among them the maximum y is attained at th4, so best_th = 4.
Example of hiding data into an image of 3 x 3 pixels
If the embedding position's standard deviation value σ ≤ 4, the position allows embedding; otherwise it does not. Assume the message is h = 0101. Since σ(1) ≤ 4, the first position allows embedding. By Eq. (26), a position can carry a message bit only when − 1 ≤ e ≤ 1; otherwise it cannot, but it still has to be shifted. Thus e(1) = 2 cannot carry a bit, but it is shifted, and Eq. (26) gives e′(1) = 4. Since σ(2) ≤ 4, the position allows embedding; e(2) = − 1 can carry the bit h(1) = 0, giving e′(2) = − 2. Similarly, σ(3) ≤ 4 allows embedding, e(3) = 0 carries h(2) = 1, and e′(3) = 1. σ(4) allows embedding, but the position of e(4) cannot carry a bit, so e′(4) = 5. σ(5) > 4, so the position does not allow embedding; e′(5) = 4 and no shifting is needed. σ(6) ≤ 4 allows embedding, e(6) = 1 carries h(3) = 0, and e′(6) = 2. σ(7) does not allow embedding, so e′(7) = 0 with no shifting. σ(8) and σ(9) allow embedding; the position of e(8) cannot carry a bit, so e′(8) = − 3, while e(9) carries h(4) = 1, so e′(9) = − 1. Finally, the embedded difference image e′ is generated.
Extracting and reversing algorithm
The procedure of message extraction and recovery are described below:
Divide 5×5 stego-image S into 4 groups: square, cross, star, and circle. We define the 4 groups as G1, G2, G3, G4, respectively. We only discuss G4 in this sub-section because G3, G2, G1 are the same cases.
Mirror the stego-image S into a 7×7 mirror image MS.
Hidden the G4 as missing image, and the four neighboring pixels are utilized to predict center pixel, and then a 5×5 prediction image PS is generated.
Mirror prediction image PS into a 9×9 mirror prediction image MPS.
Calculate the weights of the eight neighboring pixels. Then a 5×5 stego-estimated image P′ is generated.
A 5×5 difference histogram e′ is generated by Eq. (32).
$$ e'(i,j)=S(i,j)-P'(i,j) $$
Each embedding position is classified as allowing or non-allowing embedding as in Section 3.3. If the position allows embedding, the hidden bit h is extracted by Eq. (33). If the position does not allow embedding, the position is skipped.
$$ h=e'(i,j)\ {\text{mod}}\ 2 \qquad if \quad 2\times T2 \leq e'(i,j) \leq 2\times T1 + 1 $$
The key pseudo-random binary sequence is utilized to decrypt h through exclusive-or operation to get original secret message w.
If the position is allowing embedding, original error prediction e(i,j) is obtained by Eq. (34). If the position is non-allowing embedding, e(i,j)=e′(i,j).
$$ {}e(i,j)\,=\, \left\{\!\!\begin{array}{ccc} e'(i,j)-T1-1 & {\text{if}} & e'(i,j)>2\times T1+1 \\ e'(i,j)-T2 & {\text{if}} & e'(i,j)<2\times T2 \\ \lfloor e'(i,j)/2 \rfloor & {\text{if}} & 2\times T2 \leq e'(i,j) \leq 2\!\times\! T1 + 1 \end{array}\right. $$
$$ I(i,j)=e(i,j)+P(i,j) $$
Figure 12 shows the example of message extraction and image recovery. Assume x is 20,000 bits; then best_th = 4 is obtained from Eq. (31), and T1 = 1, T2 = − 1. σ(1) and σ(2) are less than or equal to best_th, so e′(1) and e′(2) are allowing embedding positions. By Eq. (33), a position carries an embedded bit when − 2 ≤ e′ ≤ 3; otherwise it does not. Therefore e′(1) = 4 carries no bit, and Eq. (34) gives e(1) = 2. e′(2) = − 2 carries a bit: Eq. (33) gives h′(1) = 0 and Eq. (34) recovers e(2) = − 1. Similarly, e′(3) is an allowing embedding position that carries a bit, so h′(2) = 1 and e(3) = 0. σ(4) allows embedding, but e′(4) carries no bit, so e(4) = 3. σ(5) > 4 is a non-allowing embedding position, so e(5) = 4 and no shifting is needed. σ(6) allows embedding and e′(6) carries a bit, so h′(3) = 0 and e(6) = 1. σ(7) does not allow embedding, so e(7) = 0 with no shifting. σ(8) and σ(9) allow embedding; e′(8) carries no bit, so e(8) = − 2, while e′(9) carries a bit, so h′(4) = 1 and e(9) = − 1. Finally, the message h′ = 0101 and the recovered difference image e are generated.
Example of extraction and recovery from a processed image of 3 x 3 pixels
Overflow and underflow problem
A stego-image S is generated as in Section 3.2. If a pixel falls outside the range 0∼255, this is called overflow or underflow, and the original cannot be recovered after embedding. Therefore, this problem must be considered in the embedding stage. This study uses the solution proposed by [13], described below:
Construct the m×n location map L, where m, n is the length and width of the original image I, respectively. Then, set all the positions L(i)=1.
If the embedding position e(i,j) lies within [1,254], set L(i)=0 and embed the message; otherwise, set L(i)=1 and move to the next embeddable position.
Encode the location map L by the lossless compression.
Record the least significant bits of the first 2⌈log2(m×n)⌉+LS image pixels, where LS is the length of the compressed location map L.
During decoding, the compressed location map L is first reconstructed from the first 2⌈log2(m×n)⌉+LS pixels of the marked image. The original location map is then obtained by lossless decompression. Finally, the secret message is extracted and the host image is recovered.
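A minimal sketch of this bookkeeping is given below (illustrative only; zlib is used here merely as a stand-in for the lossless compressor of [13]):

import zlib
import numpy as np

def build_location_map(values):
    # 0 = safe position (value within [1, 254]), 1 = position skipped because
    # shifting could overflow or underflow; the map is compressed losslessly.
    L = np.where((values >= 1) & (values <= 254), 0, 1).astype(np.uint8)
    return L, zlib.compress(L.tobytes())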
Experimental results and discussions
In this section, the proposed method is compared with five methods: Kim et al. (2009) [9], Sachnev et al. (2009) [12], Luo et al. (2011) [10], Zhao et al. (2011) [11], and Rad et al. (2016) [13]. In the experiments, standard 512×512 grayscale images served as test images, including Baboon, Lena, Peppers, Elaine, Boat, and Barbara from the USC-SIPI standard testing database [29], together with 1000 natural images that we collected. The proposed method was implemented in MATLAB Version R2012a on an Intel Core i5 at 2.5 GHz with 8 GB of memory. In the prediction stage, after deciding the embedding range, the two parameters Weight and Coe were used, as shown in Table 1. In the embedding selection stage, non-linear regression analysis was applied to estimate eight quadratic curve functions, given as follows:
$$\begin{aligned} {\text{th}}= 2:\quad y = -7E{-}09x^{2} - 0.0005x + 66.399 \\ {\text{th}}= 4:\quad y = -2E{-}10x^{2} - 0.0002x + 59.458 \\ {\text{th}}= 6:\quad y = 3E{-}10x^{2} - 0.0002x + 57.761 \\ {\text{th}}= 8:\quad y = 3E{-}10x^{2} - 0.0002x + 56.96 \\ {\text{th}}=12:\quad y = 3E{-}10x^{2} - 0.0002x + 56.171 \\ {\text{th}}=16:\quad y = 3E{-}10x^{2} - 0.0002x + 55.771 \\ {\text{th}}=20:\quad y = 3E{-}10x^{2} - 0.0001x + 55.511 \\ {\text{th}}=24:\quad y = 2E{-}10x^{2} - 0.0001x + 55.326 \end{aligned} $$
Table 1 Optimum results in different thresholds
Prediction difference histograms comparison
In this sub-section, the proposed prediction method is compared with the five methods. The difference histograms produced by the proposed prediction method for the Boat and Peppers images are more concentrated and have higher peaks than those of the other methods; the prediction performance is better especially on smooth images, as shown in Fig. 13. In general, the more concentrated the difference histogram and the higher its peak, the more accurate the prediction. Therefore, our method reduces the probability of needless shifting and achieves large embedding capacity with good embedding quality.
The prediction error histograms. a Boat image. b Peppers image
Comparison hiding rate versus image quality
In this sub-section, the embedding capacity and image quality of the proposed embedding method are compared with those of the other five methods. Peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) at a given embedding capacity are the two performance indicators. The larger the PSNR, the smaller the image distortion, where PSNR is defined as in Eq. (36).
$$ {\text{PSNR}}({\text{dB}})~=~10\log_{10}\frac{255^{2}}{\frac{1}{m\times n}\sum_{x~=~0}^{m-1}\sum_{y~=~0}^{n-1}\bigl[X(x,y)-X'(x,y)\bigr]^{2}} $$
Where X(i,j) is an original image, and X′ is a cover image.
The SSIM index is calculated on various windows of an image. The measure between two images x and y of common size m×n is:
$$ {\text{SSIM}}(x,y)=\frac{(2\mu_{x}\mu_{y}+c_{1})(2\sigma_{xy}+c_{2})} {(\mu_{x}^{2}+\mu_{y}^{2}+c_{1})(\sigma_{x}^{2}+\sigma_{y}^{2}+c_{2})} $$
Where x is an original image, and y is a cover image, μx is the average of x, μy is the average of y, \(\sigma _{x}^{2}\) is the variance of x, \(\sigma _{y}^{2}\) is the variance of y, σxy is the covariance of x and y, c1=(k1L)2 and c2=(k2L)2 are two variables to stabilize the division with weak denominator, L is the dynamic range of the pixel-values, k1 = 0.01 and k2 =0.03 small constants near zero [31]. The value of SSIM index belongs to [0,1]. When the two images are identical, the value of the SSIM similarity is 1. The capacity of cover image is defined as Eq. (38).
$$ {\text{Capacity}}=\frac{{\text{hidden}}\_{\text{bits}}}{m\times n} $$
Where hidden_bits is the total number of hidden bits, and m and n represent the length and width, respectively.
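Both indicators are easy to reproduce; the following sketch computes the PSNR of Eq. (36) and the capacity of Eq. (38) (illustrative only; SSIM can be taken from any standard implementation of [31]):

import numpy as np

def psnr(original, marked):
    # Eq. (36): 10*log10(255^2 / MSE) between the host and stego images.
    diff = original.astype(float) - marked.astype(float)
    mse = np.mean(diff ** 2)
    return 10.0 * np.log10(255.0 ** 2 / mse)

def capacity_bpp(hidden_bits, m, n):
    # Eq. (38): embedded bits per pixel.
    return hidden_bits / (m * n)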
In our experiments, it is verified that the hidden message can be extracted and the original image can be reconstructed by our method. We used the six test images to draw the curves of PSNR versus embedding capacity, as shown in Fig. 14. At the same capacity, our proposed RDH algorithm achieves the best image quality among the six RDH algorithms, which demonstrates that the embedding performance is improved by our method. Figure 14 shows the PSNR and the embedding capacity generated by the five RDH algorithms and our method for the test images Baboon, Lena, Peppers, Elaine, Boat, and Barbara.
The PSNR versus the capacity curves of the seven compared RDH algorithms for the test images. a Baboon. b Lena. c Peppers. d Elaine. e Boat. f Barbara
In addition, we computed the embedding capacity, PSNR, and SSIM for the 1000 natural images that we collected, and averaged them to draw the curves shown in Fig. 15. Our proposed method improves the embedding capacity and embedding quality more than the other methods.
The PSNR and SSIM versus the capacity curves of the six compared RDH algorithms for the 1000 images. a PSNR. b SSIM
Parameter estimation
In this study, the two parameters Weight and Coe are used in the prediction stage, as described in Section 3.1. We used 30 natural images and measured the optimal parameters at different embedding threshold values of T1 and T2 through an optimization algorithm, particle swarm optimization (PSO) [30]. The objective function of the optimization is defined in Eq. (39).
$$ \max\sum_{{\text{Image}}~=~1}^{30}{\text{Capacity(Image)}} $$
Here Image denotes a training image. The optimal parameters obtained are given in Table 1. Figure 16a plots the fitness value against the generation number, and Fig. 16b plots the coefficient values against the generation number, where W is the coefficient Weight and C is the coefficient Coe.
Generation number comparison (a) vs. fitness value; (b) vs. coefficient value
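A minimal particle-swarm search for the pair (Weight, Coe) is sketched below. It is illustrative only: fitness(weight, coe) is a hypothetical callback that returns the total capacity of Eq. (39) over the training images, and the search bounds and PSO hyper-parameters are generic defaults rather than the values used in the paper.

import numpy as np

def pso_search(fitness, n_particles=20, n_iter=50, lo=(0.1, 0.1), hi=(50.0, 10.0)):
    rng = np.random.default_rng(0)
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    pos = rng.uniform(lo, hi, size=(n_particles, 2))     # particles = (Weight, Coe)
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_val = np.array([fitness(*p) for p in pos])
    gbest = pbest[pbest_val.argmax()].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        vals = np.array([fitness(*p) for p in pos])
        better = vals > pbest_val
        pbest[better], pbest_val[better] = pos[better], vals[better]
        gbest = pbest[pbest_val.argmax()].copy()
    return gbest                                          # estimated (Weight, Coe)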
Performance comparison of the four neighborhood and eight neighborhood with our method
Before the prediction stage, all pixels of the image are classified. Therefore, we compare, under the same embedding conditions, prediction from four neighboring pixels with prediction from eight neighboring pixels. First, all pixels of the image are divided into two groups and the four neighboring pixels are used for prediction. Then, all pixels of the image are divided into four groups and the eight neighboring pixels are used for prediction. We computed the embedding capacity and PSNR for the six test images, including Baboon, Lena, Peppers, Elaine, Boat, and Barbara, and averaged them to draw the curves shown in Fig. 17. Using the eight neighboring pixels improves the embedding capacity and embedding quality more than using the four neighboring pixels.
Comparison of embedding after sorting and selection embedding
In our embedding stage, the best standard deviation threshold is generated to achieve optimal embedding performance. We therefore compare the following two embedding methods. The first is embedding after sorting: the positions are embedded after being sorted by the size of their standard deviation values. The second is selection embedding: the positions are first sorted by standard deviation value, and a position is embedded only if its value is below the standard deviation threshold; otherwise it is left unchanged. We computed the embedding capacity and PSNR for the six test images, including Baboon, Lena, Peppers, Elaine, Boat, and Barbara, and drew the curves shown in Fig. 18. The selection embedding method improves the embedding capacity and embedding quality of the system, in particular when the embedding capacity is small. We also compare three test images, including Baboon, Lena, and Peppers, as shown in Tables 2, 3, and 4, where the label Proposed denotes embedding from left to right and top to bottom, the label Proposed (sort) denotes embedding after sorting by standard deviation value, and the label Proposed (selection) denotes sorting by standard deviation value and embedding a message at the current position only if its value is smaller than the standard deviation threshold; otherwise the position is unchanged. We find that sorting by standard deviation value under the same embedding conditions improves the embedding capacity, and selection embedding improves it further. Moreover, the performance gain is larger for complex images than for smooth images, in particular when the embedding capacity is small.
Performance comparison of the original method and embedding selection method
Table 2 Comparison of PSNR and SSIM for our schemes on Baboon image
Table 3 Comparison of PSNR and SSIM for our schemes on Lena image
Table 4 Comparison of PSNR and SSIM for our schemes on Boat image
Comparison of automatic embedding range decision
In our system, the best embedding range is found from the size of the embedding payload specified by the user. We used four test images, Baboon, Lena, Boat, and Peppers, with embedding capacities of 10,000 and 20,000 bits, as shown in Tables 5 and 6, where selection 1 is the PSNR obtained in stage one and selection 2 is the PSNR obtained in stage two. Expanding the embedding range once yields better image quality when the payload is small, but expanding the range is not always better when the payload is large. Therefore, the second stage compares the image qualities of the two embedding ranges and outputs the optimal one.
Table 5 Comparison of [PSNR,SSIM] for two stages (messages = 10,000)
Comparison of the executing-time performance
The execution-time comparison among the concerned six RDH algorithms is shown in Table 7. The method of Kim et al. (2009) has the best execution-time performance because it uses only a simple shifting operation. The method of Rad et al. (2016) has the worst execution-time performance because of its embedded-rule processing. Our proposed method, labeled Proposed, needs slightly more embedding time for two main reasons: the original image is divided into four groups, and the multi-directional gradient prediction must be calculated. The label Proposed (sort) needs more time because it embeds after sorting by standard deviation value. Finally, the label Proposed (select) must determine whether each position is suitable for embedding a message; if it is not, the next position is examined, which also adds some processing time.
Table 7 The execution-time comparison among the concerned five RDH algorithms and our methods
In this paper, we proposed a new multi-directional gradient prediction method that generates more accurate prediction results. In the embedding stage, according to the payload size, we generate the best decision based on non-linear regression analysis, which differentiates between embedding and non-embedding regions and reduces needless shifting. Finally, we employ an automatic embedding range decision with sorting by the amount of regional variance, which gives priority to regions that are easy to predict and improves the quality of the image after embedding. The experimental results showed that the difference histograms of the proposed prediction method are more concentrated and their peaks are higher than those of other methods. For the embedding selection method, the experimental results indicated that our method improves the embedding performance, especially when the image is complex or the payload is small. Moreover, the experimental results demonstrated that the embedding capacity of the proposed method outperforms other methods with less distortion. In the future, we hope to apply the proposed method to JPEG reversible data hiding and to reversible data hiding in encrypted images.
Test image from Standard Image Data-BAse (SIDBA) is available online at http://www.ess.ic.kanagawa-it.ac.jp/app_images_j.html.
DE:
Difference expansion
EB:
Expansion based
GA:
Genetic algorithm
HS:
Histogram shifting
LSB:
Least significant bit
PDE:
Partial differential equation
RDH:
Reversible data hiding
RRBE:
Reserving room before encryption
VRAE:
Vacating room after encryption
J. Fridrich, M. Goljan, R. Du, in SPIE 2001. Invertible authentication (International Society for Optics and Photonics, SPIE, 2001), pp. 197–208. https://doi.org/10.1109/itcc.2001.918795.
J. Fridrich, M. Goljan, R. Du, Lossless data embedding-newparadigm in digital watermarking. EURASIP J. Appl. Signal Process.2:, 185–196 (2002).
M. U. Celik, G. Sharma, A. M. Tekalp, E. Saber, Lossless generalized-lsb data embedding. IEEE Trans. Image Process.14(2), 253–266 (2005). https://doi.org/10.1109/tip.2004.840686.
M. U. Celik, G. Sharma, A. M. Tekalp, Lossless watermarking for image authentication: A new framework and an implementation. IEEE Trans. Image Process.15(4), 1042–1049 (2006). https://doi.org/10.1109/tip.2005.863053.
J. Tian, Reversible data embedding using a difference expansion. IEEE Trans. Circuits Syst. Video Technol.13:, 890–896 (2003).
M. Alattar, in Int. Conf. Image Process. Reversible watermark using difference expansion of triplets (IEEE, 2003), pp. 501–504. https://doi.org/10.1109/icip.2003.1247008.
M. Alattar, in Int. Conf. Image Process. Reversible watermark using difference expansion of quads (IEEE, 2004), pp. 377–380. https://doi.org/10.1109/icassp.2004.1326560.
Z. Ni, Y. Q. Shi, N. Ansari, W. Su, Reversible data hiding. IEEE Trans. Circuits Syst. Video Technol.16:, 354–362 (2006).
K. S. Kim, M. J. Lee, H. Y. Lee, H. K. Lee, Reversible data hiding exploiting spatial correlation between sub-sampled images. Pattern Recog.42:, 3083–3096 (2009).
H. Luo, F. X. Yu, H. Chen, Z. L. Huang, P. H. Wang, Reversible data hiding based on block median preservation. J. Inf. Sci.181:, 308–328 (2011).
Z. Zhao, H. Luo, Z. M. Lu, J. S. Pan, Reversible data hiding based on multilevel histogram modification and sequential recovery. Int. J. Electron. Commun.65:, 814–826 (2011).
V. Sachnev, H. J. Kim, J. Nam, S. Suresh, Y. Q. Shi, Reversible watermarking algorithm using sorting and prediction. IEEE Trans. Circ. Syst. Video Technol.19:, 989–999 (2009).
R. M. Rad, K. Wong, J. M. Guo, Reversible data hiding by adaptive group modification on histogram of prediction errors. Signal Process.125:, 315–328 (2016).
R. M. Rad, K. Wong, J. M. Guo, A unified data embedding and scrambling method. IEEE Trans. Image Process.23:, 1463–1475 (2014).
C. Qin, C. C. Chang, Y. H. Huang, L. T. Liao, An inpainting-assisted reversible steganographic scheme using a histogram shifting mechanism. IEEE Trans. Circ. Syst. Video Technol.7:, 1109–1118 (2013).
X. Li, L. B., B. Yang, T. Zeng, General framework to histogram-shifting-based reversible data hiding. IEEE Trans. Image Process.6:, 2181–2191 (2013).
H. J. Hwang, S. H. Kim, H. J. Kim, Reversible data hiding using least square predictor via the lasso. EURASIP J. Image Video Process.42:, 1–12 (2016).
J. N. J. Wang, X. Zhang, Y. Q. Shi, Rate and distortion optimization for reversible data hiding using multiple histogram shifting. IEEE Trans. Cybernet.47:, 315–326 (2017).
F. Huang, K. X. Qu, J. Huang, Reversible data hiding in jpeg images. IEEE Trans. Circ. Syst. Video Technol.26:, 1610–1621 (2016).
D. Hou, H. Wang, W. Zhang, Reversible data hiding in jpeg image based on dct frequency and block selection. Signal Process.148:, 41–47 (2018).
X. C. Cao, X. X. Wei, D. Meng, X. J. Guo, High capacity reversible data hiding in encrypted images by patch-level sparse representation. IEEE Trans. Cybern.46:, 1132–1143 (2016).
K. D. Ma, X. F. Zhao, N. H. Yu, F. H. Li, Reversible data hiding in encrypted images by reserving room before encryption. IEEE Trans. Inf. Forensics Secur.8:, 553–562 (2013).
X. Zhang, Z. Qian, G. Feng, Y. Ren, Efficient reversible data hiding in encrypted images. J. Vis. Commun. Image Represent.25:, 322–328 (2014).
F. J. Huang, J. W. Huang, Y. Q. Shi, New framework for reversible data hiding in encrypted domain. IEEE Trans. Inf. Forensics Secur.11:, 2777–2789 (2016).
Z. X. Qian, X. P. Zhang, G. R. Feng, Reversible data hiding in encrypted images based on progressive recovery. IEEE Signal Process. Lett.23:, 1672–1676 (2016).
Z. X. Qian, X. P. Zhang, Y. L. Ren, G. R. Feng, Block cipher based on separable reversible data hiding in encrypted images. Multimedia Tools Appl.75:, 13749–13763 (2016).
C. Qin, Z. He, X. Luo, J. Dong, Reversible data hiding in encrypted image with separable capability and high embedding capacity. Inf. Sci.465:, 285–304 (2018).
C. Qin, W. Zhang, F. Cao, X. Zhang, C. C. Chang, Separable reversible data hiding in encrypted images via adaptive embedding strategy with block selection. Signal Process.153:, 109–122 (2018).
USC-SIPI Image Database. http://sipi.usc.edu/services/database/Database.html. Accessed Aug 2019.
J. Kennedy, R. Eberhart, in ICNN'95 - International Conference on Neural Networks. Particle swarm optimization (IEEE, 1995).
Z. Wang, A. C. Bovik, H. R. Sheikh, E. P. Simoncelli, Image quality assessment: from error visibility to structural similarity. IEEE Trans. Image Process.13:, 600–612 (2004).
We would like to thank several anonymous reviewers and the editor for their comments.
Kuo-Ming Hung contributed equally to this work.
Department of Information Management, Kainan University, Taoyuan, 338, Taiwan
Kuo-Ming Hung
Department of Electrical and Computer Engineering, Tamkang University, New Taipei, 251, Taiwan
Chi-Hsiao Yih, Cheng-Hsiang Yeh & Li-Ming Chen
Chi-Hsiao Yih
Cheng-Hsiang Yeh
Li-Ming Chen
HKM invented the proposed idea. CHY participated in the statistical theory to back up and support the proposed idea. HKM, CHY2, and LMC helped in the investigation. CHY2 and LMC wrote the original draft. HKM, CHY1, CHY2, and LMC contributed to the writing, review, and editing. CHY1 and CHY2 participated in the design and coordination of the paper and finished the manuscript. The authors read and approved the final manuscript.
Correspondence to Kuo-Ming Hung.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License(http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Hung, KM., Yih, CH., Yeh, CH. et al. A high capacity reversible data hiding through multi-directional gradient prediction, non-linear regression analysis and embedding selection. J Image Video Proc. 2020, 8 (2020). https://doi.org/10.1186/s13640-020-0495-7
Reversible data hiding (RDH)
Non-linear regression analysis
Multi-directional variation prediction | CommonCrawl |
\begin{document}
\title[Submaximally symmetric c-projective structures]{Submaximally symmetric \\ c-projective structures}
\author{Boris Kruglikov, Vladimir Matveev, Dennis The}
\date{}
\address{BK: \ Institute of Mathematics and Statistics, University of Troms\o, Troms\o\ 90-37, Norway. \quad E-mail: {\tt [email protected]}. \newline
\hphantom{W} VM: \ Institut f\"ur Mathematik, Friedrich-Schiller-Universit\"at, 07737, Jena, Germany. \quad Email: {\tt [email protected]}\newline
\hphantom{W} DT: \ Mathematical Sciences Institute, Australian National University, ACT 0200, Australia. \quad E-mail: {\tt [email protected]}\newline
\hphantom{W} DT: \ Fakult\"at f\"ur Mathematik, Universit\"at Wien, Oskar-Morgen\-stern-Platz 1, 1090 Wien, Austria. \quad E-mail: {\tt [email protected]}} \keywords{Almost complex structure, complex minimal connection, c-projective structure, submaximal symmetry dimension, pseudo-K\"ahler metric.} \subjclass[2010]{32Q60, 53C55, 53A20, 53B15, 58J70}
\begin{abstract} C-projective structures are analogues of projective\linebreak structures in the almost complex setting. The maximal dimension of the Lie algebra of c-projective symmetries of a complex connection on an almost complex manifold of $\C$-dimension $n>1$ is classically known to be $2n^2+4n$. We prove that the submaximal dimension is equal to $2n^2-2n+4+2\,\delta_{3,n}$.
If the complex connection is minimal (encoded as a normal parabolic geometry), the harmonic curvature of the c-projective structure has three components and we specify the submaximal symmetry dimensions and the corresponding geometric models for each of these three pure curvature types. If the connection is non-minimal, we introduce a modified normalization condition on the parabolic geometry and use this to resolve the symmetry gap problem. We prove that the submaximal symmetry dimension in the class of Levi-Civita connections for pseudo-K\"ahler metrics is $2n^2-2n+4$, and specializing to the K\"ahler case, we obtain $2n^2-2n+3$. This resolves the symmetry gap problem for metrizable c-projective structures.
\end{abstract}
\maketitle
\section*{Introduction and Main Results}
Let $\nabla$ be a linear connection on a smooth connected almost complex manifold $(M^{2n},J)$ of $\C$-dimension $n\geq2$. We will assume throughout $\nabla$ is a {\it complex connection\/}, which means $\nabla J=0$ $\Leftrightarrow$ $\nabla_X(JY)=J\nabla_XY$ for all vector fields $X,Y\in\mathcal{D}(M)$. Every almost complex manifold has a complex connection, and the map $\nabla\mapsto\frac12(\nabla-J\nabla J)$ is a projection from the space of all connections to the space of complex connections.
The torsion $T_\nabla\in\Omega^2(M)\ot\mathcal{D}(M)$ of a complex connection $\nabla$ need not vanish. Its total complex-antilinear part $T_\nabla^{--}\in\Omega^{0,2}(M)\ot\mathcal{D}(M)$ is equal to $\frac14N_J$, where
$$ N_J(X,Y)=[JX,JY]-J[JX,Y]-J[X,JY]-[X,Y]
$$ is the Nijenhuis tensor of $J$. In particular, for non-integrable $J$, the complex connection $\nabla$ is never symmetric. However the other parts of $T_\nabla$ can be set to zero by a choice of complex connection. There always exist {\it minimal\/} connections $\nabla$ characterized by $T_\nabla=T_\nabla^{--}$, see \cite{Lic}.
Recall that two (real) connections are projectively equivalent if their (unparametrized) geodesics $\gamma$, given by $\nabla_{\dot\gamma}\dot\gamma\in\langle\dot\gamma\rangle$, are the same (here $\langle Y\rangle$ denotes the linear span of $Y$ over $C^\infty(M)$). Thus equivalence $\nabla\sim\bar\nabla$ means $\nabla_XX-\bar\nabla_XX\in\langle X\rangle$ $\forall X\in\mathcal{D}(M)$, and any connection is (real) projectively equivalent to a symmetric one: $\nabla\simeq\nabla-\frac12T_\nabla$.
A natural and actively studied analogue of projective equivalence in the presence of a complex structure is c-projective equivalence (also known as h-projective or holomorph-projective equivalence \cite{Ta,Y,Mi}). Let us recall the basic definitions. A $J$-planar curve $\gamma$ is given by the differential equation $\nabla_{\dot\gamma}\dot\gamma\in\langle\dot\gamma\rangle_\C =\langle\dot\gamma,J\dot\gamma\rangle$. Reparame\-tri\-za\-tion does not change this property. Actually, by a reparametrization one can achieve $\alpha=0$ in the decomposition
$$ \nabla_{\dot\gamma}\dot\gamma=\alpha\dot\gamma+\beta J\dot\gamma,
$$ and then $\beta$ is invariant up to a constant multiple (in general, the function $I=(\alpha+\nabla_{\dot\gamma})(\beta^{-1})$ is an invariant of reparametrizations). Geodesics correspond to $\beta=0$ (singular value for $I$). As complex analogues of geodesics, $J$-planar curves are of considerable interest in complex and K\"ahler geometry, see e.g. \cite{Is,ACG,KiT,MR$_1$}.
Two pairs $(J,\nabla)$ on the same manifold $M$ (with $\nabla J=0$) are called {\it c-projectively equivalent\/} if they share the same class of $J$-planar curves. It is easy to show that the almost complex structure $J$ is restored up to sign by the c-projective equivalence,
and we will fix the structure $J$ (this does not influence the symmetry algebra)\footnote{For this reason we sometimes speak of c-projective equivalence of complex connections $\nabla$ on the fixed almost complex background $(M,J)$.}. Thus we arrive at:
\begin{dfn} Two complex connections on an almost complex manifold $(M,J)$ are c-projectively equivalent $\nabla\sim\bar\nabla$ if they have the same $J$-planar curves, i.e.\ $\nabla_XX-\bar\nabla_XX\in\langle X\rangle_\C$ $\forall X\in\mathcal{D}(M)$. A {\it c-projective structure\/} is an equivalence class $(M,J,[\nabla])$.
\end{dfn}
This equivalence can be reformulated tensorially for (complex) connections $\nabla,\bar\nabla$ on $(M,J)$ with equal torsion $T_\nabla=T_{\bar\nabla}$ as follows: $\nabla\sim\bar\nabla=\nabla+\op{Id}\odot\Psi-J\odot J^*\Psi$ for some 1-form $\Psi\in\Omega^1(M)$. In other words, $\bar\nabla$ is c-projectively equivalent to $\nabla$ if and only if
$$ \bar\nabla_XY=\nabla_XY+\Psi(X)Y+\Psi(Y)X-\Psi(JX)JY-\Psi(JY)JX
$$ (notice that $T_\nabla=T_{\bar\nabla}$), see \cite{OT,Is,MS}.
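Indeed, setting $Y=X$ in this formula gives
$$
\bar\nabla_XX=\nabla_XX+2\Psi(X)X-2\Psi(JX)JX\in\nabla_XX+\langle X\rangle_\C,
$$
so such a change of connection manifestly preserves the class of $J$-planar curves.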
We will show that it is possible to canonically fix the torsion within one c-projective class. For minimal connections the torsion is already canonical (=$\frac14N_J$), but in general a complex connection is not c-projec\-tive\-ly equivalent to a minimal one (in particular, to a symmetric complex connection, cf. the real case). We will demonstrate that the obstruction to finding a minimal connection in the c-projective class $[\nabla]$ is the following part of the torsion
\begin{align*} T_\text{traceless}^{-+}(X,Y)=&\,\tfrac14\bigl(T_\nabla(X,Y)+JT_\nabla(JX,Y)-JT_\nabla(X,JY)\\ &+T_\nabla(JX,JY)\bigr)-\tfrac1{2n}\bigl(\varsigma(X)Y+\varsigma(JX)JY\bigr),
\end{align*} where $\varsigma(X)=\frac12\op{Tr}\bigl(T_\nabla(X,\cdot)+JT_\nabla(JX,\cdot)\bigr)=\tfrac12(X^aT^b_{ab}+J^a_kX^kT^c_{ab}J^b_c)$. This invariant of c-projective structure is called $\kappa_\text{IV}$ in Section \ref{S4}, where we elaborate the general case (non-minimal connections), and prove an equivalence of categories bet\-ween c-projective structures and parabolic geometries of type $\op{SL}(n+1,\C)_\R/P$ with a modified normalization.
A vector field $v$ is called a {\it c-projective symmetry} if its local flow $\Phi^v_t$ preserves the class of $J$-planar curves. Equivalently, a c-projective symmetry is a $J$-holomorphic vector field $v$ such that its local flow transforms $\nabla$ to a c-projectively equivalent connection: $(\Phi^v_t)^*J=J$, $[(\Phi^v_t)^*\nabla]=[\nabla]$. The first equation can be re-written as $L_vJ=0$. The second equation, written as $L_v[\nabla]=0$, can be expressed in local coordinates, with the connection $\nabla$ given by the Christoffel symbols\footnote{Our index convention is that $\nabla_{\p_j}\p_k=\Gamma_{jk}^i\p_i$.} $\Gamma_{jk}^i$, as follows:
$$ \Omega^i_{jk}- \phi_j\delta^i_k- \phi_k\delta^i_j+ \phi_\alpha J^\alpha_j J^i_k+ \phi_\alpha J^\alpha_k J^i_j =0,
$$ where $\Omega_{jk}^i=L_v(\Gamma)^i_{jk}$ and $\phi_j=\frac1{2(n+1)}\Omega^i_{ji}$ (notice that, since the correction terms are symmetric in $j,k$, this non-symmetrized equation also forces the skew-symmetric part of $\Omega$ to vanish, i.e.\ $L_vT_\nabla=0$). We use these equations in computing symmetries of the explicit models.
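These coordinate equations are also easy to verify by computer algebra. The following minimal \texttt{sympy} sketch (the helper names and conventions are ours, not part of the original computations, and the verification is only as reliable as the symbolic simplification) tests whether a field $v$, given componentwise in real coordinates together with $\Gamma^i_{jk}$ and $J^i_j$, satisfies $L_vJ=0$ and the displayed equation:
\begin{verbatim}
# Sketch: test the c-projective symmetry equations in real coordinates.
# Conventions: Gamma[i][j][k] = Gamma^i_{jk}, J[i][j] = J^i_j, v[i] = v^i,
# x = list of 2n coordinate symbols.
import sympy as sp

def lie_derivative_Gamma(Gamma, v, x):
    # (L_v Gamma)^i_{jk} = d_j d_k v^i + v^a d_a Gamma^i_{jk}
    #   - Gamma^a_{jk} d_a v^i + Gamma^i_{ak} d_j v^a + Gamma^i_{ja} d_k v^a
    m = len(x)
    return [[[sp.diff(v[i], x[j], x[k])
              + sum(v[a]*sp.diff(Gamma[i][j][k], x[a])
                    - Gamma[a][j][k]*sp.diff(v[i], x[a])
                    + Gamma[i][a][k]*sp.diff(v[a], x[j])
                    + Gamma[i][j][a]*sp.diff(v[a], x[k]) for a in range(m))
              for k in range(m)] for j in range(m)] for i in range(m)]

def is_cprojective_symmetry(Gamma, J, v, x):
    m = len(x)                                  # m = 2n real coordinates
    # L_v J = 0, i.e. v is J-holomorphic:
    LJ = [[sum(v[a]*sp.diff(J[i][j], x[a]) - J[a][j]*sp.diff(v[i], x[a])
               + J[i][a]*sp.diff(v[a], x[j]) for a in range(m))
           for j in range(m)] for i in range(m)]
    if any(sp.simplify(LJ[i][j]) != 0 for i in range(m) for j in range(m)):
        return False
    Om = lie_derivative_Gamma(Gamma, v, x)
    # phi_j = Omega^i_{ji} / (2(n+1)), and 2(n+1) = m + 2:
    phi = [sp.Rational(1, m + 2)*sum(Om[i][j][i] for i in range(m))
           for j in range(m)]
    delta = sp.eye(m)
    for i in range(m):
        for j in range(m):
            for k in range(m):
                expr = (Om[i][j][k] - phi[j]*delta[i, k] - phi[k]*delta[i, j]
                        + sum(phi[a]*J[a][j] for a in range(m))*J[i][k]
                        + sum(phi[a]*J[a][k] for a in range(m))*J[i][j])
                if sp.simplify(expr) != 0:
                    return False
    return True
\end{verbatim}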
The space of c-projective vector fields forms a Lie algebra, denoted $\mathfrak{cp}(\nabla,J)$. It is well known (and we recall in the next section) that the maximal dimension of this algebra is equal to $2n^2+4n$, and this bound is achieved only if the structure is {\it flat\/}, i.e.\ c-projectively locally equivalent to $\C P^n$ equipped with the standard complex structure $J_\text{can}$ and the class of the Levi-Civita connection $\nabla^\text{FS}$ of the Fubini-Study metric. Indeed, the group of c-projective symmetries of this flat structure $(\C P^n,J_\text{can},[\nabla^\text{FS}])$ is $\op{PSL}(n+1,\C)$, and its Lie algebra is $\mathfrak{sl}(n+1,\C)$.
For many geometric structures the natural (and often nontrivial) problem is to compute the next possible/realizable dimension, the so-called {\it submaximal dimension\/}, of the algebra of symmetries, see \cite{E$_2$,Ko,K$_3$,KT} and the references therein.
For the algebra of (usual) projective vector fields the question was settled in \cite{E$_1$}. For c-projective vector fields the answer is as follows.
\begin{theorem}\label{Thm1} Consider a c-projective structure $(M,J,[\nabla])$. If it is not flat\footnote{That is in a neighborhood of at least one point of $M$ the c-projective structure is not locally equivalent to $(\C P^n,J_\text{can},[\nabla^\text{FS}])$.}, then $\dim\mathfrak{cp}(\nabla,J)$ is bounded from above by
$$ \mathfrak{S}=\left\{ \begin{array}{ll}2n^2-2n+4,& n\neq3,\\ 18,& n=3.\end{array} \right.
$$ and this estimate is sharp (= realizable).
\end{theorem}
We will show that the dimensional bound $2n^2-2n+4$ is realizable via both non-minimal and minimal complex connections.
Let us now discuss the minimal case. By \cite{H,CEMN} the corresponding c-projective structures can be encoded as
{\it (regular\footnote{For $|1|$-graded geometries the regularity condition is vacuous; in particular it can be dropped for c-projective structures.}) normal parabolic geometries\/} of type $\op{SL}(n+1,\C)_\R/P$; we recall the setup in the next section. The fundamental invariant of any regular normal parabolic geometry is its harmonic curvature $\kappa_H$, in terms of which flatness is expressed simply as $\kappa_H=0$. As will be discussed in the next section, for c-projective structures $(J,[\nabla])$ with minimal $\nabla$ the harmonic curvature has three irreducible components $\kappa_H=\kappa_{\rm{I}}+\kappa_{\rm{II}}+\kappa_{\rm{III}}$.
According to \cite{KT}, the submaximal dimension is attained when only one of the components of the curvature is non-zero (provided the universal upper bound is realized, see loc.cit.\ for the precise statement; in our case this condition is satisfied). Thus we can study a finer question, namely what is the maximal dimension of the algebra of c-projective vector fields, in the case the curvature is non-zero and has one of the types I-III. Let $\mathfrak{S}_i$ be the maximal dimension of the algebra $\mathfrak{cp}(\nabla,J)$ in the case $\nabla$ is not flat, and its curvature has fixed type $i$.
\begin{theorem}\label{Thm2} For c-projective structures $(M,J,[\nabla])$, associated with minimal complex connections $\nabla$, the submaximal dimension of $\mathfrak{cp}(\nabla,J)$ within a fixed curvature type is equal to
$$ \mathfrak{S}_{\rm{II}}=2n^2-2n+4.
$$
$$ \mathfrak{S}_{\rm{I}}=\left\{ \begin{array}{ll}2n^2-4n+10,& n>2,\\ 6,& n=2.\end{array} \right. \quad \mathfrak{S}_{\rm{III}}=\left\{ \begin{array}{ll}2n^2-4n+12,& n>2,\\ 8,& n=2.\end{array} \right.
$$
\end{theorem}
Let us list the first values of the submaximal dimensions:
\begin{center}
\begin{tabular}{c||c|c|c|c|c|c} SubMax Dim & $n=2$ & $n=3$ & $n=4$ & $n=5$ & $n=6$ & \dots\\ \hline Type {\rm{I}} & 6 & 16 & 26 & 40 & 58 & \dots \\ \hline Type II & 8 & 16 & 28 & 44 & 64 & \dots \\ \hline Type III & 8 & 18 & 28 & 42 & 60 & \dots \end{tabular}
\end{center}
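(For instance, at $n=4$ the formulas of Theorem \ref{Thm2} give $2n^2-4n+10=26$, $2n^2-2n+4=28$ and $2n^2-4n+12=28$, in agreement with the corresponding column of the table.)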
Sharpness in the dimension estimates will be obtained by exhibiting the explicit models and their symmetries, and we get $\mathfrak{S}=\max\mathfrak{S}_i$.
\begin{cor}\label{Cor} Consider a complex manifold $(M,J)$ with a complex symmetric connection $\nabla$. If the c-projective structure $(J,[\nabla])$ is not flat, then its symmetry dimension does not exceed\/ $\mathfrak{S}_0=2n^2-2n+4$ and this upper bound is realizable.
\end{cor}
On the way to proving Theorem \ref{Thm2} we establish two general results about the symmetry gap problem for {\it real\/} parabolic geometries (Propositions \ref{P:lw-vec} and \ref{P:PR}), which generalize some results of \cite{KT} and are of independent interest.
An important problem in projective differential geometry is to determine if a given projective connection is metrizable.
In the c-projective case, the corresponding problem is to determine if the structure $(J,[\nabla])$ is represented by the Levi-Civita connection $\nabla^g$ of a pseudo-K\"ahler\footnote{By this we mean (throughout the paper) both indefinite and definite cases.} structure $(g,J)$, where $g$ is a metric and $J$ a complex structure (related by $J^*g=g$, $\nabla^gJ=0$; in particular, $J$ is integrable). For such structures we also compute the submaximal symmetry dimension.
\begin{theorem}\label{Thm3} For a K\"ahler structure $(M,g,J)$ of non-constant holomorphic sectional curvature $\dim\mathfrak{cp}(\nabla^g,J)\le 2n^2-2n+3$. This bound is realized by $(M=\C P^1\times\C^{n-1},J=i)$ with its natural K\"ahler metric.
For a pseudo-K\"ahler structure $(M,g,J)$ of non-constant holomorphic sectional curvature we have: $\dim\mathfrak{cp}(\nabla^g,J)\le 2n^2-2n+4$. This estimate is sharp in any signature $(2p,2(n-p))$, $0<p<n$.
\end{theorem}
Thus the submaximal symmetry dimension $\mathfrak{S}_0=2n^2-2n+4$ from the above corollary is realizable by a pseudo-K\"ahler metric. In fact, the submaximal c-projective structure with complex $J$ and symmetric connection $\nabla$ preserving $J$ and having curvature type II is unique and metrizable (compare this to the real projective case \cite{KM}, where the submaximal structure is not metrizable). The corresponding pseudo-K\"ahler metric(s), given by formula (\ref{subMKh}), will be described in detail.
It seems plausible that the above result about K\"ahler structures extends to a larger space of c-projective structures associated to almost Hermitian pairs $(g,J)$. These are given by $\nabla$ obtained uniquely from the conditions: $\nabla g=0$, $\nabla J=0$. We conjecture that all submaximal c-projective structures in this class are associated to K\"ahler structures.
In Appendix \ref{S.A} we give a detailed account of how the submaximal model for an exceptional case (type III, $n=2$) is constructed. We discuss the uniqueness issue of the submaximal models in Appendix \ref{S.B}.
\section{C-projective structures: the minimal case.}\label{S1}
In this section we give the necessary background on c-projective equivalence of {\it minimal\/} complex connections $\nabla$ on an almost complex manifold $(M,J)$ of $\dim\!M=2\dim_\C\!M=2n$ ($n>1$).
Such c-projective structures on $2n$-dimensional manifolds are the underlying structures of regular {\it normal\/} parabolic geometries of type $G/P$, where $G=SL(n+1,\C)$ and $P$ is the subgroup that stabilizes a complex line $\ell\subset\C^{n+1}$; both $G$ and $P$ are to be regarded as {\it real\/} Lie groups.
We recall some basic setup, referring to \cite{CS,Y,H} for further details. The parabolic subgroup $P$ induces the Lie algebra gradation on the space of trace-free complex matrices:
\[
\g=\mathfrak{sl}(n+1,\C)_\R = \g_{-1}\oplus\overbrace{\g_0\oplus\g_1}^\mathfrak{p}.
\] If $\ell$ is spanned by the first standard basis vector in $\C^{n+1}$, then
\com{
$$
\g_{-}=\left(\begin{array}{c|ccc}
0 & 0 & \cdots & 0 \\ \hline
* & 0 & \cdots & 0\\
\vdots & \vdots & \ddots & \vdots \\
* & 0 & \cdots & 0
\end{array} \right),\quad
\g_0=\left(\begin{array}{c|ccc}
* & 0 & \cdots & 0 \\ \hline
0 & * & \cdots & *\\
\vdots & \vdots & \ddots & \vdots \\
0 & * & \cdots & *
\end{array} \right),\quad \g_+=(\g_{-})^t.
$$ }
\[
\g_{-}= \left(\begin{array}{c|c}
0 & 0 \\ \hline
* & 0 \\
\end{array} \right), \quad
\g_0 = \left(\begin{array}{c|c}
* & 0 \\ \hline
0 & * \\
\end{array} \right), \quad
\g_{+}= \left(\begin{array}{c|c}
0 & * \\ \hline
0 & 0 \\
\end{array} \right),
\] using the blocks of size $1$ and $n$ along the diagonal. In fact, the gradation is induced by a (unique) grading element $Z \in \mathfrak{z}(\g_0)$, i.e.\ $\g_j$ is the eigenspace with {\em homogeneity} (eigenvalue) $j$ for $\operatorname{ad}_Z$; the standard choice is
$Z=\op{diag}(\frac{n}{n+1},\frac{-1}{n+1},\dots,\frac{-1}{n+1})$.
The fundamental invariant of any regular normal parabolic geometry is its harmonic curvature $\kappa_H$, which is the complete obstruction to local equivalence with the homogeneous model $G/P$ (its vanishing means flatness). For c-projective structures $\kappa_H=0$ is equivalent to $(M,J,[\nabla])$ being locally isomorphic to $(\C P^n,J_\text{can},[\nabla^\text{FS}])$.
Only in this (flat) case is the dimension of the symmetry algebra $\mathfrak{cp}(\nabla,J)$ equal to $\dim_\R\g=2(n^2+2n)$; otherwise the dimension is strictly smaller, and we obtain the gap of dimensions $\dim\g-\mathfrak{S}$.
The harmonic curvature $\kappa_H$ takes values in the space $\mathbb{V}=H^2_+(\g_{-},\g)$ consisting of all positive homogeneity components of the Lie algebra cohomology $H^2(\g_-,\g)$ with respect to the natural $\g_0$-action. For c-projective structures, $\mathbb{V}$ decomposes as a $\g_0$-module into irreducibles (irreps): $\mathbb{V}=\mathbb{V}_{{\rm I}}\oplus\mathbb{V}_{{\rm II}}\oplus\mathbb{V}_{{\rm III}}$ (here subscripts are mere numerations). Using the standard $(p,q)$-notation for the decomposition of tensors with respect to the almost complex structure $J$, we have\footnote{Below and throughout $A\odot B$ denotes the Cartan product of $A$ and $B$, i.e.\ the highest weight irreducible submodule of $A\ot B$ ("traces removed"). If both modules $A,B$ are complex then we can also form tensor/Cartan product over $\C$.} \cite{CEMN}:
$$ \mathbb{V}_{{\rm I}}=\left\{\begin{array}{ll} \La^{2,0}\g_{-}^*\odot_{\C}\mathfrak{sl}(\g_{-},\C), & n>2,\\ \La^{2,0}\g_{-}^*\odot_{\C}\g_{-}^*, & n=2;\end{array}\right.
$$
$$ \mathbb{V}_{{\rm II}}=\La^{1,1}\g_{-}^*\odot\mathfrak{sl}(\g_{-},\C); \qquad \mathbb{V}_{{\rm III}}=\La^{0,2}\g_{-}^*\ot_{\C}\g_{-}\simeq\La^2\g_{-}^*\ot_{\bar\C}\g_{-}.
$$ In the standard terminology, $\mathbb{V}_{{\rm I}}\oplus\mathbb{V}_{{\rm II}}$ is the space of curvatures, and $\mathbb{V}_{{\rm III}}$ is the space of torsions. With respect to $\g_0$-action, $\mathbb{V}_{{\rm I}}$ has homogeneity $2+\delta_{2,n}$, $\mathbb{V}_{{\rm II}}$ has homogeneity $2$, and $\mathbb{V}_{{\rm III}}$ has homogeneity $1$.
The harmonic curvature splits, in accordance with the above, into irreducible components (the projections onto which are the usual symmetrizers)
$$ \kappa_H=\kappa_{\rm{I}}+\kappa_{\rm{II}}+\kappa_{\rm{III}},
$$ where (we refer to \cite{CEMN} for explicit formulae; we only need to know the tensorial type to prove Theorem \ref{Thm2})
\begin{itemize}
\item $\kappa_{\rm{I}}$ is the $(2,0)$-part of Weyl projective curvature of $\nabla$ for $n > 2$, or the $(2,0)$-part of the Liouville tensor when $n=2$;
\item $\kappa_{\rm{II}}$ is the $(1,1)$-part of Weyl projective curvature tensor of $\nabla$;
\item $\kappa_{\rm{III}}$ is $\frac14N_J$ (torsion of a minimal complex connection $\nabla$).
\end{itemize} We remark that on a complex background $(M,J)$ ($\kappa_{\rm{III}} = 0$):
\begin{itemize}
\item Existence of a holomorphic connection in $[\nabla]$ is equivalent to $\kappa_{\rm{II}} = 0$ (see \cite[Prop.\ 3.1.17]{CS}).
\item $\kappa_{\rm{I}} = 0$ is a necessary condition for $(M,J,[\nabla])$ to be (pseudo-) K\"ahler
metrizable (the curvature is of type (1,1)).
\end{itemize}
We now summarize an abstract description of $\mathbb{V}_{{\rm I}}, \mathbb{V}_{{\rm II}}, \mathbb{V}_{{\rm III}}$ that will be used in the sequel. The Satake diagram encoding the real Lie algebra $\g = \mathfrak{sl}(n+1,\C)_\R$ has $n$ nodes in the top and bottom rows:
\[ \SLCR[].
\] The Dynkin diagram of the complexification $\g_\C \cong \mathfrak{sl}(n+1,\C) \times \mathfrak{sl}(n+1,\C)$ is obtained by removing all arrows from the above Satake diagram. As $\g$-modules, $\g_\C \cong \g \oplus \bar\g$, where $\g$ and $\bar\g$ correspond to the $\pm i$-eigenspaces for the natural $\g$-invariant complex structure on $\g$. Pictorially, $\g$ and $\bar\g$ correspond respectively to the top and bottom rows of the Satake diagram, and conjugation swaps these factors by reflection in the indicated arrows. The original real Lie algebra $\g$ is naturally identified with the fixed point set under conjugation, i.e.\ $\g \cong \{ x + \overline{x} : x \in \g \}$.
The choice of parabolic $\mathfrak{p}\subset\mathfrak{sl}(n+1,\C)_\R$ is encoded by marking the Satake diagram with crosses:
$$ \CPgen[].
$$
The Satake diagram of the semisimple part of $\g_0$ is obtained by removing the crossed nodes: $(\g_0)_\text{ss}\simeq\mathfrak{sl}(n,\C)_\R$. The parabolic $\mathfrak{p}_\C \subset \g_\C$ induces a grading of $\g_\C$ and we have $\mathbb{V}_\C = H^2(\g_-,\g) \otimes \C \cong H^2((\g_\C)_-, \g_\C)$. Using Kostant's version of the Bott--Borel--Weil theorem \cite{BE,CS}, the computation of $(\g_\C)_0$-module structure of $H^2((\g_\C)_-, \g_\C)$ is algorithmically straightforward. Namely, each $(\g_\C)_0$-irrep, denoted $\mathbb{W}_\mu$, occurs with multiplicity one and its {\em lowest} weight is $\mu=-w\cdot\nu$, where\footnote{When working with the complexification $\g_\C$, we use barred quantities in association with the second (bottom) $\mathfrak{sl}(n+1,\C)$ factor.}
\begin{itemize}
\item we use the affine action $\cdot$ of the Weyl group of $\g_\C$ on $\g_\C$-weights.
\item $w = (jk)$ is a length two element of the Hasse diagram \cite{BE,CS} of $(\g_\C,\mathfrak{p}_\C)$. Here: $w=(12)$, $(1\bar{1})$, or $(\bar{1}\bar{2})$.
\item $\nu$ is the highest (minus lowest) weight of (the adjoint representation of) a simple ideal in $\g_\C$. Here: The highest weight of $\mathfrak{sl}(n+1,\C)$ is $\lambda = \lambda_1 + \lambda_n$, expressed in terms of the fundamental weights $\{ \lambda_i \}$, and we have $\nu = \lambda$ or $\nu = \bar\lambda$.
\end{itemize}
We encode $\mu$ as follows: express $-\mu$ in terms of the fundamental weights of $\g_\C$ and mark a given node of the Dynkin diagram of $\g_\C$ with its corresponding coefficient \cite{BE}. Here, $\mathbb{V}_\C$ decomposes into six $(\g_\C)_0$-irreps occurring in three conjugate pairs, and this accounts for the three $\g_0$-irreps in $\mathbb{V}$. For the real case, we take the same marked Dynkin diagrams but now include the arrows so as to obtain a marked Satake diagram. Conjugate copies are indicated by the symbol $\operatorname{Cc}$.
\begin{table}[h]
$$
\hskip-7pt\begin{array}{c|c|c|c} \hline
\mbox{\small{Type}} & n > 3 & n =3 & n =2 \\ \hline
\mbox{I} & \CPgen{-4,1,1,0,0,1}{0,0,0,0,0,0} \oplus \mbox{Cc} & \CPthree{-4,1,2}{0,0,0}\oplus \mbox{Cc} & \CPtwo{-5,1}{0,0}\oplus \mbox{Cc} \\
\mbox{II} & \CPgen{-3,2,0,0,0,1}{-2,1,0,0,0,0} \oplus \mbox{Cc} & \CPthree{-3,2,1}{-2,1,0}\oplus \mbox{Cc} & \CPtwo{-3,3}{-2,1}\oplus \mbox{Cc} \\
\mbox{III} & \CPgen{1,0,0,0,0,1}{-3,0,1,0,0,0} \oplus \mbox{Cc} & \CPthree{1,0,1}{-3,0,1}\oplus \mbox{Cc} & \CPtwo{1,1}{-3,0}\oplus \mbox{Cc} \\ \hline \end{array}
$$
\caption{Irreducible harmonic curvature components}
\label{F:Kh-comp} \end{table}
Each $\g_0$-irrep, denoted as $\mathbb{V}_\mu$, complexifies to $(\mathbb{V}_\mu)_\C \cong \mathbb{W}_\mu \oplus \overline{\mathbb{W}_\mu}$ as $(\g_\C)_0$-irrep (and $\g_0$-irrep). Here, $\mathbb{V}_\mu$ is identified with its fixed point set under conjugation, i.e.\ $\mathbb{V}_\mu \cong \{ \phi + \overline{\phi} : \phi \in \mathbb{W}_\mu \}$. Kostant's theorem explicitly describes a lowest weight vector $\phi_0$ in each $(\g_\C)_0$-irrep $\mathbb{W}_\mu$. Without loss of generality, $\mu = -w \cdot \lambda$ with $w = (jk)$. Then
\[
\phi_0=e_{\alpha_j}\we e_{\sigma_j(\alpha_k)}\ot v,
\]
in terms of root vectors $e_\beta$, simple roots $\{ \alpha_j \}$, the simple reflection $\sigma_j$, and $v \in \g_\C$ a weight vector having weight $-w(\lambda)$.
\begin{table}[h]
\[
\begin{array}{c|c|c}
\mbox{{\small Type}} & w & \phi_0 \\ \hline \mbox{I} & (12) & \left\{ \begin{array}{ll} e_{\alpha_1} \wedge e_{\alpha_1+\alpha_2} \otimes e_{-\alpha_2-\dots-\alpha_n}, & n > 2; \\ e_{\alpha_1} \wedge e_{\alpha_1+\alpha_2} \otimes e_{\alpha_1}, & n = 2 \end{array} \right. \\ \mbox{II} & (1\bar{1}) & e_{\alpha_1} \wedge e_{\overline{\alpha}_1} \ot e_{-\alpha_2-\dots-\alpha_n} \\ \mbox{III} & (\bar{1}\bar{2}) & e_{\overline{\alpha}_1} \we e_{\overline{\alpha}_1+\overline{\alpha}_2} \ot e_{-\alpha_1-\dots-\alpha_n} \\ \hline \end{array}
\]
\caption{Lowest weight vectors for harmonic curvature modules}
\label{F:Kh-lw}
\end{table}
\section{Upper bound on the submaximal symmetry dimension}\label{S2}
A universal upper bound $\mathfrak{U}$ on the submaximal symmetry dimension $\mathfrak{S}$ for regular normal parabolic geometries of type $(G,P)$ was proved in \cite{KT}. In terms of $\mathbb{V} = H^2_+(\g_-,\g)$, we have
$$
\mathfrak{U} := \max\{\dim(\fa^\psi) :\,0\neq\psi\in \mathbb{V}\},
$$ where $\fa^\psi$ is the {\em Tanaka prolongation} of the pair $(\g_{-},\fa_0=\mathfrak{ann}_{\g_0}(\psi))$ in $\g$. Namely, $\fa^\psi = \g_- \oplus \fa_0 \oplus \fa_+$ is the graded Lie subalgebra of $\g$ with
\begin{equation}\label{E:a-phi} \fa_k = \{ X \in \g_k : \operatorname{ad}^k_{\g_{-1}}(X) \,\cdot \,\psi = 0 \},\ k\ge1.
\end{equation}
To calculate $\mathfrak{U}$ it suffices to decompose $\mathbb{V}$ into $\g_0$-irreps, calculate the corresponding maximum for each submodule, and then take the maximum of these. The calculation becomes particularly easy for those $(G,P)$ that are {\em prolongation-rigid\/} (as defined in \cite{KT}): for any $0 \neq \psi \in \mathbb{V} = H^2_+(\g_-,\g)$, we have $\fa^\psi_+ = 0$, so that $ \fa^\psi = \g_- \oplus \mathfrak{ann}_{\g_0}(\psi)$.
In \cite{KT}, the {\em complex\/} case was thoroughly investigated. In particular, if $(\g,\fp)$ are complex Lie algebras and $\mathbb{V}_\mu \subset \mathbb{V}$ is a $\g_0$-irrep with lowest weight vector $\phi_0$ (and lowest weight $\mu$), then it was proved in \cite{KT} that
\begin{enumerate}
\item[(i)] $\fU_\mu = \max\{\dim(\fa^\psi) :\,0\neq\psi\in \mathbb{V}_\mu \}$ is realized by $\dim(\fa^{\phi_0})$;
\item[(ii)] $\fa^{\phi_0}_+ = 0$ if and only if all integers above crossed nodes for $\mu$ are {\it nonzero\/}. In this case, $\fU_\mu = \dim(\g_-) + \dim(\mathfrak{ann}(\phi_0))$.
\end{enumerate}
If (ii) is satisfied for each $\mathbb{V}_\mu \subset \mathbb{V}$, then $(G,P)$ is prolongation-rigid.
We now consider the case of general {\em real\/} Lie groups underlying given complex Lie groups $(G,P)$, and refer to the marked Satake diagram notation as before (see \cite{CS}). The {\em complexification} of any given real $G_0$-irrep $\mathbb{V}_\mu\subset\mathbb{V} = H^2_+(\g_-,\g)$ is either:
\begin{enumerate}
\item[(i)] $\mathbb{W}_\mu \cong \overline{\mathbb{W}_\mu}$, or
\item[(ii)] $\mathbb{W}_\mu \oplus \overline{\mathbb{W}_\mu}$ (if $\mathbb{W}_\mu \not\cong \overline{\mathbb{W}_\mu}$)
\end{enumerate} for some $\g_\C$-weight $\mu$. In either case, we will (abuse notation and) refer to the given (real) $G_0$-irrep as $\mathbb{V}_\mu$. Note that (i) occurs if and only if $\mu$ is self-conjugate. For c-projective structures, only (ii) occurs. Defining $\fU_\mu = \max\{\dim(\fa^\psi) :\,0\neq\psi\in \mathbb{V}_\mu \}$, where now $\fa^\psi$ is a {\em real} Lie algebra, we respectively have:
\begin{enumerate}
\item[(i)] $\fU_\mu = \max\{\dim(\fa^{\phi}) :\,0\neq\phi\in \mathbb{W}_\mu\}$;
\item[(ii)] $\fU_\mu = \max\{\dim(\fa^{\phi + \bar\phi}) :\,0\neq\phi\in \mathbb{W}_\mu\}$.
\end{enumerate}
The following general result is based on \cite[Prop.\ 3.1.1]{KT}.
\begin{prop}\label{P:lw-vec} Let $G$ be a complex semisimple Lie group, and let $P$ be a parabolic subgroup with reductive part $G_0$. Let $\mathbb{W}$ be a (complex) $G_0$-irrep with $\phi_0 \in \mathbb{W}$ an extremal weight vector. Regarding $G$ and $P$ as real Lie groups, we have for $k \geq 0$ and any $0 \neq \phi \in \mathbb{W}$:
\begin{enumerate}
\item[(i)] if $\mathbb{W} \cong \overline{\mathbb{W}}$: $\dim(\fa_k^{\phi}) \leq \dim(\fa_k^{\phi_0})$;
\item[(ii)] if $\mathbb{W} \not\cong \overline{\mathbb{W}}$: $\dim(\fa_k^{\phi + \bar\phi}) \leq \dim(\fa_k^{\phi_0 + \overline{\phi_0}})$.
\end{enumerate}
\end{prop}
\begin{proof} We prove (ii).
Fix $k \geq 0$, and let $\psi = \phi + \bar\phi$. From \eqref{E:a-phi}, $\fa_k^\psi = \ker(M(\psi))$, where $M(\psi)$ is some real matrix that depends $\R$-linearly on $\psi$. The rank of a matrix is a lower semi-continuous function of its entries, so the function $\mathcal{F} : \mathbb{W} \to \mathbb{Z}$ given by
$\mathcal{F}(\phi) = \dim_\R(\fa_k^{\phi + \bar\phi})$ is upper semi-continuous. Clearly, $\mathcal{F}(c \phi) = \mathcal{F}(\phi)$ for any $c \in \R^\times$. Note that $\mathcal{F}$ is constant on $G_0$-orbits, and since $\mathfrak{z}(\g_0)$ contains a grading element, then $G_0$ contains elements that act on $\mathbb{W}$ by arbitrary $c \in \C^\times$. Thus, $\mathcal{F}$ descends to the {\em complex} projectivization $\mathbb{P}(\mathbb{W})$.
It is well-known that $\mathbb{P}(\mathbb{W})$ contains a {\em unique} closed $G_0$-orbit, namely $\mathcal{O} = G_0 \cdot [\phi_0]$. Thus, $\mathcal{O}$ is in the closure of {\em every} $G_0$-orbit in $\mathbb{P}(\mathbb{W})$. Hence, since $\mathcal{F} : \mathbb{P}(\mathbb{W}) \to \mathbb{Z}$ is upper semi-continuous and constant on $G_0$-orbits, then (ii) follows immediately. Proving (i) is similar.
\end{proof}
\begin{prop}\label{P:PR} Let $G$ be a complex semisimple Lie group, and let $P$ be a parabolic subgroup with reductive part $G_0$. Regard $G$ and $P$ as real Lie groups. Then $(G,P)$ is prolongation-rigid if for every $G_0$-irrep $\mathbb{V}_\mu \subset \mathbb{V} = H^2_+(\g_-,\g)$
the integers over every pair of crossed nodes on the Satake diagram of $\mu$ joined by an arrow are not both zero.
\end{prop}
\begin{proof} It suffices to prove the result for a single $\mathbb{V}_\mu$. We have the $\g_0$-reps $(\mathbb{V}_\mu)_\C \cong \mathbb{W}_\mu$ or $(\mathbb{V}_\mu)_\C \cong \mathbb{W}_\mu \oplus \overline{\mathbb{W}_\mu}$. Regard these as $(\g_\C)_0$-reps. Repeating the proof of Proposition \ref{P:lw-vec}, but now in the complex case, we find that the complex Lie algebra $\fa^\psi_k$ for each $k \geq 0$ has maximum dimension among $0 \neq \psi \in \mathbb{V}_\mu$ when $\psi = \phi_0$ or $\psi = \phi_0 + \overline{\phi_0}$ respectively. If the Satake (hence the Dynkin) diagram for $\mu$ satisfies the given condition, then by \cite[eqn (3.2) and Thm.\ 3.3.3]{KT}, we have $\fa^{\phi_0}_+ = 0$ or $\fa^{\phi_0 + \overline{\phi_0}}_+ = 0$ respectively. This being true for each $\mathbb{V}_\mu$ forces prolongation-rigidity of $(\g_\C,\mathfrak{p}_\C)$, and hence prolongation-rigidity of $(\g,\mathfrak{p})$.
\end{proof}
From Table \ref{F:Kh-comp}, we immediately see that the criteria of Proposition \ref{P:PR} are satisfied for (minimal) c-projective structures.
\begin{cor}\label{cor2}
C-projective geometry is prolongation-rigid. \qed
\end{cor}
We will use the notations $\fS_i$ and $\fU_i$ referring to a specific curvature type. Thus, for each c-projective type, since $2n = \dim(\g_-)$, we have
\begin{align} \label{E:SA}
\mathfrak{S}_\mu \leq \mathfrak{U}_\mu = \dim(\fa^{\phi_0 + \overline{\phi_0}}) = 2n + \dim(\mathfrak{ann}_{\g_0}(\phi_0 + \overline{\phi_0})).
\end{align}
Using the data from Tables \ref{F:Kh-comp} and \ref{F:Kh-lw}, the annihilators $\fa_0 = \mathfrak{ann}_{\g_0}(\phi_0 + \overline{\phi_0})$ for all three types are computed in Table \ref{F:ann}.
\begin{footnotesize}
\begin{center}
\begin{table}[h]
\[
\begin{array}{|c|c|c|} \hline
\mbox{Type} & \fa_0 \,\, (n \geq 3) & \fa_0 \,\, (n=2) \\ \hline
\mbox{I} & \begin{array}{c}
\left(\begin{array}{c|cccccc}
a_0 & 0 & 0 & 0 & \cdots & 0 & 0\\ \hline
0 & a_1 & 0 & 0 & \cdots & 0 & 0\\
0 & * & a_2 & 0 & \cdots & 0 & 0\\
0 & * & * & a_3 & \cdots & * & 0\\
\vdots & \vdots & \vdots & \vdots & \ddots & \vdots & 0\\
0 & * & * & * & \cdots & a_{n-1} & 0\\
0 & * & * & * & \cdots & * & a_n\\
\end{array} \right) \\
a_0+\dots+a_n=0,\\
2(a_0-a_1)-a_2+a_n=0,\\
\dim_\R(\fa_0) = 2(n^2-3n+5)
\end{array} &
\begin{array}{c}
\left(\begin{array}{c|cc}
a_0 & 0 & 0 \\ \hline
0 & a_1 & 0 \\
0 & * & a_2 \\
\end{array} \right) \\
a_0 + a_1 + a_2 = 0,\\
3a_0 - 2a_1 - a_2 = 0,\\
\dim_\R(\fa_0) = 4
\end{array}\\ \hline
\mbox{II} &
\begin{array}{c}
\left(\begin{array}{c|cccccc}
a_0 & 0 & 0 & 0 & \cdots & 0 & 0\\ \hline
0 & a_1 & 0 & 0 & \cdots & 0 & 0\\
0 & * & a_2 & * & \cdots & * & 0\\
0 & * & * & a_3 & \cdots & * & 0\\
\vdots & \vdots & \vdots & \vdots & \ddots & \vdots & 0\\
0 & * & * & * & \cdots & a_{n-1} & 0\\
0 & * & * & * & \cdots & * & a_n\\ \end{array} \right)\\
a_0+\dots+a_n=0,\\
2\op{Re}(a_0-a_1)=a_1-a_n,\\
\dim_\R(\fa_0) = 2(n^2-2n+2)
\end{array} &
\begin{array}{c}
\left(\begin{array}{c|cc}
a_0 & 0 & 0 \\ \hline
0 & a_1 & 0 \\
0 & * & a_2 \\
\end{array} \right) \\
a_0 + a_1 + a_2 = 0,\\
2\op{Re}(a_0-a_1) = a_1 - a_2,\\
\dim_\R(\fa_0) = 4
\end{array}\\ \hline
\mbox{III} &
\begin{array}{c}
\left(\begin{array}{c|cccccc}
a_0 & 0 & 0 & 0 & \cdots & 0 & 0\\ \hline
0 & a_1 & * & 0 & \cdots & 0 & 0\\
0 & * & a_2 & 0 & \cdots & 0 & 0\\
0 & * & * & a_3 & \cdots & * & 0\\
\vdots & \vdots & \vdots & \vdots & \ddots & \vdots & 0\\
0 & * & * & * & \cdots & a_{n-1} & 0\\
0 & * & * & * & \cdots & * & a_n\\ \end{array} \right)\\
a_0+\dots+a_n=0,\\
2\overline{a_0}-a_0=\overline{a_1}+\overline{a_2}-a_n\\
\dim_\R(\fa_0) = 2(n^2-3n+6)
\end{array} &
\begin{array}{c}
\left(\begin{array}{c|cc}
a_0 & 0 & 0 \\ \hline
0 & a_1 & 0 \\
0 & * & a_2 \\
\end{array} \right) \\
a_0 + a_1 + a_2 = 0,\\
2\overline{a_0}-a_0=\overline{a_1}+\overline{a_2}-a_2,\\
\dim_\R(\fa_0) = 4
\end{array}\\ \hline
\end{array}
\]
\caption{Annihilators $\fa_0 = \mathfrak{ann}_{\g_0}(\phi_0 + \overline{\phi_0})$ associated to harmonic curvature types}
\label{F:ann}
\end{table}
\end{center}
\end{footnotesize}
The following recipe for computing $\mathfrak{ann}_{\g_0}(\phi_0 + \overline{\phi_0})$ is analogous to those discussed in \cite{KT}:
\begin{enumerate}
\item Put asterisks over any pair of uncrossed nodes connected by an arrow in the Satake diagram of $\mu$, if one of the nodes has a nonzero coefficient. This determines an (opposite\footnote{Standard parabolics in this paper are (block) upper triangular, but since we use {\em lowest} weight vectors, the annihilators in $\g_0$ have (block) lower triangular shape.}) parabolic in $(\g_0)_{ss}$ and hence the general shape of $\mathfrak{ann}_{\g_0}(\phi_0 + \overline{\phi_0})$.
\item A diagonal element $X \in \mathfrak{sl}(n+1,\C)_\R$ belongs to the annihilator only if $\mu(X) = 0$, since
\[
X \cdot (\phi_0 + \overline{\phi_0}) = \mu(X) \phi_0 + \overline{\mu}(X) \overline{\phi_0}.
\]
This condition becomes clear by converting $\mu$ into root notation.
\end{enumerate}
\begin{examp}
Consider the type III case when $n \geq 3$. Then
\[
\CPgen{1,0,0,0,0,1}{-3,0,1,0,0,0} \quad\leadsto\quad
\begin{tikzpicture}[scale=\myscale,baseline=-3pt]
\tikzstyle{every node}=[font=\tiny]
\newcommand\myb{0.3}
\bond{0,\myb};
\bond{1,\myb};
\bond{2,\myb};
\tdots{3,\myb};
\bond{4,\myb};
\draw (0,-\myb) node[above=-1pt] {$\updownarrow$};
\draw (1,-\myb) node[above=-1pt] {$\updownarrow$};
\draw (2,-\myb) node[above=-1pt] {$\updownarrow$};
\draw (3,-\myb) node[above=-1pt] {$\updownarrow$};
\draw (4,-\myb) node[above=-1pt] {$\updownarrow$};
\draw (5,-\myb) node[above=-1pt] {$\updownarrow$};
\bond{0,-\myb};
\bond{1,-\myb};
\bond{2,-\myb};
\tdots{3,-\myb};
\bond{4,-\myb};
\DDnode{x}{0,\myb}{};
\DDnode{w}{1,\myb}{};
\DDnode{s}{2,\myb}{};
\DDnode{w}{3,\myb}{};
\DDnode{w}{4,\myb}{};
\DDnode{s}{5,\myb}{};
\DDnode{x}{0,-\myb}{};
\DDnode{w}{1,-\myb}{};
\DDnode{s}{2,-\myb}{};
\DDnode{w}{3,-\myb}{};
\DDnode{w}{4,-\myb}{};
\DDnode{s}{5,-\myb}{};
\draw (0,-\myb) node[below=2pt] {};
\draw (1,-\myb) node[below=2pt] {};
\draw (2,-\myb) node[below=2pt] {};
\draw (3,-\myb) node[below=2pt] {};
\draw (4,-\myb) node[below=2pt] {};
\draw (5,-\myb) node[below=2pt] {};
\end{tikzpicture}
\]
determines the shape of the annihilator as listed in Table \ref{F:ann}. Now express the weight in terms of the simple roots $\alpha_j = \epsilon_j - \epsilon_{j+1}$, where $\epsilon_j$ are the functionals that extract the $j$-th diagonal element of $\mathfrak{sl}(n+1,\C)$:
\begin{align*}
-\mu &= \lambda_1 + \lambda_n - 3\overline{\lambda_1} + \overline{\lambda_3} = \alpha_1 + \dots + \alpha_n - 2\overline{\alpha_1} - \overline{\alpha_2}\\
&= \epsilon_1 - \epsilon_{n+1} - 2\overline{\epsilon_1} + \overline{\epsilon_2} + \overline{\epsilon_3}.
\end{align*}
This determines the remaining condition on the annihilator.
\end{examp}
Using \eqref{E:SA}, we compute each $\mathfrak{U}_\mu$ and obtain the dimensions listed for $\mathfrak{S}_\mu$ in Theorem \ref{Thm2}, except for type I, $n=2$ for which $\mathfrak{U}_{{\rm I}} = 8$.
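Explicitly, combining \eqref{E:SA} with the annihilator dimensions from Table \ref{F:ann}, for $n\ge3$:
$$
\mathfrak{U}_{{\rm I}}=2n+2(n^2-3n+5)=2n^2-4n+10,\qquad
\mathfrak{U}_{{\rm II}}=2n+2(n^2-2n+2)=2n^2-2n+4,
$$
$$
\mathfrak{U}_{{\rm III}}=2n+2(n^2-3n+6)=2n^2-4n+12,
$$
while for $n=2$ all three annihilators have real dimension $4$, whence $\mathfrak{U}_{{\rm I}}=\mathfrak{U}_{{\rm II}}=\mathfrak{U}_{{\rm III}}=4+4=8$.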
In the type I case, the Cartan geometry is equivalent to a complex parabolic geometry of type $(G,P)$ (where these are regarded as complex Lie groups), whose underlying structure is a holomorphic projective structure. These submaximal symmetry dimensions were classified in \cite{KT} (see \cite{E$_1$} for the real projective case). In terms of the $\C$-dimension $n$ of the underlying complex manifold, we have $\fS_\C = n^2 - 2n + 5$ when $n \geq 3$ and $\fS_\C = 3$ when $n=2$. Regarded as a c-projective structure of type I, these complex dimensions simply double to get the corresponding real dimensions. The $n=2$ case is the well-known exception that is the holomorphic analogue of 2-dimensional projective structures, see \cite[\S4.3]{KT}. This finishes the type I case.
It remains to show that $\fS_{{\rm II}} = \fU_{{\rm II}}$ and $\fS_{{\rm III}} = \fU_{{\rm III}}$. This is accomplished in Section \ref{S3} by exhibiting type II and type III models whose c-projective symmetries realize the calculated upper bounds. Alternatively, we now give an abstract proof of realizability via the same technique as used in \cite[\S4.1]{KT}. There, an abstract model of the regular normal parabolic geometry was constructed by a deformation idea. Here, fix a type and consider the (graded) Tanaka algebra $\fa := \fa^{\psi} = \g_{-}\oplus\fa_0^{\psi} \subset\g$, where $\psi = \phi_0 + \overline{\phi_0}$. Define $\ff = \fa$ as vector spaces, and consider the deformed bracket $[\cdot,\cdot]_{\ff} = [\cdot,\cdot]_\fa - \psi(\cdot,\cdot)$ by regarding $\psi$ as a 2-cochain. From Table \ref{F:Kh-lw}, we see that with the exception of the $n=2$ type I case, $\psi$ has image in $\g_- \subset \fa$, so $[\cdot,\cdot]_{\ff}$ is well-defined. As in \cite[Lemma 4.1.1]{KT}, the Jacobi identity on $\ff$ reduces to:
\begin{align} \label{E:Jac} \op{Jac}_{\ff}(x,y,z) = \psi(\psi(x,y),z) + \psi(\psi(y,z),x) + \psi(\psi(z,x),y).
\end{align}
From Table \ref{F:Kh-lw}, for $n \geq 3$, the output of $\psi$ does not depend on any of the root spaces involved in the input to the 2-cochain $\psi$. Hence, by \eqref{E:Jac}, the Jacobi identity holds for $\ff$. For $n=2$, this argument works only in the type II case. In all valid cases, i.e.\ when $\ff$ is a Lie algebra, $\ff/\fa_0$ integrates to a {\em local} homogeneous space $M = F / A_0$. This space will support a c-projective structure whose symmetry algebra is isomorphic to $\ff$. This is asserted by an extension functor argument \cite{CS}: Consider the principal $P$-bundle $\mathcal{G} = F \times_{A_0} P \to M$. An $\ff$-invariant Cartan connection of type $(G,P)$ is determined by the algebraic data of a linear map $\varphi : \ff \to \g$ that is $\fa_0$-equivariant and satisfies $\varphi|_{\fa_0} = \operatorname{id}_{\fa_0}$. Using the vector space identification $\ff = \fa$, consider the $\ff$-invariant Cartan connection determined by $\varphi = \operatorname{id}_{\ff}$. Its full curvature corresponds to
\[
[\varphi(x),\varphi(y)]_\fa - \varphi([x,y]_\ff) = [x,y]_\fa - [x,y]_\ff = \psi(x,y), \qquad \forall x,y \in \g_-.
\]
Hence, the curvature is purely harmonic. Thus, we have constructed a regular normal Cartan geometry of type $(G,P)$, and its underlying structure is a c-projective geometry of the given type.
When $n=2$, the above argument fails for:
\begin{itemize}
\item type I: the deformation by $\psi$ is not well-defined. As remarked earlier, $\fS_{{\rm I}} = 6 < \fU_{{\rm I}} = 8$.
\item type III: the Jacobi identity fails for $\ff$. However, a different deformation of $\fa_0$ is possible, and a model is given in Section \ref{S3}; see the details in Appendix \ref{S.A}. Thus, $\fS_{{\rm III}} = \fU_{{\rm III}} = 8$.
\end{itemize} This concludes the proof of Theorem \ref{Thm2}.
\section{Submaximal models in the three curvature types}\label{S3}
Let us specify explicit models\footnote{We renumber some indices in the matrix models $\mathfrak{a}_0$ of Section \ref{S2}, e.g.\ $2\leftrightarrow n$ for type II etc. This helps to see the stabilization of the models.} realizing the universal bounds for non-flat c-projective structures of the pure curvature types $\mathbb{V}_i$. We do not claim uniqueness of these models at this point; they only prove sharpness of the estimates.
{\bf Type I, ${\mathbf{n>2}}$.} This is the holomorphic version of Egorov's symmetric connection \cite{E$_1$}, given in the real case by the Christoffel symbols $\Gamma^1_{23}=x^2$ and $\Gamma^i_{jk}=0$ otherwise, in coordinates $(x^1,\dots,x^n)$ on $\R^n$.
The holomorphic version $\nabla$ is given by (to get the real structure we add the complex conjugate of every equation, abbreviated below as +Cc)
$$ \Gamma^1_{23}=z^2\ \text{ (+Cc: }\Gamma^{\bar1}_{\bar2\bar3}=\overline{z^2})
$$ (and $\Gamma^l_{jk}=0$ for all other barred/un-barred indices) in the coordinate system $(z^1,\dots,z^n)$ on $\C^n$. The complex structure is standard $J=i$.
In the real coordinates $(\tilde x^1,\dots,\tilde x^{2n})$ given by $z^k=\tilde x^{2k-1}+i \tilde x^{2k}$ we have the following non-zero Christoffel symbols:
$$ \Gamma^{\tilde1}_{\tilde3\tilde5}=\Gamma^{\tilde2}_{\tilde3\tilde6}=\Gamma^{\tilde2}_{\tilde4\tilde5}= -\Gamma^{\tilde1}_{\tilde4\tilde6}=\tilde x^3, \quad \Gamma^{\tilde2}_{\tilde3\tilde5}=-\Gamma^{\tilde1}_{\tilde3\tilde6}=-\Gamma^{\tilde1}_{\tilde4\tilde5}= -\Gamma^{\tilde2}_{\tilde4\tilde6}=\tilde x^4.
$$ The harmonic curvature has type I and is non-zero: $\kappa_H=\kappa_{{\rm I}}\ne0$. Indeed, the curvature of $\nabla$ in the complex coordinates is equal to (here and below an endomorphism value such as $dz^2\ot\p_{z^1}$ is written as the corresponding linear vector field $z^2\p_{z^1}$)
$$ W_\nabla=dz^2\we dz^3\ot z^2\p_{z^1}\ \text{ (+Cc)}.
$$
The c-projective symmetries are found from the equations specified in \S\ref{S1} to be the real and imaginary parts of the following (linearly independent) holomorphic vector fields:
\begin{gather*} \p_{z^1},\quad \p_{z^3},\ \dots,\ \p_{z^n}, \quad z^i\p_{z^j}\quad (i\ge2,\ j\ne2,3),\\ 2z^1\p_{z^1}+z^2\p_{z^2},\quad z^1\p_{z^1}+z^3\p_{z^3},\quad z^2z^3\p_{z^1} - \p_{z^2},\quad (z^2)^3\p_{z^1} - 3z^2\p_{z^3}.
\end{gather*} Since the totality of these $2\cdot(n^2-2n+5)$ coincides with the universal upper bound, these are all symmetries, and so the above $(J,[\nabla])$ is a sub-maximal c-projective structure of curvature type I.
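As an illustration, the \texttt{sympy} checker sketched in the Introduction can be applied to this model directly (for $n=3$); below $v_1$ is the field $2z^1\p_{z^1}+z^2\p_{z^2}+{}$Cc written in the real coordinates, while $v_2$ is not even $J$-holomorphic:
\begin{verbatim}
# Usage sketch (assumes is_cprojective_symmetry from the Introduction is
# defined): the holomorphic Egorov model for n = 3 in real coordinates.
import sympy as sp
x = list(sp.symbols('x1:7'))                    # \tilde x^1, ..., \tilde x^6
m = 6
G = [[[sp.S(0)]*m for _ in range(m)] for _ in range(m)]
for (i, j, k, val) in [(0,2,4, x[2]), (1,2,5, x[2]), (1,3,4, x[2]), (0,3,5,-x[2]),
                       (1,2,4, x[3]), (0,2,5,-x[3]), (0,3,4,-x[3]), (1,3,5,-x[3])]:
    G[i][j][k] = G[i][k][j] = val               # symmetric in the lower indices
J = [[sp.S(0)]*m for _ in range(m)]
for k in range(3):
    J[2*k+1][2*k], J[2*k][2*k+1] = sp.S(1), sp.S(-1)   # standard J = i
v1 = [2*x[0], 2*x[1], x[2], x[3], 0, 0]         # 2 z^1 d_{z^1} + z^2 d_{z^2} + Cc
v2 = [x[0], 0, 0, 0, 0, 0]                      # not J-holomorphic
print(is_cprojective_symmetry(G, J, v1, x))     # True
print(is_cprojective_symmetry(G, J, v2, x))     # False
\end{verbatim}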
{\bf Type I, ${\mathbf{n=2}}$.} Real projective structures on $\R^2$ were studied by Lie and Liouville \cite{Lio}, and Tresse \cite{Tr} classified submaximal projective connections (in retrospect, as this notion was introduced later by Cartan \cite{C}; Tresse studied the corresponding 2nd order ODEs). Complexification yields a submaximal c-projective structure with respect to the standard complex structure $J=i$ on $\C^2$: $\nabla$ is the complex connection with the non-zero Christoffel symbols
$$ \Gamma^1_{22}=-\Gamma^1_{11}=\frac1{2z^1}\ \text{ (+Cc).}
$$ The c-projective symmetries are real and imaginary parts of the holomorphic fields (altogether 6 symmetries)
$$ \p_{z^2},\ z^1\p_{z^1}+z^2\p_{z^2},\ z^1z^2\p_{z^1}+\tfrac12(z^2)^2\p_{z^2}.
$$
{\bf Type II.} Consider the complex connection $\nabla$ with respect to the standard complex structure $J=i$ on $\C^n$ given in the complex coordinates $(z^1,\dots,z^n)$ by non-zero Christoffel symbols
\begin{equation}\label{subCmax} \Gamma_{11}^2=\overline{z^1}\quad\text{(+Cc: } \Gamma_{\bar1\bar1}^{\bar2}=z^1).
\end{equation} Its curvature has pure type II, $\kappa_H=\kappa_{{\rm II}}\ne0$:
$$ W_\nabla=dz^1\we d\overline{z^1}\otimes z^1\p_{z^2}\ \text{ (+Cc)}.
$$ The c-projective symmetries are found from the equations specified in \S\ref{S1} to be the real and imaginary parts of the following (linearly independent) complex-valued vector fields:
\begin{gather*} \p_{z^2},\ \dots,\ \p_{z^n}, \quad z^i\p_{z^j}\quad (i\ne2,\ j>1),\\ z^1\p_{z^1}+2z^2\p_{z^2}+\overline{z^2}\p_{\overline{z^2}},\quad \p_{z^1}-\tfrac12(\overline{z^1})^2\p_{\overline{z^2}}.
\end{gather*} Since the totality of these $2(n^2-n+2)$ coincides with the universal upper bound, these are all symmetries, and so the above $(J,[\nabla])$ is a sub-maximal c-projective structure of curvature type II.
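A direct check of the curvature type: since $[\p_{z^1},\p_{\overline{z^1}}]=0$, up to skew-symmetry and complex conjugation the only non-vanishing curvature operator of (\ref{subCmax}) is
$$
R(\p_{z^1},\p_{\overline{z^1}})\,\p_{z^1}=
\nabla_{\p_{z^1}}\nabla_{\p_{\overline{z^1}}}\p_{z^1}-\nabla_{\p_{\overline{z^1}}}\nabla_{\p_{z^1}}\p_{z^1}=-\p_{z^2}\ \text{ (+Cc)},
$$
so the 2-form part of $R_\nabla$ pairs one holomorphic with one anti-holomorphic direction, i.e.\ the curvature is indeed of type $(1,1)$.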
{\bf Type III, ${\mathbf{n>2}}$.} Submaximally symmetric almost complex structures on $\C^3$, i.e.\ maximally symmetric (measured via functional dimension and rank) among all non-integrable $J$, were classified in \cite{K$_4$}. There are two different such structures, but only one of them has Nijenhuis tensor given by the lowest weight vector from Kostant's version of the Bott--Borel--Weil theorem. Namely, the structure in complex coordinates $(z^1,z^2,z^3)$ is given by
$$ J\p_{z^1}=i\p_{z^1}+z^2\p_{\overline{z^3}},\ J\p_{z^2}=i\p_{z^2},\ J\p_{z^3}=i\p_{z^3}.
$$ Let us extend it to $\C^n$ by multiplying with $(\C^{n-3},i)$, i.e.\ letting $J\p_{z^k}=i\p_{z^k}$ for $k>3$.
In the real coordinates $(\tilde x^1,\dots,\tilde x^6)$ given by $z^k=\tilde x^{2k-1}+i \tilde x^{2k}$ we have:
\begin{gather*} J\p_{\tilde x^1}=\p_{\tilde x^2}+\tilde x^3\p_{\tilde x^5}-\tilde x^4\p_{\tilde x^6},\ J\p_{\tilde x^2}=-\p_{\tilde x^1}-\tilde x^4\p_{\tilde x^5}-\tilde x^3\p_{\tilde x^6},\\ J\p_{\tilde x^{2k-1}}=\p_{\tilde x^{2k}},\ J\p_{\tilde x^{2k}}=-\p_{\tilde x^{2k-1}}\ (k>1).
\end{gather*}
To find a complex connection for this $J$, let $\tilde\nabla=d$ be the trivial connection, i.e.\ its Christoffel symbols vanish in the given coordinate system. Then $\nabla=\frac12(\tilde\nabla-J\tilde\nabla J)$ is a complex connection. Its non-zero Christoffel symbols are
$$ \Gamma_{21}^{\bar3}=\frac{i}2\ \text{ (+Cc: } \Gamma_{\bar2\bar1}^3=-\frac{i}2).
$$ In real coordinates these write so:
$$ \Gamma_{\tilde3\tilde1}^{\tilde6}=\Gamma_{\tilde3\tilde2}^{\tilde5}= \Gamma_{\tilde4\tilde1}^{\tilde5}=-\Gamma_{\tilde4\tilde2}^{\tilde6}=-\frac12,
$$ and, in particular, $\Gamma_{\tilde1\tilde3}^{\tilde6}=\Gamma_{\tilde2\tilde3}^{\tilde5}= \Gamma_{\tilde1\tilde4}^{\tilde5}=\Gamma_{\tilde2\tilde4}^{\tilde6}=0$.
Thus the torsion $T_\nabla\neq0$, while the curvature $R_\nabla=0$. In fact,
$$ N_J=-2i\,dz^1\we dz^2\ot\p_{\overline{z^3}}\ \text{ (+Cc)}.
$$ Consequently, $\kappa_H=\kappa_{\rm{III}}\neq0$, the connection $\nabla$ is minimal and the c-projective structure $(J,[\nabla])$ has curvature type III.
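The Nijenhuis tensor can also be checked by a direct computation in the real coordinates above. The following \texttt{sympy} sketch (our notation, independent of the text) evaluates the coordinate formula $N^k_{ij}=J^a_i\p_a J^k_j-J^a_j\p_a J^k_i+J^k_a\p_j J^a_i-J^k_a\p_i J^a_j$ and lists the non-zero components; they are supported on $\p_{\tilde x^5},\p_{\tilde x^6}$, in agreement with the displayed $N_J$:
\begin{verbatim}
# Sketch: Nijenhuis tensor of the above J in real coordinates (n = 3).
import sympy as sp
x = list(sp.symbols('x1:7'))
m = 6
J = [[sp.S(0)]*m for _ in range(m)]
for k in range(3):
    J[2*k+1][2*k], J[2*k][2*k+1] = sp.S(1), sp.S(-1)    # standard part
J[4][0], J[5][0] = x[2], -x[3]      # J d_1 = d_2 + x^3 d_5 - x^4 d_6
J[4][1], J[5][1] = -x[3], -x[2]     # J d_2 = -d_1 - x^4 d_5 - x^3 d_6
assert all(sp.simplify(sum(J[i][a]*J[a][j] for a in range(m))
                       + (1 if i == j else 0)) == 0
           for i in range(m) for j in range(m))          # J^2 = -id
N = [[[sp.simplify(sum(J[a][i]*sp.diff(J[k][j], x[a])
                       - J[a][j]*sp.diff(J[k][i], x[a])
                       + J[k][a]*sp.diff(J[a][i], x[j])
                       - J[k][a]*sp.diff(J[a][j], x[i]) for a in range(m)))
      for j in range(m)] for i in range(m)] for k in range(m)]
print([(k+1, i+1, j+1, N[k][i][j]) for k in range(m)
       for i in range(m) for j in range(i+1, m) if N[k][i][j] != 0])
\end{verbatim}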
The c-projective symmetries are found from the equations specified in \S\ref{S1} to be the real and imaginary parts of the following (linearly independent) complex-valued vector fields:
\begin{gather*} \p_{z^1},\quad \p_{z^3},\ \dots,\ \p_{z^n}, \quad z^i\p_{z^j}\quad (i\ne3,\ j>2),\\ z^1\p_{z^1}+\overline{z^3}\p_{\overline{z^3}},\quad z^2\p_{z^2}+\overline{z^3}\p_{\overline{z^3}},\quad \p_{z^2}+\tfrac{z^1}{2i}(\p_{z^3}+\p_{\overline{z^3}}),\\ z^1\p_{z^2}-\tfrac{i}4(z^1)^2\p_{\overline{z^3}},\quad z^2\p_{z^1}-\tfrac{i}4(z^2)^2\p_{\overline{z^3}}.
\end{gather*} Since the totality of these $2\cdot(n^2-2n+6)$ coincides with the universal upper bound, these are all symmetries, and so the above $(J,[\nabla])$ is a sub-maximal c-projective structure of curvature type III.
{\bf Type III, ${\mathbf{n=2}}$.} This is the exceptional case, for which the method of Section \ref{S2} does not give even an abstract model. However the abstract bound $\dim\le8$ is sharp. We provide the local model $(M^4,J,[\nabla])$ in real coordinates $(x,y,p,q)$. The almost complex structure in the basis $e_1=\p_x$, $e_2=\p_y$, $e_3=\p_p$, $e_4=\p_q-\frac{3y}{2p}\,\p_x-\frac{5x}{2p}\,\p_y$ is given by:
$$ Je_1=e_2,\ Je_2=-e_1,\ Je_3=e_4,\ Je_4=-e_3.
$$ In the dual co-basis $\theta_1=dx+\frac{3y}{2p}\,dq$, $\theta_2=dy+\frac{5x}{2p}\,dq$, $\theta_3=dp$, $\theta_4=dq$ the minimal complex connection is given by:
\begin{alignat*}{2} \nabla e_1=\frac1{2p}\,e_2\ot\theta_4,\quad & \nabla e_3=-\frac1{p}\,(e_1\ot\theta_1-e_2\ot\theta_2+e_3\ot\theta_3+e_4\ot\theta_4)\\
& \ -\frac1{4p^2}\,\bigl(e_1\ot(3x\theta_3+3y\theta_4)+e_2\ot(3y\theta_3+13x\theta_4)\bigr)
\end{alignat*} (these relations are enough since $\nabla Je_k=J\nabla e_k$).
The torsion $T_\nabla$ is non-zero and represents $\kappa_{\op{III}}$. The curvature $R_\nabla$, however, is also non-zero and has type $(1,1)$. One could suspect that this yields a harmonic curvature of type II ($\kappa_{\op{I}}=0$ because the (2,0)-part of $R_\nabla$ vanishes, and also because otherwise the dimension of $\mathfrak{cp}(\nabla,J)$ would be bounded by 6), but the structure equations show $\kappa_{\op{II}}=0$ (we defer the details of this computation to the appendix).
Thus the above pair $(J,[\nabla])$ has harmonic curvature of pure type III. And its symmetry algebra has dimension 8, here are the generators:
\begin{gather*} x\p_x+y\p_y,\ p^{-3/2}\p_y,\ p\p_p+q\p_q,\ \p_q,\\ p\,(y\p_x-x\p_y)-2pq\,\p_p+(p^2-q^2)\p_q,\ \frac{p^2+q^2}{p^{3/2}}(p\p_x-q\p_y),\\ \frac{p^2+q^2}{2p^{3/2}}\p_y+\frac{q}{p^{3/2}}(q\p_y-p\p_x),\ p^{-3/2}(q\p_y-\tfrac13p\p_x)
\end{gather*} (notice that all the symmetries are actually affine).
This finishes realization (by models) of the universal bounds on submaximal symmetry dimensions.
\begin{rk}\label{trans} As a consequence of realization and the results of \cite{KT}, the equality $\fU=\fS$ yields local transitivity (around any regular point) for the symmetry algebra of any c-projective structure with submaximal symmetry (this also applies to general c-projective structures considered in the next section), as well as for any c-projective structure of fixed curvature type with the submaximal symmetry dimension $\fS_i$.
\end{rk}
\section{C-projective structures: the general case.}\label{S4}
In this section we encode general (not necessarily minimal) c-projective structures as real (not necessarily normal) parabolic geometries of type $\op{SL}(n+1,\C)_\R/P$.
Assume at first that $\pi:\mathcal{G}\to M$ is a principal $P$-bundle with a Cartan connection $\omega=\omega_{-1}+\omega_0+\omega_1\in\Omega^1(\mathcal{G},\g)$. The covariant derivative can be read off the Cartan connection of any $G_0$-reduction of this bundle (Weyl structure) by the following formula, cf. \cite[Proposition 1.3.4]{CS}:
$$ \nabla_XY=d\pi\circ\omega_{-1}^{-1}\bigl(\tilde X\cdot\omega_{-1}(\tilde Y) -\omega_0(\tilde X)(\omega_{-1}(\tilde Y))\bigr),
$$ where $\tilde X,\tilde Y$ are arbitrary lifts of $X,Y\in\mathcal{D}(M)$ to vector fields on $\mathcal{G}$ (independence of the lift for $Y$ is obvious, for $X$ follows from the equivariance of $\omega$; one also checks independence of the point $a\in\pi^{-1}(x)$).
Since the first frame bundle reduction, driven by the almost complex structure $J$, forces $\omega_{-1}$ to be a complex isomorphism between $(T_xM,J)$ and $(\C^n,i)$ and since $\omega_0$ takes values in $\mathfrak{gl}(n,\C)$, we conclude that $\nabla_XJY=J\nabla_XY$, i.e.\ $\nabla J=0$. Thus parabolic geometries encode c-projective geometries with classes of complex connections $\nabla$ only.
Moreover, a choice of connection $\nabla$ in a c-projective class with fixed (normalized) torsion corresponds to a Weyl structure of $(\mathcal{G},\omega)$ and a change of this gives a c-projective change of $\nabla$. Thus we have to show only that a c-projective class of a complex connection $\nabla$ with a fixed torsion can be represented as a parabolic geometry.
To begin with let us see how we can modify the torsion keeping the class of $J$-planar curves fixed. By \cite[Appendix A]{K$_1$} a (2,1)-tensor decomposes into $J$-linear/antilinear components as follows
$$ T_\nabla=T_\nabla^{++}+T_\nabla^{+-}+T_\nabla^{-+}+T_\nabla^{--},
$$ where $T^{\epsilon_1,\epsilon_2}_\nabla(J^{k_1}X,J^{k_2}Y)=\epsilon_1^{k_1}\epsilon_2^{k_2} J^{k_1+k_2}T^{\epsilon_1,\epsilon_2}_\nabla(X,Y)$.
The component $T^{++}$ is killed similarly to the real case: $\nabla\simeq\nabla-\frac12T^{++}_\nabla$. The component $T^{--}_\nabla=\frac14N_J$ is invariant \cite{Lic}. Next, $T_\nabla^{+-}=-T_\nabla^{-+}\circ\tau$, where $\tau:\Lambda^2V^*\ot V\to\Lambda^2V^*\ot V$ ($V=TM$) is the swap of the first two arguments, so it is enough to treat $T_\nabla^{-+}$. Again by \cite{K$_1$} the gauge is
$$ T_\nabla^{-+}\mapsto T_\nabla^{-+} - A,
$$ where $A\in\Lambda^2V^*\ot V$ is antilinear-linear $(2,1)$-tensor, which means $A(JX,Y)=-A(X,JY)=-JA(X,Y)$ $\forall X,Y\in V$.
\begin{lem}\label{L1} An antilinear-linear $(2,1)$-tensor $A$ satisfying the property $A(X,X)\in\C\cdot X$ has the following form for some 1-form $\vp$:
$$ A(X,Y)=\vp(X)Y+\vp(JX)JY.
$$
\end{lem}
\begin{proof} By polarization $A(X,X)\in\C\cdot X$ implies
$$ A(X,Y)+A(Y,X)\in\C\cdot\langle X,Y\rangle.
$$ Substitution $Y\mapsto JY$ and use of $\C$-linearity/antilinearity (and cancelation of $J$) yields $A(X,Y)-A(Y,X)\in\C\cdot\langle X,Y\rangle$. Hence $A(X,Y)\in\C\cdot\langle X,Y\rangle$, i.e. $A(X,Y)=\Phi(X)Y+\Psi(Y)X$ for some $\C$-valued 1-forms $\Phi,\Psi$. Antilinearity/linearity by $X,Y$ resp.\ yields $\Psi(Y)=0$, and $\Phi(X)=\vp(X)+i\,\vp(JX)$.
\end{proof}
Denote the space of $A$-tensors from Lemma \ref{L1} by $\mathbb{T}^{-+}_\text{trace}$. This is a submodule of the $\mathbb{T}^{-+}$ part in the decomposition of the module of torsion tensors $\mathbb{T}=\Lambda^2V^*\ot V$ into irreducible $\op{GL}(n,\C)$-submodules (the part $\mathbb{T}^{+-}\simeq\mathbb{T}^{-+}$ appears with its swap, so only one of them enters the decomposition):
$$ \mathbb{T}= \mathbb{T}^{++}_\text{trace}\oplus\mathbb{T}^{++}_\text{traceless} \oplus\mathbb{T}^{--}\oplus\mathbb{T}^{-+}_\text{traceless}\oplus\mathbb{T}^{-+}_\text{trace}.
By traceless we mean that the endomorphism obtained by filling one argument of the $(2,1)$-tensor is traceless (the trace part being the invariant complement). In the complexification this decomposition reads\footnote{We abbreviate $\Lambda^{p,q}=\Lambda^{p,q}V^*$ and similarly for $S^{p,q}$ throughout the text.}
\begin{multline*} \mathbb{T}^\C=(\La^{2,0}\ot V_{1,0}+\op{Cc})_\text{trace} \oplus(\La^{2,0}\ot V_{1,0}+\op{Cc})_\text{traceless}\oplus\\ (\La^{0,2}\ot V_{1,0}+\op{Cc}) \oplus(\La^{1,1}\ot V_{1,0}+\op{Cc})_\text{traceless} \oplus(\La^{1,1}\ot V_{1,0}+\op{Cc})_\text{trace},
\end{multline*} where $\op{Cc}$ stays for complex-conjugate as before.
Let us denote the projection to part $k$ in the above decomposition by $\pi_k$. From the discussion above we can kill $\pi_1(T_\nabla)$, $\pi_2(T_\nabla)$, $\pi_5(T_\nabla)$ by a choice of representative within the c-projective class of $\nabla$, but the components $\pi_3(T_\nabla)=\kappa_\text{III}$ and $\pi_4(T_\nabla)$ are invariant.
\begin{cor}\label{c3} The c-projective class $[\nabla]$ contains a minimal $J$-complex connection $\nabla$ iff $\pi_4(T_\nabla)=0$. \qed
\end{cor}
The remaining freedom in choosing $\nabla$ is given by the standard formula:
\begin{lem}\label{L2} Two $J$-complex connections $\nabla,\bar\nabla$ with vanishing parts 1,2,5 of the torsion are c-projectively equivalent iff
$$ \bar\nabla_XY=\nabla_XY+\Psi(X)Y+\Psi(Y)X-\Psi(JX)JY-\Psi(JY)JX
$$ for some 1-form $\Psi\in\Omega^1(M)$ (notice that $T_\nabla=T_{\bar\nabla}$).
\end{lem}
\begin{proof} Indeed, the tensor $A=\bar\nabla-\nabla$ satisfies $A^{-}=0$ and $A^{+}=A^{+}\circ\tau$ in terms of \cite{K$_1$}.
\end{proof}
Let us denote $\varrho=\pi_1+\pi_2+\pi_5$, so that the assumption of the Lemma is $\varrho(T_\nabla)=0$. Since this tensorial projection is applicable to the lowest part of the curvature $\kappa$ of the Cartan connection $\omega$, viewed as a $P$-equivariant function, we rewrite the equality as $\varrho(\kappa_1)=0$, where $\kappa_i$ is the part of the curvature $\kappa$ of $\g_0$-homogeneity $i$.
Recall \cite{CS} that the Kostant codifferential $\p^*:\Lambda^i\g_-^*\ot\g\to\Lambda^{i-1}\g_-^*\ot\g$ is adjoint to the Lie algebra differential $\p:\Lambda^i\mathfrak{g}_-^*\ot\g\to\Lambda^{i+1}\mathfrak{g}_-^*\ot\g$.
Consider the curvature module $\mathbb{U}=\Lambda^2\mathfrak{g}_-^*\ot\g$ over $\g_0=\mathfrak{gl}(n,\C)$, and decompose it into the graded parts $\mathbb{U}=\mathbb{U}_1\oplus\mathbb{U}_2\oplus\mathbb{U}_3$. For $\mathbb{U}_2$ denote the sum of real $\g_0$-irreps by $\mathbb{U}_2^r$ and the sum of complex $\g_0$-irreps by $\mathbb{U}_2^c$ (there are no quaternionic parts).
Let us define $\mathfrak{C}_1=\op{Ker}(\varrho)\subset\mathbb{U}_1$, $\mathfrak{C}_2=(\op{Ker}(\p^*)\cap\mathbb{U}_2^c)\oplus(\mathfrak{p}_+\cdot\mathfrak{C}_1\cap\mathbb{U}_2^r)$, and also $\mathfrak{C}_3=\mathbb{U}_3$. Then $\mathfrak{C}=\mathfrak{C}_1\oplus\mathfrak{C}_2\oplus\mathfrak{C}_3$ \ is a $G_0$-submodule of $\mathbb{U}$.
\begin{prop}\label{P3} The submodule $\mathfrak{C}$ is $P$-invariant.
\end{prop}
\begin{proof} It suffices to check $\mathfrak{p}_+$-invariance. Let us decompose the graded parts of $\mathbb{U}=\mathbb{U}_1\oplus\mathbb{U}_2\oplus\mathbb{U}_3$ into $\g_0$-irreducibles. For $\mathbb{U}_1=\mathbb{T}^\C$ this was done before Corollary \ref{c3}, whence
$$ \mathfrak{C}_1=\mathbb{T}^{--}\oplus\mathbb{T}^{-+}_\text{traceless}
$$ with the complexification\footnote{Recall that $\odot$ denotes the Cartan product (the same as previous "traceless").}
$$ \mathfrak{C}_1^\C=(\Lambda^{0,2}\ot V_{1,0})\oplus(\Lambda^{1,1}\odot V_{1,0})+\op{Cc}.
$$ Next we decompose $\mathbb{U}_2$ into $\g_0$-irreducibles. For $n\ge4$ we have:
\com{ In complexification
$$ (\Lambda^2\g_-^*\otimes\g_0)^\C= (\Lambda^{2,0}+\Lambda^{1,1}+\Lambda^{0,2})\otimes(\Lambda^{1,0}\otimes V_{1,0}) + \operatorname{Cc}
$$ and we compute for $n\ge4$:
\begin{align*}
\Lambda^{2,0} \otimes \Lambda^{1,0} \otimes V_{1,0} &= ( \Lambda^{2,0} \odot \Lambda^{1,0} \odot V_{1,0}) \oplus 2 \Lambda^{2,0} \oplus S^{2,0}\\ &\hspace{112pt} \oplus (\Lambda^{3,0} \odot V_{1,0}) + \operatorname{Cc},\\
\Lambda^{1,1} \otimes \Lambda^{1,0} \otimes V_{1,0} &= (\Lambda^{2,1} \odot V_{1,0}) \oplus (S^{2,1} \odot V_{1,0}) \oplus 2\Lambda^{1,1}+ \operatorname{Cc},\\
\Lambda^{0,2} \otimes \Lambda^{1,0} \otimes V_{1,0} &= (\Lambda^{1,2} \odot V_{1,0}) \oplus \Lambda^{0,2} + \operatorname{Cc}.
\end{align*} Hence, we get
\begin{align*} (\Lambda^2\g_-^*\otimes\g_0)^\C&=[(\Lambda^{2,0}\odot\Lambda^{1,0}\odot V_{1,0})\oplus 3\,\Lambda^{2,0}\oplus S^{2,0}\oplus(\Lambda^{3,0}\odot V_{1,0}) \\ &\hspace{-20pt}\oplus(\Lambda^{2,1}\odot V_{1,0})\oplus (S^{2,1}\odot V_{1,0})\oplus (\Lambda^{1,2}\odot V_{1,0}) + \operatorname{Cc}] \oplus 4\,\Lambda^{1,1}.
\end{align*}
}
\begin{multline*} \!\!(\Lambda^2\g_-^*\ot\g_0)^\C= (\Lambda^{2,0}\oplus\Lambda^{1,1}\oplus\Lambda^{0,2})\otimes(\Lambda^{1,0}\odot V_{1,0}+\C) + \op{Cc}\\ =[(\Lambda^{2,0}\odot\Lambda^{1,0}\odot V_{1,0})\oplus 3\,\Lambda^{2,0}\oplus S^{2,0}\oplus(\Lambda^{3,0}\odot V_{1,0})\oplus \\ (\Lambda^{2,1}\odot V_{1,0})\oplus (S^{2,1}\odot V_{1,0})\oplus (\Lambda^{1,2}\odot V_{1,0}) + \op{Cc}] \oplus 4\,\Lambda^{1,1}.
\end{multline*} Every term in $[\dots]$ together with its complex conjugate gives a real irreducible submodule ($\Lambda^{1,1}$ is already real irreducible) that we denote successively (not counting multiplicity) by $\mathbb{K}_1,\dots,\mathbb{K}_8$, and we get:
$$ \mathbb{U}_2= \underbrace{\mathbb{K}_1\oplus3\mathbb{K}_2\oplus\mathbb{K}_3\oplus\mathbb{K}_4\oplus\mathbb{K}_5 \oplus\mathbb{K}_6\oplus\mathbb{K}_7}_{\mathbb{U}_2^c} \oplus \underbrace{4\mathbb{K}_8}_{\mathbb{U}_2^r}.
$$ We will not need the decomposition of $\mathbb{U}_3$.
The module $\mathbb{U}_2$ is mapped by $\p^*$ onto the module
$$ (\Lambda^1\g_-^*\ot\g_1)^\C= [S^{2,0}\oplus\Lambda^{2,0} +\op{Cc}] \oplus 2\,\Lambda^{1,1}= (\mathbb{K}_2\oplus\mathbb{K}_3\oplus2\mathbb{K}_8)^\C.
$$ Consequently we get the following decomposition into $\g_0$-irreps:
\begin{equation}\label{p*2} \op{Ker}(\p^*)\cap\mathbb{U}_2= \mathbb{K}_1\oplus2\mathbb{K}_2\oplus\mathbb{K}_4\oplus\mathbb{K}_5\oplus\mathbb{K}_6 \oplus\mathbb{K}_7\oplus2\mathbb{K}_8.
\end{equation} The action of $\mathfrak{p}_+$ is trivial on the first factor of $\La^2\g_-^*\ot\g$, and so the restricted action on $\mathfrak{C}$ maps $\mathbb{T}^{--}$ to $\mathbb{K}_2\oplus\mathbb{K}_7$ -- notice that this $\mathbb{K}_2$ belongs to $\op{Ker}(\p^*)$ by $\mathfrak{p}_+$-equivariance of $\p^*$, so it is one of the terms in \eqref{p*2}. Also, $\mathfrak{p}_+$ maps $\mathbb{T}^{-+}_\text{traceless}$ to $\mathbb{K}_5\oplus\mathbb{K}_6\oplus2\mathbb{K}_8$, but the latter (double) term differs from the similarly named terms in \eqref{p*2}. To distinguish these denote $2\tilde{\mathbb{K}_8}=(\mathfrak{p}_+\cdot\mathfrak{C}_1)\cap4\mathbb{K}_8\subset\mathbb{U}_2$ ($\op{Ker}(\p^*)\cap2\tilde{\mathbb{K}_8}=0$). Then
\begin{multline*} \mathfrak{C}_2= (\op{Ker}(\p^*)\cap(\mathbb{K}_1\oplus3\mathbb{K}_2\oplus\dots\oplus\mathbb{K}_7))\oplus ((\mathfrak{p}_+\cdot\mathfrak{C}_1)\cap4\mathbb{K}_8)\\ =\mathbb{K}_1\oplus2\mathbb{K}_2\oplus\mathbb{K}_4\oplus\mathbb{K}_5\oplus\mathbb{K}_6 \oplus\mathbb{K}_7\oplus2\tilde{\mathbb{K}_8}.
\end{multline*} Now $\mathfrak{p}_+$ maps $\mathfrak{C}_1$ to $\mathfrak{C}_2$, and obviously it maps the latter to $\mathfrak{C}_3$. Therefore we conclude invariance with respect to $P=G_0\ltimes\mathfrak{p}_+$ for $n\ge4$.
For $n=3$ we have $\mathbb{K}_2=\mathbb{K}_4$ as $A_2$-modules (ignoring $\mathfrak{z}(\g_0)$). So only one multiplicity changes in the decomposition of $\mathfrak{C}_2$. For $n=2$ more terms in the above decompositions change/disappear, but the arguments persist and the conclusion is not altered.
\end{proof}
\begin{rk} A normalization different from the standard $\p^*\kappa=0$ was used previously by D.Fox in \cite{F} (our normalization differs from his).
\end{rk}
\begin{lem}\label{LtrC2} The subspace $\mathfrak{C}_2$ is complementary to $\op{Im}(\p)\subset\Lambda^2\mathfrak{g}_-^*\ot\g_0$.
\end{lem}
\begin{proof} Since $\op{Ker}(\p^*)\cap\mathbb{U}_2$ is complementary to $\op{Im}(\p)\cap\mathbb{U}_2$, it is enough to show that $(2\tilde{\mathbb{K}_8})\cap\op{Im}(\p)=0$. For this, because $\p^2=0$, it is enough to show that the map $\p:2\tilde{\mathbb{K}_8}\to\Lambda^3\mathfrak{g}_-^*\ot\g_{-}$ is injective.
An element $\z$ of this module has the form $\vp_{1,0}\ot\vp_{0,1}\ot(a\epsilon^{1,0}+b\epsilon^{0,1})$, where $\vp_{1,0}\in\La^{1,0}$ is some element (assume nonzero), $\epsilon^{1,0}\in\La^{1,0}\ot V_{1,0}$ is the identity, and similar for $\vp_{0,1},\epsilon^{0,1}$, while $a,b$ are some numbers.
For vectors $u^{1,0},v^{1,0},w^{0,1}\in V^\C=V_{1,0}\oplus V_{0,1}$ of the indicated type
$$ \p\z(u^{1,0},v^{1,0},w^{0,1})= a\cdot (\vp_{1,0}(v^{1,0})u^{1,0}-\vp_{1,0}(u^{1,0})v^{1,0})\,\vp_{0,1}(w^{0,1}).
$$ If this is zero for all choices $u^{1,0},v^{1,0},w^{0,1}$, then $a=0$. Similarly, substituting $u^{1,0},v^{0,1},w^{0,1}$ we obtain $b=0$.
\end{proof}
Now comes the main result of this section (which also solves the equivalence problem for general c-projective structures).
\begin{theorem} There is an equivalence of categories between c-projective structures $(M,J,[\nabla])$ and parabolic geometries of type $\op{SL}(n+1,\C)_\R/P$ with the curvature normalized by the ($P$-invariant) condition $\kappa\in\mathfrak{C}$.
\end{theorem}
\begin{proof} Given a pair $(J,[\nabla])$ we first consider the reduction $\mathcal{G}_0$ of the first frame bundle $\mathcal{F}_M$ corresponding to the choice of $J$. Next we construct the full frame bundle
$\mathcal{G}=\cup_{u\in\mathcal{G}_0}\mathcal{G}_u$, where
$$ \mathcal{G}_u=\{\theta(u)+\gamma^\nabla(u):\nabla\in[\nabla], \pi_1(T_\nabla)=\pi_2(T_\nabla)=\pi_5(T_\nabla)=0\}
$$ and $\theta=\omega_{-1}\in\Omega^1(\mathcal{G}_0,\g_{-1})$ is the soldering form, $\gamma^\nabla=\omega_0\in\Omega^1(\mathcal{G}_0,\g_0)$ is the principal connection corresponding to $\nabla$. The topology and the manifold structure on $\mathcal{G}$ are induced naturally (through the Weyl structures \cite{CS} corresponding to Weyl connections $\nabla$).
By construction, $\mathcal{G}$ is equipped with $G_0$-equivariant 1-form $\omega_{-1}+\omega_0:T_u\mathcal{G}\to\g_{-1}\oplus\g_0$. We extend it in a $P$-equivariant way to a Cartan connection $\omega\in\Omega^1(\mathcal{G},\g)$, however we have to fix the normalization. As usual this is done by the curvature function $\kappa:\mathcal{G}\to\Lambda^2\g_{-}^*\ot\g$ corresponding to the curvature form $K=d\omega+\frac12[\omega,\omega]$. Notice that grading 1 part $\kappa_1$ of the curvature, corresponding to $d\omega_{-1}(\xi,\eta)+[\omega_{-1}(\xi),\omega_0(\eta)]+[\omega_0(\xi),\omega_{-1}(\eta)]$, does not involve $\omega_1$, while the grading 2 part $\kappa_2$, corresponding to $d\omega_0(\xi,\eta)+[\omega_{-1}(\xi),\omega_1(\eta)]+[\omega_0(\xi),\omega_0(\eta)] +[\omega_1(\xi),\omega_{-1}(\eta)]$, is affine in $\omega_1$. We can use this fact and Lemma \ref{LtrC2} to construct $\omega_1$ by the condition $\kappa_2\in\mathfrak{C}_2$.
Indeed, the Kostant co-differential is the left-inverse of the Lie algebra cohomology differential $\p$ (= Spencer differential $\delta$) at the indicated place in the complex
$$ 0\to\g_1\ot\g_{-1}^*\stackrel{\p}\longrightarrow\g_0\ot\La^2\g_{-1}^*\to\g_{-1}\ot\La^3\g_{-1}^*\to0.
$$ The space $\op{Ker}(\p^*)$ is complementary to $\op{Im}(\p)\subset\g_0\ot\La^2\g_{-1}^*\ni\kappa_2$ and since a change of (Weyl) connection is equivalent to a change of $\kappa_2$ by $\p\psi$, where $\psi\in\g_1\ot\g_{-1}^*$, we can achieve $\kappa_2\in\op{Ker}(\p^*)$ precisely as in the normal case, and $\omega_1$ is (canonically) fixed.
This construction of $\mathcal{G}$ and $\omega=\omega_{-1}+\omega_0+\omega_1$ is clearly functorial implying the equivalence claim to one side.
To the other side, if we have a Cartan geometry $(\mathcal{G},\omega)$ of the type $\op{SL}(n+1,\C)_\R/P$, then we read off $J$ from $\mathcal{G}_0$ and sections of $\mathcal{G}\to\mathcal{G}_0$ determine the class of connections $\nabla$. A change of such section is equivalent to a c-projective change of connection as in Lemma \ref{L2}.
\end{proof}
\begin{rk} It was noticed in \cite{H} that normality of the Cartan connection $\omega$ implies minimality of $\nabla$, i.e.\ $T_\nabla=T_\nabla^{--}=\frac14N_J$ or equivalently $T_\nabla^{++}=0$, $T_\nabla^{-+}=0$. On the other hand, for c-projective structures $(J,[\nabla])$ with minimal $\nabla$ references \cite{Y,CEMN} provide construction of the normal Cartan connection $\omega\in\Omega^1(\mathcal{G},\g)$. Thus the above equivalence of categories restricts to equivalence of (sub-)categories between c-projective structures $(J,[\nabla])$ with minimal $\nabla$ and normal parabolic geometries of type $\op{SL}(n+1,\C)_\R/P$.
\end{rk}
\section{The general submaximal symmetry dimension.}\label{S5}
Here we derive the submaximal symmetry dimension for general c-projective structures. In this case, we additionally have the invariant part of the torsion that we denote by $\kappa_\text{IV}=\pi_4(T_\nabla)\in\mathbb{T}^{-+}_\text{traceless}$. This is the obstruction for c-projective geometry to be normal/minimal.
Flatness, i.e.\ local isomorphism to $(\C P^n,J_\text{can},[\nabla^\text{FS}])$, is characterized by: $\kappa_\text{I}=0,$ $\kappa_\text{II}=0$, $\kappa_\text{III}=0$, $\kappa_\text{IV}=0$. This system is $P$-invariant, as follows from the proof of Proposition \ref{P3} (but its second term $\kappa_\text{II}$ is no longer $P$-invariant: under $\mathfrak{p}_+$ action it changes by a derivative of $\kappa_\text{IV}$).
The method developed in \cite{KT} extends to this situation and we shall show that the bound on submaximal symmetry dimension persists.
\begin{Proof}{Proof of Theorem \ref{Thm1}} If $\kappa_\text{IV}=0$, then the connection is minimal and the estimate from above on the submaximal symmetry dimension $\mathfrak{S}$ follows from Section \ref{S2}:
$$
\mathfrak{S}\leq\mathfrak{U}:=\max\{\dim(\fa^\phi)| 0\neq\phi\in\mathbb{V}_\text{I}\oplus\mathbb{V}_\text{II}\oplus\mathbb{V}_\text{III}\}.
$$
Assume now that $\kappa_\text{IV}$ is a non-zero element in $\mathbb{T}^{-+}_\text{traceless}$. In this proof this module will be considered as a completely reducible $P$-submodule of $\La^2(\g/\mathfrak{p})^*\otimes(\g/\mathfrak{p})=\mathbb{U}/(\mathbb{U}_2\oplus\mathbb{U}_3)$, so that the $P$-action reduces to the $G_0$-action ($\mathfrak{p}_+$ acts trivially).
Let us notice that the normality condition is not crucial for the universal upper bound on the symmetry dimension in \cite[Section 2.4]{KT}. The essential step is the reduction of $\g_0$ to the annihilator of the (harmonic) curvature and its Tanaka prolongation, so it can be generalized (see also \cite[Theorem~2]{K$_5$}). Thus, replacing the harmonic curvature with $\kappa_\text{IV}$, this leads to the submaximal symmetry dimension $\hat{\mathfrak{S}}$ for general c-projective structures:
$$
\hat{\mathfrak{S}}\leq\hat{\mathfrak{U}}:=\max\bigl\{\mathfrak{U},\max\{\dim(\fa^\phi)| 0\neq\phi\in\mathbb{T}^{-+}_\text{traceless}\}\bigr\}.
$$
The complexification of the module $\mathbb{T}^{-+}_\text{traceless}$ is $\mathbb{W}\oplus\overline{\mathbb{W}}$, where the lowest weight vector $\phi_0\in\mathbb{W}$ is $e_{\a_1}\we e_{\overline{\a}_1}\ot e_{-\a_1-\dots-\a_n}$. The annihilator of $\phi_0+\overline{\phi_0}$ in $\g_0$ is equal to
\[
\mathfrak{a}_0=\left(\begin{array}{c|cccccc}
a_0 & 0 & 0 & 0 & \cdots & 0 & 0\\ \hline
0 & a_1 & 0 & 0 & \cdots & 0 & 0\\
0 & * & a_2 & * & \cdots & * & 0\\
0 & * & * & a_3 & \cdots & * & 0\\
\vdots & \vdots & \vdots & \vdots & \ddots & \vdots & 0\\
0 & * & * & * & \cdots & a_{n-1} & 0\\
0 & * & * & * & \cdots & * & a_n\\ \end{array} \right) \qquad\quad\begin{matrix}{}\\\\ a_0+\dots+a_n=0\\ {}\\ {}\\ a_1+\overline{a}_1=\overline{a}_0+a_n {}\\ {}\\ {}\end{matrix}
\] and so $\dim\mathfrak{a}_0=2(n-1)^2+2$.
In Section \ref{S2}, Propositions \ref{P:lw-vec} and \ref{P:PR} can be applied to $\mathbb{T}^{-+}_\text{traceless}$, i.e.\ the normality condition is not essential. Hence, we have the prolongation rigidity phenomenon, and $\mathfrak{a}^{\phi_0}=\g_-\oplus\mathfrak{a}_0$.
Thus the symmetry dimension does not exceed $\dim\mathfrak{a}^{\phi_0}=\dim\g_-+\dim\mathfrak{a}_0=2n+2(n-1)^2+2=2n^2-2n+4$. Since this does not exceed the maximal bound $\mathfrak{U}$ of the three pure curvature types in Theorem \ref{Thm2}, the conclusion of Theorem \ref{Thm1} follows. \qed
\end{Proof}
If we assume $N_J=0$, and so eliminate the spontaneous growth of submaximal dimension for $n=3$ (with the winning type III), then the submaximal dimension is $2n^2-2n+4$ for all $n\ge2$. This bound persists even in the non-minimal case:
\begin{prop} For the general c-projective structure with $\kappa_\textrm{IV}\not\equiv0$ the symmetry dimension does not exceed $2n^2-2n+4$. This bound is sharp.
\end{prop}
\begin{proof} The upper bound follows from the above proof, so we just need to prove realizability, i.e.\ to construct {\it a model\/}.
We take $J=i$ and use the complex notations. Take $\Gamma^2_{\bar11}=\Gamma^{\bar2}_{1\bar1}=1$ and all other Christoffel symbols zero (in particular, $\Gamma^2_{1\bar1}=\Gamma^{\bar2}_{\bar11}=0$).
Its torsion $T_\nabla=dz^1\we d\overline{z^1}\ot(\p_{\overline{z^2}}-\p_{z^2})$ has only $\mathbb{T}^{-+}_\text{traceless}$-component non-zero and its curvature $R_\nabla$ vanishes.
The c-projective symmetries are the real and imaginary parts of the following (linearly independent) complex-valued vector fields:
$$ \p_{z^i},\quad z^i\p_{z^j}\ (i\ne2,j\ne1),\quad z^1\p_{z^1}+z^2\p_{z^2}+\overline{z^2}\p_{\overline{z^2}}.
$$ The totality of these is $2n^2-2n+4$ as required.
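Indeed, to verify the count: there are $n$ fields $\p_{z^i}$, $(n-1)^2$ fields $z^i\p_{z^j}$ (as $i\ne2$ and $j\ne1$ each run over $n-1$ values) and one more field, that is
$$ n+(n-1)^2+1=n^2-n+2
$$ complex fields, whose real and imaginary parts give $2(n^2-n+2)=2n^2-2n+4$ real symmetries.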
\end{proof}
\section{C-projective structures: the metric case.}\label{S6}
The goal of this and the next section is to prove Theorem \ref{Thm3}. In this section we recall the necessary background on metric c-projective structures and derive a useful estimate involving the degree of mobility; then in the next we give the proof and further specifications.
Two pseudo-K\"ahler metrics $g$ and $\tilde g$ underlying the same complex structure $J$ are called c-projectively equivalent if their Levi-Civita connections $\nabla=\nabla^g,\tilde\nabla=\nabla^{\ti g}$ are.
This can be expressed \cite{DM,MS} through the $(1,1)$-tensor
$$
A={\tilde g}^{-1}g\cdot\left|\frac{\det(\tilde g)}{\det(g)}\right|^{1/2(n+1)}:TM\to TM,
$$ where ${\tilde g}^{-1}$ is the inverse of ${\tilde g}$ ($\tilde g^{ik}\tilde g_{kj}=\delta^i_j$), and ${\tilde g}^{-1}g$ is the contraction (${\tilde g}^{ik}g_{kj}$): The metrics $g$ and $\tilde g$ are c-projectively equivalent iff
\begin{equation*} (\nabla_XA)Y=g(X,Y)v_A+Y(\t_A)X+\oo(X,Y)Jv_A-JY(\t_A)JX,
\end{equation*} where $\oo(X,Y)=g(JX,Y)$, $\t_A=\frac14\op{tr}(A)$, $v_A=\op{grad}_g\t_A$.
\com{ We can also write it so
\begin{equation}\label{Me2} (\nabla_Z\hat A)(X,Y)=g(X,Z)\l_A(Y)+g(Y,Z)\l_A(X)+\oo(X,Z)\l_A(JY)+\oo(Y,Z)\l_A(JX),
\end{equation} where $\hat A(X,Y)=g(AX,Y)$, and $\l_A=\frac14d\op{tr}(A)$ ..
} In argument-free form\footnote{The same equation serves as a definition of a Hamiltonian 2-form $\omega(X,AY)$ corresponding to the endomorphism $A$, which has attracted recent interest \cite{ACG}. }
this can be written (using the symmetrization over the last two arguments) as:
\begin{equation}\label{eqA} \nabla\hat A=2\op{Sym}_{2,3}\bigl(g\ot\l_A-\oo\ot J^*\l_A\bigr),
\end{equation} where $\hat A(X,Y)=g(AX,Y)$ and $\l_A=d\t_A$.
This linear overdetermined PDE system on the unknown $A$ has a finite-dimensional solution space denoted $\op{Sol}(g,J)$, and $\op{Id}\in\op{Sol}(g,J)$. {\it Degree of mobility\/} of the pair $(g,J)$ is defined as $D(g,J)=\dim\op{Sol}(g,J)$.
Let us denote by $\mathfrak{i}(g,J)$ the algebra of $J$-holomorphic infinitesimal isometries of $g$, by $\mathfrak{h}(g,J)$ the algebra of $J$-holomorphic vector fields that are homotheties for $g$. We will need the following estimate:
\begin{lem}\label{L3} For any pseudo-K\"ahler structure $(g,J)$ we have the inequality: $\dim\mathfrak{cp}(\nabla^g,J) \le\dim\mathfrak{h}(g,J)+D(g,J)-1$.
\end{lem}
This was discussed in \cite{MR$_2$}, but not formally stated. Though that paper was devoted only to K\"ahler metrics, the statement is true in general and the proof persists. Let us give a brief argument. The formula
$$ A=\phi(v)=g^{-1}\mathcal{L}_v(g)-\tfrac1{2(n+1)}\op{tr}(g^{-1}\mathcal{L}_v(g))\op{Id}
$$ by \cite{MR$_1$} defines the map
$$ \phi:\mathfrak{cp}(\nabla^g,J)\to\op{Sol}(g,J)
$$ and $\op{Ker}(\phi)=\mathfrak{i}(g,J)$. Moreover, if $\pi:\op{Sol}(g,J)\to\op{Sol}(g,J)/\R\cdot\op{Id}$ is the natural projection, then $\op{Ker}(\pi\circ\phi)=\mathfrak{h}(g,J)$. The claim follows from the rank theorem: $\dim\mathfrak{cp}(\nabla^g,J)=\dim\op{Ker}(\pi\circ\phi)+\op{rank}(\pi\circ\phi)\le\dim\mathfrak{h}(g,J)+D(g,J)-1$. \qed
\begin{cor} We have: $\dim\mathfrak{cp}(\nabla^g,J) \le \dim\mathfrak{i}(g,J)+D(g,J)$. \qed
\end{cor}
By \cite{DM,MS} the degree of mobility is bounded as follows:
$$ D(g,J)\le (n+1)^2,
$$ and the equality corresponds to spaces of constant holomorphic sectional curvature. The next biggest dimension, under the additional assumption that there is a projective non-affine symmetry \cite{MR$_2$,Mi}, is equal to
\begin{equation}\label{subMaxD} D_{\text{sub.max}}=(n-1)^2+1=n^2-2n+2.
\end{equation}
Another ingredient in our count is the estimate on the dimension of the isometry algebra of a K\"ahler structure. Clearly the maximal dimension is $\op{max}\bigl[\dim\mathfrak{i}(g,J)\bigr]=n^2+2n$.
\begin{prop}\label{P4} For a K\"ahler structure $g$ of non-constant holomorphic curvature we have: $\dim\mathfrak{i}(g,J)\le n^2+2$. The bound is sharp and attained, for example, for $M=\C\mathbb{P}^{n-1}\times\C\mathbb{P}^1$.
\end{prop}
\begin{proof} The isotropy algebra of the symmetry algebra is a proper subalgebra of $\mathfrak{u}(n)$, and so is reductive. All maximal proper subalgebras are $\mathfrak{u}(k)\oplus\mathfrak{u}(n-k)$, and it is clear that the maximal dimension is attained for $k=1$ or $k=n-1$. Since the Killing vector field is 1-jet determined, we conclude that the sub-maximal isometry dimension is $2n+(n-1)^2+1=\dim SU(n)+\dim SU(2)=n^2+2$.
\end{proof}
Now the required bound for $\dim\mathfrak{cp}(\nabla^g,J)$ in the K\"ahler case follows from the Proposition \ref{P4}, Lemma \ref{L3}, the well-known fact that $\mathfrak{h}(g,J)=\mathfrak{i}(g,J)$ if the isometry algebra acts with an open orbit\footnote{Indeed, if $\vp^*g=\lambda\cdot g$ for a homothety $\vp$ and $x$ is a fixed point with non-zero Riemann curvature tensor $R$, then equality
$\vp^*\|R\|^2=\lambda^{-2}\|R\|^2$ at $x$ implies $\lambda=1$.} and formula~(\ref{subMaxD}). However since the latter estimate has an additional assumption \cite{MR$_2$}, we will give in the next section another proof in the case there exists no essential projective symmetry (that is a non-affine symmetry for any choice of $g$ in the c-projective class).
\section{Submaximal metric c-projective structures}\label{S7}
By Corollary \ref{Cor} the algebra of c-projective symmetries of a K\"ahler metric is bounded in dimension by $2(n^2-2n+2)$. The next example shows that this bound is realizable by a metric c-projective structure. Indeed, consider the following pseudo-K\"ahler metric on $\C^n$ ($J=i$):
\begin{equation}\label{subMKh}
g=|z_1|^2\,dz_1\,d\bar{z_1}+dz_1\,d\bar{z_2}+d\bar{z_1}\,dz_2 +\sum_{k=3}^n\epsilon_k\,dz_k\,d\bar{z_k}
\end{equation} ($\epsilon_k=\pm1$). One easily checks that its Levi-Civita connection coincides with the connection $\nabla$ of type II given by formula (\ref{subCmax}), and so the sub-maximal complex (integrable $J$) projective structure is metrizable; in addition by varying the signs $\epsilon_k$ we can achieve any indefinite signature $(2p,2n-2p)$ for the pseudo-K\"ahler metric, $0<p<n$.
To finish the proof of Theorem \ref{Thm3} we have to show that no K\"ahler metric can have more than $2n^2-2n+3$ linearly independent c-projective symmetries unless it is c-projectively flat (i.e.\ has constant holomorphic sectional curvature). So we let $g$ be K\"ahler till the end of the proof. In the case there exists an essential c-projective symmetry (for at least one $g$ with $\nabla^g\in[\nabla]$) the claim follows from the estimates of Section~\ref{S6}.
Thus let us assume that for a K\"ahler metric $g$ the algebra of c-projective symmetries coincides with the algebra of (infinitesimal) symmetries of the pair $(\nabla^g,J)$: $\mathfrak{cp}(\nabla^g,J)=\mathfrak{aff}(g,J)$.
Fix a point $x\in M$ at which the curvature tensor $R$ does not vanish, and consider the holonomy algebra $\mathcal{H}_x$ of $\nabla^g$ at $x$. Since $\nabla^g$ preserves both $g$ and $J$, we have $\mathcal{H}_x\subset\mathfrak{u}(n)$. By Ambrose-Singer theorem $\mathcal{H}_x$ contains the endomorphisms $R(v\wedge w)$ for $v,w\in T_xM$, so $\mathcal{H}_x\neq0$.
By the (infinitesimal version of) de Rham decomposition theorem \cite{Ei}, we can split $T_xM=\oplus_{k=0}^m\Pi_k$, where $\Pi_0$ is the subspace of complex dimension $r_0<n$ where $R$ vanishes, and the other pieces are irreducible with respect to $\mathcal{H}_x$ (all $\Pi_k$ are $J$-invariant and so have even real dimensions $2r_k$). Any K\"ahler metric, which is equivalent to $g$ via a complex affine transformation, is obtained from it by a block-diagonal automorphism $\op{diag}(A_0,c_1,\dots,c_m)$, where $A_0\in\op{GL}(\Pi_0,J)$ and $c_k\neq0$ are constant multiples of the identity in the corresponding block.
Therefore the isotropy $\tilde{\mathfrak{a}}_0(x)$ of the complex affine symmetry algebra at $x$ consists of block-diagonal endomorphisms
$$ \op{diag}(\vp_0,\vp_1,\dots,\vp_m)\in\tilde{\mathfrak{a}}_0(x)\subset\mathfrak{aff}(g,J),
$$ where $\vp_0\in\mathfrak{gl}(\Pi_0,J)$ is a $\C$-linear matrix of complex size $r_0$ and $\vp_k\in\mathfrak{u}(\Pi_k,g,J)+\R\cdot\op{Id}$ is generated by a unitary transformation of the $k$-th block of complex size $r_k$ and the standard homothety (scaling of the metric $g$). Consequently we obtain
$$ \dim\tilde{\mathfrak{a}}_0(x)\leq 2r_0^2+\sum_{k=1}^m(r_k^2+1)\leq 2(n-1)^2+2,
$$ with equality iff the de Rham decomposition is $(n-1)\times1$ complex block and the smaller block is $\mathfrak{gl}(1,\C)=\mathfrak{u}(1)+\R$ (with a homothety). Next, the upper bound $2n+\dim\tilde{\mathfrak{a}}_0(x)$ on the dimension of the symmetry algebra $\mathfrak{aff}(g,J)$ is sharp only if the symmetry acts transitively. As recalled at the end of the last section, a Riemannian metric with an open orbit of the isometry group has no essential homotheties. Therefore we get the required estimate
$$ \dim\mathfrak{aff}(g,J)\leq 2n+ 2(n-1)^2+1=2n^2-2n+3.
$$ Due to the above arguments (and the fact that the homothety algebra of any non-flat connected 2-dimensional surface has $\dim\leq3$) it is now clear that this upper bound is achieved iff $(M,g)$ is $\C^{n-1}\times\Sigma^2$, where $\Sigma^2$ is the complex curve equipped with a $J$-compatible metric of constant curvature $K\neq0$. Theorem \ref{Thm3} is proved.
\begin{rk} Let us explain why the submaximal metric structure (\ref{subMKh}) is unique up to an isomorphism. We use transitivity of the symmetry algebra of the corresponding c-projective structure from Remark \ref{trans}.
Fix a point $o\in M$ ($J$ is also fixed). Then the metric $g$ and the curvature tensor $R_g$ are determined up to complex affine transformation on $T_oM$. In fact, there is an invariant null-complex line (and correspondingly the dual $\C$-line in the cotangent space) fixed by the isotropy, and if we fix them the isotropy $\mathfrak{a}_0$ determines $(g,R_g)$ at $o$ uniquely.
Now $(M,g)$ is a symmetric space and so is uniquely determined by the data $(g,R_g)$ at one point $o$.
\end{rk}
Let us now describe the structure of the symmetric space $M^4_0$ corresponding to the submaximal (with respect to c-projective transformations) metric $g$ of (\ref{subMKh}) (since the cases $n>2$ are obtained from this $M^4_0$ by direct product with $\C^{n-2}$, it suffices to study $n=2$ only). We have $M_0=G/H$ for some Lie groups $G\supset H$ because the symmetry acts transitively. There are three different presentations of $M_0$ as such a quotient.
First, we consider the Lie algebra of c-projective transformations $\mathfrak{s}$ with the isotropy $\mathfrak{a}_0$ of type II from Section \ref{S2}. This 8D algebra is solvable with the derived series of dimensions $(8,5,3,0)$. In addition, it has the $\Z_2$-grading: $\mathfrak{s}=\mathfrak{s}_{-}+\mathfrak{s}_{+}$, $[\mathfrak{s}_{\epsilon_1},\mathfrak{s}_{\epsilon_2}]\subset\mathfrak{s}_{\epsilon_1\epsilon_2}$. In a basis $\{e_i\}_{i=1}^4$ of $\mathfrak{s}_{+}=\mathfrak{a}_0$ and a basis $\{e_i\}_{i=5}^8$ of $\mathfrak{s}_{-}$ the structure equations read:
\begin{gather*} [e_1,e_3]=e_3,\ [e_1,e_4]=e_4,\ [e_1,e_5]=2e_5,\ [e_1,e_6]=3e_6,\ [e_1,e_7]=-e_7,\\ [e_2,e_5]=e_5,\ [e_2,e_6]=e_6,\ [e_2,e_7]=-e_7,\ [e_2,e_8]=-e_8,\\ [e_3,e_5]=e_6,\ [e_3,e_7]=e_8,\ [e_4,e_5]=e_6,\ [e_4,e_7]=-e_8,\ [e_5,e_7]=e_3.
\end{gather*} Thus $M_0=G^8_c/H^4_c$, where $G^8_c=\op{exp}(\mathfrak{s})$, $H^4_c=\op{exp}(\mathfrak{s}_{+})$.
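These structure constants are easy to verify by machine. The following short script (a minimal sketch in Python, independent of the Maple computations mentioned in the appendices; brackets not listed above are assumed to vanish, with antisymmetry implied) checks the Jacobi identity numerically on all basis triples:
\begin{verbatim}
import numpy as np

dim = 8
C = np.zeros((dim, dim, dim))   # [e_i, e_j] = sum_k C[i,j,k] e_k

def set_bracket(i, j, coeffs):  # 1-based indices, as in the text
    for k, v in coeffs.items():
        C[i-1, j-1, k-1] = v
        C[j-1, i-1, k-1] = -v   # antisymmetry

set_bracket(1, 3, {3: 1}); set_bracket(1, 4, {4: 1})
set_bracket(1, 5, {5: 2}); set_bracket(1, 6, {6: 3})
set_bracket(1, 7, {7: -1}); set_bracket(2, 5, {5: 1})
set_bracket(2, 6, {6: 1}); set_bracket(2, 7, {7: -1})
set_bracket(2, 8, {8: -1}); set_bracket(3, 5, {6: 1})
set_bracket(3, 7, {8: 1}); set_bracket(4, 5, {6: 1})
set_bracket(4, 7, {8: -1}); set_bracket(5, 7, {3: 1})

def br(x, y):
    return np.einsum('i,j,ijk->k', x, y, C)

basis = np.eye(dim)
defect = max(np.abs(br(br(x, y), z) + br(br(y, z), x)
                    + br(br(z, x), y)).max()
             for x in basis for y in basis for z in basis)
print(defect)   # 0.0 expected if the constants are copied correctly
\end{verbatim}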
Second, consider the group $G^6_k$ of holomorphic isometries, i.e.\ symmetries of the pseudo-K\"ahler structure $(g,J)$. This group is also solvable, with the derived series of dimensions $(6,5,3,0)$ and the Abelian stabilizer $H^2_k$. Again there is $\Z_2$-grading and the structure equations of the Lie algebra $\mathfrak{s}'$ in an adapted basis $\{e_i\}_{i=1}^2$ of $\mathfrak{s}'_{+}$, $\{e_i\}_{i=3}^6$ of $\mathfrak{s}'_{-}$ are:
\begin{gather*} [e_1,e_3]=-e_3,\ [e_1,e_4]=e_4,\ [e_1,e_5]=-e_5,\ [e_1,e_6]=e_6,\\ [e_2,e_5]=e_3,\ [e_2,e_6]=e_4,\ [e_5,e_6]=e_2.
\end{gather*} Thus $M_0=G^6_k/H^2_k$, where $G^6_k=\op{exp}(\mathfrak{s}')$, $H^2_k=\op{exp}(\mathfrak{s}'_{+})$.
Finally, consider the isometry group $G^8_i$ of $g$ with the stabilizer $H^4_i\simeq\op{GL}(2,\R)$ (the center $e_4$ acts by homothety). The group is the semi-direct product $SL(2,\R)\ltimes\op{exp}(\mathfrak{r}_5)$, where $\mathfrak{r}_5$ is the 5D Lie algebra with the basis $\{e_i\}_{i=4}^8$ and the relations
$$ [e_4,e_5]=e_7,\ [e_4,e_6]=e_8,\ [e_5,e_6]=e_4.
$$ In the basis $\{e_i\}_{i=1}^3$ of $\mathfrak{sl}(2)$: $[e_1,e_2]=-2e_2$, $[e_1,e_3]=2e_3$, $[e_2,e_3]=e_1$, the representation is given by
$$
[e_2,e_5]=e_6,\ [e_2,e_7]=e_8,\ [e_3,e_6]=-e_5,\ [e_3,e_8]=-e_7
$$ and the action of $e_1$ is induced. The $\Z_2$-grading of the resulting Lie algebra $\mathfrak{s}''=\op{Lie}(G^8_i)$ is $\mathfrak{s}''_{+}=\langle e_i\rangle_{i=1}^4$, $\mathfrak{s}''_{-}=\langle e_i\rangle_{i=5}^8$, and $M_0=G^8_i/H^4_i$.
\appendix
\section{Some details on the model for type III, $n=2$}\label{S.A}
We found the c-projective structure $(M^4,J,[\nabla])$ of curvature type III with 8 symmetries using parabolic geometry machinery and Cartan's equivalence method. Our computations exploited the packages {\tt DifferentialGeometry} and {\tt Cartan} in Maple.
\subsection{Structure equations}
We first derived the structure equations for the {\em normal} Cartan geometry $(\mathcal{G} \to M,\omega)$ of type $(G, P)$, $n=2$. For the basics of parabolic geometry machinery we refer to \cite{CS}.
The curvature 2-form $K = d\omega + \frac{1}{2}[\omega,\omega] \in \Omega^2(\mathcal{G};\g)$ yields the curvature function $\kappa:\mathcal{G}\to\Lambda^2 \mathfrak{p}_+ \otimes\mathfrak{g}$ via the Killing form identification $(\mathfrak{g}/\mathfrak{p})^*=\fg_1=\mathfrak{p}_+$. Normality means $\partial^*\kappa=0$, where $\partial^*: \Lambda^2 \mathfrak{p}_+ \otimes \mathfrak{g} \to \mathfrak{p}_+ \otimes \mathfrak{g}$ is the Kostant codifferential. Call $\mathbb{K} = \ker\partial^*$ the {\em curvature module}. The unique grading element $Z \in \g_0 \cong \mathfrak{gl}(2,\mathbb{C})$ stratifies $\mathbb{K}$ into homogeneities, and we decompose each into $\g_0$-irreps:
\begin{align*}
\begin{array}{clc}
\mbox{Homogeneity} & \mbox{$\g_0$-module decomposition} & \dim\\ \hline
+3 & V_{4(1)} \oplus V_{4(2)} \oplus V_{4(3)} \oplus V_{12} & 24\\
+2 & V_{16} \oplus V_{8} \oplus V_{6} \oplus V_{2} & 32\\
+1 & V_{4(4)} & 4
\end{array}
\end{align*} Here, $\dim_\mathbb{R}(V_i) = i$. The {\em harmonic} curvature corresponds to the modules $V_{4(1)}, V_{16}, V_{4(4)}$, which comprise $\ker(\Box)$, where $\Box = \partial \partial^* + \partial^* \partial$ is the Kostant Laplacian (and $\partial$ is the Lie algebra cohomology differential).
Let $E_{jk}$ denote the $3\times 3$ matrix with a 1 in the $(j,k)$ position and 0 otherwise, and let $F_{jk} = i E_{jk}$. Decompose into real and imaginary parts, $\omega = \theta + i\eta \in \Omega^1(\mathcal{G};\g)$, and impose a trace-free condition, say $\omega_{22} = -\omega_{11} - \omega_{33}$. The structure equations are
\begin{align*}
d\theta_{jk} &= -\theta_{jl} \wedge \theta_{lk} + \eta_{jl} \wedge \eta_{lk} + \Re(K_{jk})\\
d\eta_{jk} &= -\eta_{jl} \wedge \theta_{lk} - \theta_{jl} \wedge \eta_{lk} + \Im(K_{jk}).
\end{align*} The 2-forms $K_{jk}\in\mathbb{K}$ are obtained via the duality
\[ E_{12}\mapsto\theta_{21}, \quad F_{12}\mapsto-\eta_{21}, \quad E_{13}\mapsto\theta_{31}, \quad F_{13}\mapsto-\eta_{31}.
\] In particular,
$$ K_{21} = (A_1 - i A_2) \,\overline{\omega_{21}} \wedge \overline{\omega_{31}} + \dots,\quad K_{31} = (A_3 - i A_4) \,\overline{\omega_{21}} \wedge \overline{\omega_{31}} + \dots
$$ where $A_1,...,A_4:\mathcal{G}\to V_{4(4)}$, and similarly for the coefficients $B_1,...,B_{32}$, $C_1,...,C_{24}$ of the other modules $V_{j(k)}$. The first structure equations are
\begin{align*}
dA_1 &= (\theta_{33} - \theta_{11}) A_1 + (5 \eta_{11} + \eta_{33}) A_2 - \theta_{23} A_3 - \eta_{23} A_4 + \alpha_1\\
dA_2 &= - (5 \eta_{11}+\eta_{33}) A_1 + (\theta_{33} - \theta_{11}) A_2 + \eta_{23} A_3 - \theta_{23} A_4 + \alpha_2\\
dA_3 &= - \theta_{32} A_1 - \eta_{32} A_2 - (2 \theta_{11}+\theta_{33}) A_3 - (-4 \eta_{11}+\eta_{33}) A_4 + \alpha_3\\
dA_4 &= \eta_{32} A_1 - \theta_{32} A_2 - (4 \eta_{11}-\eta_{33}) A_3 - (2 \theta_{11}+\theta_{33}) A_4 + \alpha_4
\end{align*}
where $\alpha_i$ are semi-basic 1-forms, i.e.\ linear combinations of base forms $\theta_{21}, \eta_{21}, \theta_{31}, \eta_{31}$. Writing $dA_i = \delta A_i + \alpha_i$, the $\delta A_i$ terms describe the infinitesimal vertical change of these coefficients under the $P$-action.
\subsection{Derivation of the model}
We follow the method introduced by Cartan \cite{C$_2$} (for a more detailed explanation see \cite{DMT}) to normalize curvature under the (vertical) action of the structure group.
In our case, if $N_J\neq0$, we obtain the normalization
\[
A_1 = 1, \quad A_2 = A_3 = A_4 = 0,
\] forcing the relations
\[
\theta_{33} = \theta_{11} - \alpha_1, \quad \eta_{33} = -5 \eta_{11} + \alpha_2, \quad
\theta_{32} = \alpha_3, \quad \eta_{32} = -\alpha_4.
\] The residual structure group is now 8-dimensional and still contains $P_+$. On the coefficients in the $V_6$ and $V_2$ modules, $P_+$ induces translation actions on four coefficients; these can all be normalized to zero. This reduces the bundle to $(\mathcal{E} \to M, S)$, where $S \subset P$ is a 4-dimensional subgroup, and $\mathcal{E}$ comes equipped with:
\begin{itemize}
\item an $S$-equivariant coframing: $\omega_{21}, \omega_{31}, \omega_{11}, \omega_{23}$;
\item a vertical distribution $\mathcal{V}=\langle\omega_{21}, \omega_{31}\rangle^\perp$;
\item an $S$-connection $\gamma$ with horizontal distribution $\mathcal{H}=\langle\omega_{11}, \omega_{23}\rangle^\perp$.
\end{itemize}
The symmetry algebra of the c-projective structure is bounded by $8=\dim\mathcal{E}$. For this bound to be sharp, $S$ must act trivially on curvature coefficients. This forces the vanishing of many parts of the curvature function. Indeed, after resolving all integrability conditions, we found that there is a {\em unique} model with 8 symmetries, and for it $\kappa|_{\mathcal{E}}$ has non-trivial components only in $V_{4(4)}$ and $V_8$. Here are the structure equations:
\begin{align*}
d\omega_{21} &= -\omega_{23} \wedge \omega_{31} + 3 \overline{\omega_{11}} \wedge \omega_{21} + 6 \overline{\omega_{21}} \wedge \overline{\omega_{31}}\\
d\omega_{31} &= 6i \mathfrak{Im}(\omega_{11}) \wedge \omega_{31} \\
d\omega_{11} &= 3 \omega_{31} \wedge \overline{\omega_{31}} \\
d\omega_{23} &= -27 \omega_{21} \wedge \overline{\omega_{31}} + 3\omega_{11} \wedge \omega_{23} - 12 i\mathfrak{Im}(\omega_{11}) \wedge \omega_{23}
\end{align*} The embedding relations and the structure algebra are:
\begin{align*}
& \omega_{12} = \omega_{32} = 0, \quad
\omega_{13} = 15 \overline{\omega_{31}}, \quad
\omega_{33} = \omega_{11} - 6i\mathfrak{Im}(\omega_{11})
\end{align*}
\begin{equation}\label{SAM}
\begin{pmatrix}
a_0 + i a_1 & 0 & 0 \\
0 & -2a_0 + 4ia_1 & b_0 + i b_1\\
0 & 0 & a_0 - 5ia_1
\end{pmatrix}, \quad a_i, b_i \in \mathbb{R}.
\end{equation}
Let $W_{jk} = e_{jk} + i f_{jk}$ be the dual framing on $\mathcal{E}$. Then
\begin{align*} \label{E:J}
f_{21} \otimes \theta_{21} - e_{21} \otimes \eta_{21} + f_{31} \otimes \theta_{31} - e_{31} \otimes \eta_{31}
\end{align*} is the pullback of the almost complex structure $J$ on $M$. The minimal complex connection $\nabla$ can be read off from the principal connection $\gamma$.
Viewing $TM \cong \mathcal{E} \times_S V\simeq\mathbb{R}^4$ we integrate the structure equations and obtain the model in coordinates as indicated in Section \ref{S3}.
\subsection{Deformation approach}
Another approach to get a sub-maximal model is to deform a graded sub-algebra of $\g$ by preserving its filtered Lie algebra structure, but destroying the grading \cite{K$_3$,KT}.
In our case the graded sub-algebra $\mathfrak{a}^\phi=\g_{-}\oplus\mathfrak{a}_0\subset\g$ has complex matrix representation ($\a=\a^1+i\a^2, \b=\b^1+i\b^2, \nu_k=\nu_k^1+i\nu_k^2\in\C$)
$$
\mathfrak{a}^\phi\ni A=\left(\begin{array}{c|cc}
\a & 0 & 0 \\ \hline
\nu_1 & 3\bar{\a}-2\a & 0\\
\nu_2 & \b & \a-3\bar{\a}
\end{array} \right)
$$ (notice lower-triangular form vs. the upper-triangular form for the $2\times2$ block of the structure algebra in (\ref{SAM}) operating with the highest weight vector; they are conjugate by interchanging indices 2,3) and we get basis by decomposition $A=\sum_{j=1}^2(\a^ja_j+\b^jb_j+\nu_j^1v'_j+\nu_j^2v''_j)$; $\g_{-}=\langle v'_1,v''_1,v'_2,v''_2\rangle$ has grade $-1$ and $\mathfrak{a}_0=\langle a_1,a_2,b_1,b_2\rangle$ has grade $0$.
The algorithm of deforming the Lie algebra structure on $\mathfrak{a}^\phi$ via the lowest weight vector here fails. However the deformation exists. To find it let us deform the structure constants respecting the filtration on $\mathfrak{a}^\phi$ (i.e.\ brackets $[\mathfrak{a}_0,\mathfrak{a}_0]$ are fixed, $[\g_{-},\mathfrak{a}_0]$ can be changed by $\mathfrak{a}_0$ and $[\g_{-},\g_{-}]$ can be changed by everything), the Jacobi identity imposed.
This deformation has several branches (some with other curvature types), one of which is ($\lambda$ is the deformation parameter):
\begin{gather*} [a_1,b_1] = -3b_1, \ [a_1,b_2] = -3b_2, \ [a_2,b_1] = 9b_2, \ [a_2,b_2] = -9b_1,\\ [a_1,v'_2] = -3v'_2, \ [a_1,v''_2] = -3v''_2, \ [a_2,v'_1] = -6v''_1, \ [a_2,v''_1] = 6v'_1, \\ [a_2,v'_2] = 3v''_2, \ [a_2,v''_2] = -3v'_2, \ [b_1,v'_1] = v'_2, \ [b_1,v''_1] = v''_2, \\ [b_2,v'_1] = v''_2, \ [b_2,v''_1] = -v'_2, \ [v'_1, v''_1] = 6 \lambda^2 a_2, \\ [v'_1,v'_2] = 6\lambda v'_2-27\lambda^2 b_1, \ [v'_1,v''_2] = -6\lambda v''_2-27\lambda^2 b_2, \\ [v''_1,v'_2] = -6\lambda v''_2 +27\lambda^2 b_2, \ [v''_1,v''_2] = -6\lambda v'_2 -27\lambda^2 b_1.
\end{gather*} (notice that in the non-graded case $\lambda\neq0$, we can rescale $\lambda=1$).
These relations determine the Lie algebra $\mathfrak{f}$ with subalgebra $\mathfrak{k}=\langle a_1,a_2,b_1,b_2\rangle\simeq\mathfrak{a}_0$. The simply-connected Lie group $F$ of $\mathfrak{f}$ contains the Lie subgroup $K$ with $\op{Lie}(K)=\mathfrak{k}$.
As such a subgroup we can take the normalizer in $F$ of $\g_{-}$ with respect to the adjoint action (for $\lambda\ne0$).
The homogeneous space $M^4=F/K$ has an $F$-invariant (non-integrable) almost complex structure $J$ given by $Jv_j'=v_j''$. Moreover $M$ has an $F$-invariant projective connection $[\nabla]$ given by the Cartan bundle construction \cite[Lemma 4.1.4]{KT}. Since for non-zero values of the parameter the symmetry algebra is non-graded, the obtained c-projective structure $(J,[\nabla])$ is non-flat ($\mathfrak{a}^\phi$ is not filtration-rigid, see \cite[Proposition 4.2.2]{KT}) and hence it is submaximally symmetric with the symmetry algebra $\op{Sym}([\nabla],J)=\mathfrak{f}$ of dimension 8.
\section{Uniqueness of the submaximal structures}\label{S.B}
Classification of submaximal symmetric structures can be an extremely difficult problem depending on the geometry in question\footnote{Maximal symmetric structures are unique in parabolic geometry, but description of all such for more general structures can be quite intricate.}. There does not exist any general result in this direction in the literature, but for c-projective structures we can confirm the uniqueness as follows.
\subsection{Classification of submaximal c-projective structures}
The submaximal c-projective structure of type II is unique up to an isomorphism. Indeed, by the result of Section \ref{S2} the stabilizer of the symmetry algebra (up to isomorphism) is equal to $\mathfrak{a}_0$. As explained in \cite{K$_3$,KT}, the symmetry algebra $\mathfrak{s}$ is filtered with the corresponding graded algebra being $\mathfrak{a}^\phi=\g_-\oplus\mathfrak{a}_0$.
The process of recovery of $\mathfrak{s}$ from its subalgebra $\mathfrak{a}_0$ and the action of $\mathfrak{a}_0$ on $\mathfrak{s}/\mathfrak{a}_0=\g_-$ is described as follows: one has to introduce indeterminate coefficients of the undetermined commutators and then constrain these coefficients by the Jacobi identity.
Though in general this is quite a complicated system of quadratic equations, in our case many linear equations that occur allow us to resolve it. We used Maple to facilitate the heavy computation. All cases $n\ge2$ follow the same pattern, and as the output we obtain a 1-parameter Lie algebra structure $\tilde{\mathfrak{s}}(t)$.
If the parameter $t=0$ we get the graded algebra $\mathfrak{a}^\phi$, while the case $t\neq0$ reparametrizes to $t=1$ corresponding to the symmetry $\mathfrak{s}$ of (\ref{subCmax}). Thus there are only two cases to consider.
Contrary to the non-exceptional parabolic geometries the possibility of graded symmetry algebra does not imply flatness of the c-projective structure (compare \cite[Example 4.4.3]{KT}).
Therefore on the next step we look for c-projective structures invariant with respect to $\mathfrak{a}^\phi=\tilde{\mathfrak{s}}(0)$ and $\mathfrak{s}=\tilde{\mathfrak{s}}(1)$. Such structures are unique and are: flat and (\ref{subCmax}) in the first/second cases respectively. This proves the claim for type II structures.
The submaximal c-projective structure of type I is also unique up to an isomorphism. This is actually a holomorphic version of the uniqueness of Egorov's submaximal (real) projective structure. Such a result was expected by experts, but Egorov's paper \cite{E$_1$} does not contain an indication of it. Therefore we have verified it directly by the method described for type II (note that our computation applies to both smooth real and complex analytic cases).
Here everything is similar, but the pattern holds for the cases $n>2$ and the Maple computation confirms the result. The case $n=2$ is an exception, and we refer the reader to \cite{Tr,K$_2$} for the discussion of the smooth situation, in which case there are two submaximal models. The analytic case is similar, but the two models glue because the sign $\pm$ arising in the smooth case can be renormalized over $\C$. In conclusion, we obtain uniqueness for type I submaximal structures.
A computer verification shows that the submaximal c-projective structure of type III is also unique up to an isomorphism for $n>2$, but the case $n=2$ for type III is an exception, and here the uniqueness of the submaximal c-projective structure follows from the Cartan equivalence method as described in Appendix \ref{S.A}.
\subsection{Metrics with the submaximal c-projective symmetry}
Let us compute all metrics c-projectively equivalent to the pseudo-K\"ahler metric $g$ given by (\ref{subMKh}). These are precisely the metrics that solve the metrizability equation for the c-projective structure (\ref{subCmax}), and the dimension of the corresponding solution space of (\ref{eqA}) is the degree of mobility of this metric: $D(g,J)=D_{\op{sub.max}}=(n-1)^2+1$.
To find these equivalent metrics let us compute the space of all parallel 1-forms:
\begin{equation}\label{qwe} dz_2, \dots, dz_n;\ d\bar{z}_2, \dots, d\bar{z}_n
\end{equation} (these already split into $(1,0)$ and $(0,1)$ type respective to $J$). Since $\mathfrak{cp}(\nabla^g,J)=\mathfrak{aff}(g,J)$ the required metrics are linear combinations of $g$ and $(1,1)$-type quadrics in the forms (\ref{qwe}) (the coefficient of $g$ in such combination has to be nonzero by nondegeneracy).
Indeed, the dimension of the space of such combinations is $(n-1)^2+1$, and since this number equals $D_{\op{sub.max}}$, there exists no other metric that is complex affine equivalent to the metric $g$. Thus the general metric c-projectively equivalent to $g$ (up to scaling) is equal to
$$
\hat{g}=|z_1|^2\,dz_1\,d\bar{z_1}+dz_1\,d\bar{z_2}+d\bar{z_1}\,dz_2 +\sum_{k,l=2}^nc_{kl}\,dz_k\,d\bar{z_l} \quad (c_{lk}=\overline{c_{kl}}),
$$ and we again confirm that a pseudo-K\"ahler metric $\hat{g}$ with submaximal number of c-projective symmetries cannot be of Riemannian signature.
{\bf\sc Acknowledgements.} We thank A.\,\v{C}ap for helpful discussion and the anonymous referee for pointing some inconsistencies in the initial version of the text. The research of B.K.\ and V.M.\ was supported by German DAADppp grant 50966389 and the Norwegian Research Council. The research of V.M.\ and D.T.\ was supported by Go8-DAAD grant 56203040 of Germany-Australia cooperation. D.T.\ is also supported by project M1884-N35 of the Austrian Science Fund (FWF).
\end{document} | arXiv |
Richmond surface
In differential geometry, a Richmond surface is a minimal surface first described by Herbert William Richmond in 1904. [1] It is a family of surfaces with one planar end and one Enneper surface-like self-intersecting end.
It has Weierstrass–Enneper parameterization $f(z)=1/z^{2},g(z)=z^{m}$. This allows a parametrization based on a complex parameter as
${\begin{aligned}X(z)&=\Re [(-1/2z)-z^{2m+1}/(4m+2)]\\Y(z)&=\Re [(-i/2z)+iz^{2m+1}/(4m+2)]\\Z(z)&=\Re [z^{m}/m]\end{aligned}}$
The associate family of the surface is just the surface rotated around the z-axis.
Taking m = 2, a real parametric expression becomes:[2]
${\begin{aligned}X(u,v)&=(1/3)u^{3}-uv^{2}+{\frac {u}{u^{2}+v^{2}}}\\Y(u,v)&=-u^{2}v+(1/3)v^{3}-{\frac {v}{u^{2}+v^{2}}}\\Z(u,v)&=2u\end{aligned}}$
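For illustration, the m = 2 parametrization above can be evaluated directly; a minimal sketch in Python (the sampling window is arbitrary, and the puncture at u = v = 0 must be avoided):

    import numpy as np

    def richmond_m2(u, v):
        r2 = u**2 + v**2                    # |z|^2, kept away from zero
        x = u**3 / 3.0 - u * v**2 + u / r2
        y = -(u**2) * v + v**3 / 3.0 - v / r2
        z = 2.0 * u
        return x, y, z

    # sample a rectangle in the (u, v) plane avoiding the origin
    u, v = np.meshgrid(np.linspace(0.1, 1.5, 60), np.linspace(-1.5, 1.5, 120))
    X, Y, Z = richmond_m2(u, v)             # arrays ready for a surface plot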
References
1. Jesse Douglas, Tibor Radó, The Problem of Plateau: A Tribute to Jesse Douglas & Tibor Radó, World Scientific, 1992 (p. 239-240)
2. John Oprea, The Mathematics of Soap Films: Explorations With Maple, American Mathematical Soc., 2000
| Wikipedia |
The Effect of Hot Treatment on Composition and Microstructure of HVOF Iron Aluminide Coatings in Na2SO4 Molten Salts
C. Senderowski
N. Cinca
S. Dosta
I. G. Cano
J. M. Guilemany
First Online: 08 July 2019
The paper deals with the hot corrosion performance of FeAl base intermetallic HVOF coatings in molten Na2SO4 at 850 °C in an isothermal process over the span of 45 h under static conditions. The tested coatings were examined by electron microscopy and compositional analyses of the cross-section area, as well as by x-ray diffraction. All the coatings were characterized by Al-depleted regions, intersplat oxidation and different stoichiometric ratios of iron aluminides. The results were discussed in relation to the formation of oxide scales on the surface after exposure to corrosive media, as well as heterogeneity and defects of the sprayed coatings. The Fe40Al (at.%) powder showed quite uniform phase distribution after spraying and preserved its integrity after the corrosion test; the FeCr-25% + FeAl-TiAl-Al2O3 (wt.%) and Fe46Al-6.55Si (at.%) powders exhibited interface oxidation, with localized corrosion attacks proceeding through particle boundaries and microcrack networks with no evidence of Na and S penetration. FexAly alloys are susceptible to accelerated damage and decohesion of the coating, whereas the formation of sulfides is observed at certain points.
FeAl intermetallic hot corrosion thermal spray coatings
High-temperature oxidation is believed to be the major reason for the degradation of materials used at elevated temperatures, which in consequence leads to prolonged downtime of elements such as boilers and turbines utilized in power production (Ref 1-4). As an adequate countermeasure to the above-mentioned issues, intermetallics and Ni-based alloys have gained significant attraction as coating materials (Ref 1-3, 5-7).
Transition metal aluminides, mainly those based on Ni and Fe, are potentially applicable at high temperatures and provide a sufficient alternative to superalloys (Ref 4-6, 8-12). The alumina layer, formed on the surface of materials, is responsible for their excellent resistance to oxidizing, sulfiding and carburizing atmospheres even at temperatures exceeding 1000 °C (Ref 5, 10, 13).
However, while showing good strength and environmental stability, other aspects such as poor ductility and toughness at room temperature, mediocre creep strength, as well as fabrication difficulties have greatly hindered the introduction of intermetallics as industrial structural materials (Ref 14). Therefore, their commercial application in some fields is still a matter of concern (Ref 15).
The applications of iron aluminides are, for the most part, based on their excellent corrosion resistance at high temperatures in environments that cause damage to Fe-Cr-Ni steels and other alloys (Ref 4). They show higher resistance to sulfidation and carburizing atmospheres, as well as to molten nitrates and carbonate salts in relation to multiple different iron- or nickel-based alloys (Ref 16, 17). FeAl alloys have demonstrated particularly improved resistance to various molten salts leading to hot corrosion in heat exchanging systems, incinerators and burners. This pertains to such chemicals as potassium sulfate (K2SO4), vanadium pentoxide (V2O5), mixtures of sodium sulfate and vanadium pentoxide (Na2SO4-V2O5), chlorates and carbonates, all of which can inflict severe damage in the energy sector (Ref 18-24).
High resistance to hot corrosion is a matter of paramount importance in many branches of industry concerning the construction of boilers, internal combustion engines, gas turbines, fluidized bed combustion and industrial waste incinerators. The material degradation is determined by the confluence of high-temperature oxidation, hot corrosion and erosion processes (Ref 1-3, 5-7, 11, 12, 25, 26).
However, iron aluminide corrosion resistance extends to temperatures at which these alloys exhibit limited or poor mechanical strength. Therefore, in many cases, they may be better utilized as clads or coatings for anti-corrosion protection, owing to their limited strength at elevated temperatures (Ref 26-30).
Numerous thermal spray techniques, most notably plasma spraying (Ref 28, 31, 32), high-velocity oxy-fuel (HVOF) (Ref 17, 27, 28, 33-47) and D-gun spraying processes (Ref 26, 29, 48-59) are considered for Fe-Al intermetallic coating materials. In comparison with other industrially used coatings such as CVD, PVD and hard chromium plating, a much thicker coating can be obtained by thermal spraying, which is a prerequisite in the energy sector. High-velocity arc spraying process (HVAS), a technique used to deposit Fe-Al intermetallic and Fe-Al/WC protective coatings, was designed for evaporator pipes subjected to corrosive and erosive influence of vapor at 550 °C and serves as an example, especially through the prism of their application in the Chinese industry (Ref 60).
Among thermal spray techniques, the scientists, manufacturers and global investors show much interest in HVOF, a state-of-the-art thermal spray technology, which not only yields positive results, but also is relatively cheap (Ref 3, 17, 26, 28, 33-47).
Thermal spray iron aluminide coatings were previously tested in high-temperature gaseous environments (Ref 17, 36, 37, 47, 61), but to the best of the authors' knowledge, very few findings concerning their use under hot corrosion conditions were made (Ref 62). These authors report that no degradation (corrosion and wear) was noticed on the surface of the Fe-25%Al-Zr (wt.%) plasma and HVOF coatings sprayed onto low-carbon steel heat exchanger tubes, which were tested in a new industrial plant burning fuel of very poor quality. However, their research was not orientated toward the coating structures and corrosion evolution.
On the other hand, Singh Sidhu et al. 63 studied the corrosion of plasma-sprayed Ni3Al coatings in air and molten salt (Na2SO4-60%V2O5) at 900 °C on low-carbon steel substrates of extended application in boilers.
Other thermal spray coatings, widely studied in terms of hot corrosion protection resistance, are the case of plasma spraying of MCrAlY's in TBC systems for aviation gas turbines purposes, which notably reduces their longevity under severe conditions involving molten sulfate-vanadate deposits (Ref 64-68). These coatings can be alternatively produced by HVOF process, which utilizes high-pressure combustion of oxygen and fuel to obtain a relatively low temperature of a supersonic gas jet in comparison with plasma spraying. HVOF allows us to obtain denser and less oxidized coatings, which are more resistant to corrosion (Ref 69).
The growing interest in the promising properties of intermetallic alloys based on the Fe-Al equilibrium phase diagram contributed to the gradual development of the HVOF spraying technique, which proved useful for the production of such intermetallic coatings in terms of their practical application on various steel elements, exposed to corrosive and erosive environment in the energy sector (Ref 28, 33-47, 70-74). The focus in these works was mostly placed on the structural properties of Fe-Al coatings and their wear resistance under dry friction (in congruence with ASTM G99-03), abrasive wear (in accordance with ASTM G65-00) and erosive wear, along with the involvement of Al2O3 particles (Ref 44). Furthermore, the research involved the performance of Fe-Al coatings under high-temperature oxidation conditions at 900, 1000 and 1100 °C—for 4, 36 and 72 h, respectively, in the atmospheric air (Ref 27).
Usitalo et al. 75 conducted studies on laser re-melting of HVOF-sprayed Ni-50Cr, Ni-57Cr, Fe3Al, Ni-21Cr-9Mo coatings and reported that the above-mentioned coatings did not suffer from any corrosive damage, whereas sprayed coatings were penetrated by corrosive species.
Other HVOF and novel cold-spray coatings, such as Cr3C2-NiCr and WC-Co, are widely studied regarding their wear resistance behavior (Ref 1-3, 7, 25, 76) while great emphasis is placed upon hot corrosion-related applications. Iron aluminide intermetallics appear to provide interesting properties favorable to hot corrosion protection and also manifest wear resistance at high temperatures, providing competition to cobalt binder in WC-Co composites and Ni-based superalloys (Ref 5, 6, 10-12, 60, 77-81).
Different alloying elements in iron aluminide and their effect on the oxide scales development when exposed to harsh environments have been investigated (Ref 17, 38, 42, 72-74). In this regard, we propose the application of several feedstock iron aluminide powders obtained from different manufacturing routes.
Notably, Senderowski (Ref 56) developed a new concept of nanocomposite Fe-Al intermetallic coatings created in situ during gas detonation spraying out of powder with compounds from the Fe-Al phase diagram, manufactured by the self-decomposition method (Ref 57). It was assumed that those powders would exhibit sufficient plastic susceptibility under the spraying test conditions, acceptable mechanical properties of the coatings and good stability of the structure during high-temperature heating. The shortlisted properties of these powders are mostly related to reduced brittleness caused by dynamic oxidation at high temperatures (especially above 500 °C) in an oxygen-containing environment.
Particle size control of the self-decomposed powders, especially of the fraction below 80 μm, gives them a more prominent role in the HVOF spraying process. Furthermore, the price of self-decomposing powder is about three times lower than the price of powders of equivalent compositions, produced by gas atomization.
Therefore, after considering the potential advantages of the implementation of the self-decomposing intermetallic Fe-Al-type powders, the aim of the present research was twofold:
developing several iron aluminide HVOF coatings from a Fe40Al (at.%) and FeCr-25% + FeAl-TiAl-Al2O3 (wt.%) powders and comparing them with self-decomposed and SHS (self-propagating high-temperature synthesis)-manufactured powders of different compositions and
evaluating the performance of these coatings in Na2SO4 molten salt at 850 °C, as the ultimate solution for typical application in industrial boilers.
It is well known that the application area of the FeAl coatings depends on their extensive properties. On the basis of the results of our own research (Ref 59, 82), a comprehensive analysis of the impact of the structure, the level of strengthening and the state of residual stress of FeAl coatings on their adhesive strength was carried out. The mechanism of residual stress generation in the FeAl coating under supersonic D-gun spraying conditions was presented, with a multi-phase structure of Fe-Al coatings and changes in the Young's modulus of the FeAl coating at elevated temperatures up to 900 °C taken into account. The mechanism of structure degradation of hybrid coating systems in different load states was subjected to an analysis by means of TAT (tensile adhesion test) and a three-dimensional bending test coupled with acoustic emission recording (Ref 82). The TAT test showed that the FeAl coating sprayed directly onto a steel substrate exhibits significantly lower adhesive strength, compared to hybrid coating systems of NiCr-20 or NiAl-5 sprayed onto the steel substrate before the FeAl base coating. The average adhesive strength of individual coating systems was, respectively: FeAl/steel—23 MPa, FeAl/NiAl5/steel—31 MPa, NiAl5/steel and NiCr20/steel—33 MPa, and FeAl/NiCr20/steel system—37 MPa (Ref 82).
Because we have already considered some aspects of the mechanical performance of the Fe-Al-type coatings, this paper focuses on the phase and microstructural changes, that is, on the "corrosion performance" of the coatings at high temperature in an aggressive environment.
Here, the "corrosion performance" under study refers to the qualitative phase and microstructural evolution of the coatings, without a quantitative evaluation of weight changes (oxidation kinetics), which is relatively simple for bulk materials. Such an analysis is far less straightforward for a coating-substrate system, because the strong oxidation of the substrate material at high temperature prevents reliable results for the FeAl coating itself.
Therefore, in this work, we focused on the analysis of structural stability during high-temperature oxidation at 850 °C in the aggressive Na2SO4 environment of as-sprayed Fe-Al coatings, under the same HVOF process conditions with various types of alloy powders of different chemical composition.
Experimental Procedure
The nominal compositions and characteristics of the powders used in the tests are presented in Table 1. The commercial FeAl grade 3 with a near equiatomic composition, provided by Mecachrome (France), is a pre-alloyed, gas atomized and subsequently ball-milled powder (powder 1). Both powder 2 and powder 3 were produced in the Department of Materials Science of the Silesian Technical University by the self-decomposed method described in detail in Ref 57.
Table 1  Iron aluminide feedstock powder characteristics
Powder 1 (FeAl grade 3): nominal composition Fe-40Al-0.05Zr (at.%) + 50 ppm B + 1 wt.% Y2O3; particle size < 50 µm; produced by ball milling
Powder 2: nominal composition FeCr25 (wt.%) + FeAl-TiAl-Al2O3
Powder 3: nominal composition Fe46Al-6.55Si (at.%); produced by the self-decomposition method
Powder 4: nominal composition FexAly; particle size − 53 + 38 µm; SHS multi-phase FexAly-type powder
Powder 4 was also produced in the Department of Materials Science of the Silesian Technical University, through the SHS technique, and contained Fe-Al-type phases collectively denoted FexAly. Its complex phase composition, properties and morphology were considered with a view to possible applications as protective coatings in the power industry sector.
The substrate material was a low-alloy carbon steel G41350 UNS (AISI 4135) of chemical composition presented in Table 2, in the form of coupons with dimensions of 50 × 20 × 5 mm which were grit-blasted (Ra = 4 μm), directly before the HVOF spraying to provide mechanical bonding.
Table 2  Chemical composition (wt.%) of the substrate material, G41350 UNS (AISI 4135)
The equipment used for the spraying process was a Diamond Jet Hybrid (DJH2700) designed by SULZER METCO. The following spraying parameters were applied: H2 flow rate = 717 l min−1, oxygen flow rate = 147 l min−1, feeding rate = 20 g/min, spraying distance = 250 mm, traverse gun speed = 500 mm/s and number of layers = 9. In addition, the samples were cooled with compressed air during the spraying process. Nitrogen was used as the powder carrying and shielding gas.
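For orientation, a simple calculation of ours (not part of the original process specification) shows that these flow rates correspond to a strongly fuel-rich flame:

$$\frac{\dot{V}_{{\text{O}}_{2}}}{\dot{V}_{{\text{H}}_{2}}} = \frac{147}{717} \approx 0.21 \ll 0.5 = \left(\frac{\dot{V}_{{\text{O}}_{2}}}{\dot{V}_{{\text{H}}_{2}}}\right)_{\text{stoich}}$$

i.e., only about 40% of the oxygen needed for complete combustion of the hydrogen (H2 + ½O2 → H2O) is supplied, consistent with the preference for low oxygen-to-fuel ratios to limit in-flight oxidation discussed later.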
Hot corrosion studies were conducted in molten salt (Na2SO4) at 850 °C for all specimens (Ref 80, 81) with dimensions of 35 × 20 × 5 mm. The samples were cut using wire electric discharge machining after HVOF spraying. A Na2SO4 tablet (0.2 g, 5 mm in diameter) was pressed under 0.4 MPa and placed on the surface of each coating. First, the samples were placed in a furnace preheated to 950 °C and held for 10 min to melt the Na2SO4 (the melting point of the salt is close to 890 °C). The temperature was then lowered to 850 °C and the samples were held for 45 h in order to evaluate the behavior of the coatings under hot corrosion conditions in the aggressive environment.
The microstructural characteristics of the feedstock powders, as well as of the initial and corroded coatings, were obtained by SEM/EDS using Quanta 3D FEG Dual Beam and JEOL 5310 microscopes operating at 20 kV. The backscattered images were obtained with a K.E. Developments detector. Coating porosity was evaluated by means of the ImageJ image analysis software. Qualitative microanalysis was performed by EDS with a RÖNTEC detector. Additionally, the roughness of the coatings was measured by confocal microscopy (Leica DCM3D).
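As an illustration only (the authors' exact ImageJ workflow is not described here; the file name and the choice of Otsu thresholding below are assumptions of ours), porosity can be estimated from a grayscale cross-section image by thresholding and taking the dark-pixel area fraction:

import cv2
import numpy as np

# Hypothetical image file; pores are assumed to appear darker than the coating
img = cv2.imread("coating_cross_section.png", cv2.IMREAD_GRAYSCALE)
# Otsu's method selects a global threshold separating dark pores from brighter material
thresh, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
pore_pixels = np.count_nonzero(binary == 0)      # pixels at or below the threshold
porosity = 100.0 * pore_pixels / binary.size     # pore area fraction in percent
print(f"Otsu threshold: {thresh:.0f}, estimated porosity: {porosity:.2f}%")

In practice such a measurement is sensitive to contrast settings and to dark oxide phases being counted as pores, which is why several fields of view are usually averaged.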
XRD was used to characterize the phases and assess the degree of order in the feedstock powders and sprayed coatings. All x-ray measurements were carried out with the Bragg-Brentano θ/2θ Siemens D-500 diffractometer with Cu Kα radiation.
Feedstock Powder
Figure 1 shows the particle size distribution of the powders. It can be observed that the ball-milled powder 1 is characterized by the Gaussian distribution centered at a mean size of 30 µm, while powder 2 shows a non-symmetric distribution with d10 = 3 µm/d90 = 56 µm. The self-decomposed powder 3 contains a large amount of fine particles with d10 = 3 µm/d90 = 60 µm, while d10 = 7 µm/d90 = 68 µm was recorded in powder 4.
Particle size distribution of: (a) powder 1, (b) powder 2, (c) powder 3, (d) powder 4
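As a reminder of what these values represent (the short sketch below is purely illustrative; the diameters listed are hypothetical, not the measured distributions, and laser-diffraction d-values are normally volume-weighted rather than number-weighted percentiles), d10 and d90 are the 10th and 90th percentiles of the cumulative particle size distribution:

import numpy as np

# Hypothetical set of measured particle diameters in micrometres (not real data)
diameters_um = np.array([3, 5, 8, 12, 18, 25, 31, 40, 48, 56])
# d10/d50/d90 are percentiles of the cumulative size distribution
d10, d50, d90 = np.percentile(diameters_um, [10, 50, 90])
print(f"d10 = {d10:.1f} um, d50 = {d50:.1f} um, d90 = {d90:.1f} um")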
The SEM-BSE micrographs of the cross sections show that all the powder particles exhibit irregular morphology; powder 1 has a uniform composition, whereas the others present varying degrees of grayness (Fig. 2). Their compositions, determined by EDX point microanalysis, are presented in Table 3. Powder 2 has a variable chemical composition, with diversified contents of Al, Cr and Ti in individual particles, as well as separate regions of Al2O3 (Fig. 2b). Self-decomposed powder 3 shows regions identified as SiO2 and predominantly light gray areas with an aluminum content significantly higher than that of iron (Fig. 2c). In powder 4, the distribution of phases differs considerably from one particle to another (Fig. 2d), with some particles exhibiting a mixed laminar structure of two phases. Thus, the SHS intermetallic powder showed a wide range of chemical compositions of the Fe-Al-based phases within single powder particles (52-73 at.%), which suggests that they were secondary solutions based on Fe-Al phases with a wide range of Al contents and trace amounts of Cr.
SEM images in the cross sections of: (a) powder 1 and, (b) powder 2, (c) powder 3, (d) powder 4
Table 3  Semiquantitative EDS analysis (at.%) of the different Fe-Al-type powders used for HVOF spraying, for the grain areas designated in the micrographs of Fig. 2: powder 2 (1, light; 2, dark gray; 3, light gray), powder 3 (3, medium gray) and powder 4
Figure 3 shows the XRD results of the powders. Powder 1 presents typical fundamental lines of the FeAl pattern (h + k + l = even), exhibited only when the structure is disordered, as otherwise, the superlattice lines (h + k + l = odd) would also appear. The occurrence of broad peaks is related to the fine grain size and microstrains resulting from the milling.
XRD diffraction patterns of the feedstock powders at the initial state (from the manufacturer)
Based on Senderowski's results (Ref 57), the low-energy milling of the powder particles causes crystallite fragmentation, resulting in the formation of the nanocrystalline structure of the powder particles. Low-energy milling decreases the ordering degree of the FeAl secondary solution, which in turn limits the strength of the particles. Nevertheless, this is compensated with strengthening, which originates from the crystallite fragmentation.
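For reference (this relation is standard in XRD line-broadening analysis and is not taken from the paper itself), the contributions of small crystallite size D and lattice microstrain ε to the integral peak breadth β are commonly separated with the Williamson–Hall relation:

$$\beta \cos \theta = \frac{K\lambda }{D} + 4\varepsilon \sin \theta$$

so the broad, partly overlapping reflections observed for the milled and SHS powders are consistent with both nanometric crystallites and milling-induced microstrain.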
Powder 2 contains Fe-Al and Ti-Al intermetallics, while the XRD of powder 3 confirms the presence of different intermetallic Fe-Al phases, mainly Fe2Al5 and FeAl3, together with trace amounts of SiO2. Silicon embrittles the material. A clear and concise description of the self-decomposition process is presented in Ref 56, 57, where it was reported that several hypotheses can be put forward to explain the self-decomposition of the Pyroferal cast-iron casts.
The structure of the Pyroferal casts, which depends on the chemical composition, is made of the following intermetallic phases: Fe3Al and FeAl, or FeAl and Al4C3 aluminum carbide, with trace amounts of graphite. The most common hypothesis of self-decomposition suggests that precipitates of Al4C3 aluminum carbide react with water vapor on the surface of the Fe-Al-C-Me alloys (Me = Ni, Mn, Cr, Mo, V, B, Si) and create aluminum hydroxide and methane (Ref 57):
$${\text{Al}}_{4} {\text{C}}_{3} + 12{\text{H}}_{2} {\text{O}} \to 4{\text{Al}}\left( {\text{OH}} \right)_{3} + 3{\text{CH}}_{4} \uparrow$$
The cracking and fragmenting of the castings occur under the influence of stresses caused by the product Al(OH)3, characterized by a higher specific volume than reacting Al4C3 carbide.
Powder 4 consists of strongly oxidized secondary solution on the FeAl intermetallic base with a widely varying content of aluminum and thin Al2O3 films covering the particle surface, which has a bearing on their growing importance in the production of coatings with a nanocomposite structure.
The strong diversification of chemical composition between individual particles, as well as within single particles, shows that the tested powder has the structure of a secondary solution based on phases from the Fe-Al equilibrium phase diagram, with a wide span of Al contents and a sparse distribution of Cr and Si. The formation of oxide films on the surface of the powder particles is most likely attributable to self-propagating high-temperature synthesis, a strongly exothermic phenomenon. The oxide formation may also be related to the technological process of crushing and high-energy mechanical milling of the sinters into powder.
In consequence, the XRD analysis of powder 4 revealed the formation of FeAl, FeAl2, Fe2Al5 and FeAl2O4 phases under the SHS process. Relatively high half-width of the overlapping reflections of Fe-Al phases is the result of a wide span of Al content across the area containing individual powder particles (Fig. 3), which leads to a network deformation within each phase and generation of residual stress. Moreover, the latter is amplified by crushing and high-energy mechanical milling of sinters following the SHS process.
As-Sprayed Coating Microstructures
Figure 4 shows the cross sections of the as-sprayed HVOF coatings, with thicknesses of 103 ± 9, 84 ± 10, 76 ± 13 and 93 ± 11 µm obtained by spraying nine layers of each of the four powders presented in Table 1. The coating obtained with the pre-alloyed powder (powder 1) is quite uniform in thickness, whereas the others are less homogeneous; the roughness values were found to be Ra = 3.6 ± 0.6, 5.1 ± 0.7, 4.3 ± 0.3 and 6.8 ± 0.4 µm, respectively. The highest porosity of 1.45 ± 0.02% corresponds to coating 4 (from now on, the label "coating X" denotes the coating sprayed from powder X).
SEM images in cross section of the as-sprayed HVOF coatings obtained with: (a) powder 1—coating 1, (b) powder 2—coating 2, (c) powder 3—coating 3, and (d) powder 4—coating 4
The examination of the microstructure indicates that the uniformly distributed oxidation occurs in-flight rather than after splat impact.
The powder particles are usually melted, or at least pre-melted, as a result of HVOF spraying, during which the gas mixture is continuously combusted under high pressure (Ref 28, 33-35, 39-43, 46). The thermal activation of the gaseous products in the HVOF process brings about the in situ formation of thin and complex oxide films on the internal splat interfaces. These oxide films, identified mainly as Al2O3 compounds, become a specific composite reinforcement in the Fe-Al intermetallic coating (Ref 33, 34, 40-43, 45, 46). The oxides form during the stage of the HVOF process in which the gaseous products transport the powder particles, along with rapid chemical reactions accompanied by the release of a great amount of thermal energy (Ref 40-43).
The presence of a lamellar structure resulting from partly melted and oxidized particles with inhomogeneous compositions (Table 4) and intersplat porosity can be observed at higher magnification (Fig. 4). The nature of coating 1 is well documented by partially and fully melted particles exhibiting different degrees of grayness at the boundaries of the intersplats (Fig. 4a). The light areas in the intersplats correspond to the Al-depleted regions, whereas the darkest ones are attributed to spinel oxides (Ref 56).
Table 4  Semiquantitative EDS analysis (at.%) of the as-HVOF-sprayed Fe-Al coatings obtained from the different powders presented in Table 1 (coatings 1-4)
Furthermore, the XRD results confirm these findings (Fig. 5); the additional peaks, also identified as FeAl, correspond to the superlattice lines due to ordering of the intermetallic phase as a result of the thermal history of the particles in the flame. The light regions around the intersplat boundaries of coating 2 in Fig. 4(b) are poorer in Al and Ti; these regions are in fact located next to the dark gray areas identified as oxides (Fig. 5). Coating 2 is reinforced by the incorporation of alumina, visible as intensely dark, roughly circular areas. The SiO2 particles act as a sort of reinforcement in coating 3 (dark regions in Fig. 4c). The light gray regions in coating 3 correspond to iron-rich phases, while the darker predominant contrast reveals a more balanced iron and aluminum content (Fig. 4c). Some porosity is observed; however, the extent of oxidation is significantly lower than in coating 1. SiO2 particles from the feedstock can be found as very dark regions, uniformly distributed within the coating. In coating 4, the lightest regions are poorer in aluminum than the medium gray ones and are identified as Fe3Al phase, whereas the medium gray contrast is mainly identified as FeAl2 and Fe2Al5 (Fig. 4d).
XRD diffraction patterns of the as-sprayed HVOF coatings (according to the legend)
The degree of melting or semi-melting of the particles in HVOF within the coating can be controlled by process variables, i.e., fuel and oxygen flow rates, spraying distance and particle size. The process variables determine particle temperature and velocity upon impact and, thus, the typical lamellar structure of thermal-sprayed coatings. Many different iron aluminide compositions have been deposited using these technologies (Ref 17, 27, 28, 33-47, 82). However, different distributions of the intermetallic phases and Fe-rich areas are usually observed after the evaluation of their structural characterization. Moreover, these areas are aluminum-depleted as a result of the thermal history of the particles in the flame.
A low oxygen-to-fuel ratio is normally preferred in order to minimize oxidation, whereas a lower carrier gas flow implies slower particle velocities, and a longer in-flight period promotes further oxidation (Ref 43, 70). The formation of intersplat oxides, and thus the occurrence of Al-depleted regions, may stimulate corrosion in field performance; at the same time, such oxides may also increase coating hardness and wear resistance. For example, Totemeier et al. (Ref 70) observed a decrease in the oxide content and coating porosity for both Fe3Al and FeAl when the chamber pressure was increased, because it directly affects particle velocity and thus the degree of melting. However, the particle temperature for FeAl was lower than for the Fe3Al powder, probably because of the lower thermal conductivity of FeAl. Considering these factors, Al2O3 can clearly act as a reinforcement phase in coating 2, aiding Al and Cr oxidation, which leads to the formation of a protective layer. On the whole, it is important to point out that the resulting Al content and its distribution in the as-sprayed coating also determine the corrosion properties.
Some microcracks, formed perpendicularly to the layer, were observed particularly in coating 4 and less noticeably in coating 3; such microcracks are attributed to the brittleness of the intermetallic phases, which are unable to withstand the deformation upon impact at high particle velocities. The grain boundaries were not the most common areas favoring the propagation of cracks, and therefore good cohesive strength is assumed. The microcrack network for the as-sprayed SHS powder (coating 4, see arrows in inset Fig. 4d) does not exhibit a specific direction within individual splats, which confirms the correspondence between embrittlement and the occurrence of Al-rich phases, namely Fe2Al5 and FeAl2.
For the as-sprayed self-decomposed powder (coating 3, see arrows in inset of Fig. 4c), the microcracks are perpendicular to the coating surface, which suggests that the cracking is also due to the tensile thermal strain sustained during rapid quenching of the splats. The values of the linear thermal expansion coefficient of Fe-Al-type intermetallic phases (ranging from 15 × 10−6 up to 22 × 10−6 K−1) differ significantly from those of the steel substrate (12 × 10−6 K−1) (Ref 58, 59). Some of the cracks found in the ball-milled Cr- and Ti-alloyed powder (coating 2, see arrows in inset of Fig. 4b) may additionally be linked to the impact of the hot metallic particles entering cooler Al2O3 regions.
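A rough order-of-magnitude estimate of ours (assuming, for illustration, a temperature drop of about 800 K between splat solidification and the cooled substrate, and the extreme coefficient values quoted above) supports this interpretation:

$$\Delta \varepsilon \approx \Delta \alpha \cdot \Delta T \approx (22 - 12) \times 10^{-6}\,{\text{K}}^{-1} \times 800\,{\text{K}} \approx 8 \times 10^{-3}$$

i.e., a mismatch strain of nearly 1%, comparable to or exceeding what brittle aluminum-rich intermetallic phases can typically accommodate elastically, so tensile cracking of the quenched splats is plausible.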
Additionally, it was previously observed for Fe40Al type coatings that equiaxed small grains were displayed in the unmelted areas, while columnar grains, typical for rapid solidification processes, were visible in the melted regions. Interestingly, as a result of the thermal history of the milled particles in the flame, the final FeAl phase appears to be the ordered B2 lattice, present in the areas that reached the molten state (Ref 34, 46). Taking into consideration a higher melting point of the FeAl stoichiometric compound in relation to Fe2Al5 and FeAl2 (1250, 1171 and 1157 °C, respectively), and the high particle heterogeneity of powders 3 and 4, there is a great likelihood that these phases melt during the formation of amorphous oxide (AO) (Ref 29, 46, 51-56). This results in the multi-phase (composite-like) structure of the Fe-Al coatings (Ref 29, 56).
Corrosion Performance
Degradation and infiltration of Na and S elements within the coatings following exposure to Na2SO4 at 850 °C are examined in Fig. 6 to 9. The cross section of coating 1 (Fig. 6a, b) does not show significant damage compared to Fig. 4(a); the coating preserves its original thickness all along the tested sample. No infiltration of the salt can be observed within the splat boundaries (Fig. 6c-g). The light contrast (Fig. 6b) is poorer in aluminum than the as-sprayed state (spot 1-coating 1 Table 5), while the intersplat dark contrast is richer in oxygen.
Typical lamellar-like microstructure in the cross sections of the as-sprayed HVOF coating and, after molten salt corrosion—obtained with powder 1 (a, b) and SEM/EDX results with corresponding EDX maps of Fe (c), Al (d), O (e), Na (f) and S (g) distributions
Table 5  Semiquantitative EDS analysis (at.%) of the HVOF-sprayed Fe-Al coatings after the molten salt corrosion, for the grain areas and spots designated in the corresponding micrographs (coatings 1-4)
A similar behavior is observed for the coatings sprayed from powders 2 (Fig. 7) and 3 (Fig. 8), where oxygen diffusion is detected even within the splats. Following the tests, the non-oxidized phase in the as-sprayed coating 2 (spot 3, coating 2, Table 4), which is nearly equal in Fe and Al content, becomes oxidized and enriched in chromium while titanium is depleted (spot 3, coating 2, Table 5). By contrast, the dark gray phase doubles its Al content while the O content is reduced (spot 2, coating 2, Table 4 compared with spot 4, coating 2, Table 5). The silicon in coating 3 appears to diffuse into the core of the splats. In addition, some oxide microareas are detected at the coating-substrate interface and have been identified as aluminum oxide (Fig. 8b). No significant amounts of sodium or sulfur were identified in the EDS maps (Fig. 7c-h, 8c-h).
Typical lamellar-like microstructure in the cross sections of the as-sprayed HVOF coating and, after molten salt corrosion—obtained with powder 2 (a, b) and SEM/EDX results with corresponding EDX maps of Fe (c), Al (d), Ti (e), Cr (f), O (g), Na (h) and S (i) distributions
Typical lamellar-like microstructure in the cross sections of the as-sprayed HVOF coating and, after molten salt corrosion—obtained with powder 3 (a, b) and SEM/EDX results with corresponding EDX maps of Fe (c), Al (d), O (e), Si (f), Na (g) and S (h) distributions
Coating 4 suffered the greatest damage: splat shapes are no longer visible and the deposit consists of a composite containing an Al-rich oxide network within a Fe-rich matrix (Fig. 9a, b). This structure appears to progress uniformly from the air-coating interface (area 1, coating 4, Table 5) and displays a higher oxygen content than the rest of the coating (area 2, coating 4, Table 5). For this coating, regions near the edges of the sample were severely damaged and showed considerable degradation; in these cases, Na and S concentrations increased in proportion to the visible infiltration.
The top-surface oxide morphologies in Fig. 10 differ: more granular shapes are found on coatings 1, 3 and 4, whereas the oxide on coating 2 is more needle-shaped. Coating 1 was covered by iron oxide, as also observed when exposed to an oxidizing atmosphere (Ref 83). The needles on coating 2 were identified as mixed Fe and Ti oxides, with an oxide layer below that is also rich in Al and Cr (Fig. 7). Coating 3 was mostly covered by an alumina layer, while coating 4 was covered by a mixed Fe and Al oxide; the scale fluxing may involve an interactive reaction between the basic dissolution of Al2O3 and the acidic dissolution of Fe2O3.
SEM cross-section micrographs of the oxide layer on the coatings surface obtained from: (a) powder 1, (b) powder 2, (c) powder 3 and (d) powder 4
The XRD of the corroded coatings (Fig. 11a-d) shows that coating 1 is covered with two oxides, namely Fe2O3 and Al2O3. The results obtained from the EDS analysis (Fig. 6) confirm the depletion of Al in the Fe-Al phase. The rapid growth of iron oxide was not observed in the rest of the coatings, yet alumina was identified; the alumina-identified pattern phase is mainly α-Al2O3 corundum; actually, it has been reported that the predominant surface product that forms between 600 and 800 °C is α-Al2O3(rhombohedral), together with γ-Al2O3(cubic) and θ-Al2O3(monoclinic) (Ref 84). The latter two phases are fast growing, more voluminous, more porous and less protective than α-Al2O3; the heterogeneous growing of α-Al2O3, also with some traces of γ and θ phases, could also explain why the other coatings showed significant damage. According to the literature, the sequence is believed to be as follows: γ-Al2O3 → δ-Al2O3 (750 °C); δ-Al2O3 → θ-Al2O3 (900 °C); θ-Al2O3 → α-Al2O3 (1000 °C), and the precise temperature transformation from θ to α is influenced by the presence of reactive elements (Ref 85).
XRD diffraction patterns of the HVOF coatings after corrosion in molten Na2SO4: (a) coating 1, (b) coating 2, (c) coating 3 and (d) coating 4
The formation of alumina consumes a certain quantity of Al, reducing its activity and the partial pressure of oxygen. This causes a relative increase in the activities of Fe and S and serves as a catalyst for the reaction with the molten mixture to form a compound such as FeS:
$$\begin{aligned} & 2{\text{FeAl}} + {\text{SO}}_{3} \to {\text{Al}}_{2} {\text{O}}_{3} + {\text{S}} + 2{\text{Fe}} \\ & {\text{S}} + {\text{Fe}} \to {\text{FeS}} \\ \end{aligned}$$
Sulfur attack and penetration appear to be more visible at the edges of coating 4 (not presented here). Under molten salt corrosion conditions, dissolution of the underlying material can be produced by local dissolution or by selective dissolution of different components of the oxide (Ref 86). Selective oxidation and dissolution of iron in coating 4 resulted in a loss of coating integrity, leading to a high corrosion rate. In this case, sulfur may have moved from the oxide/molten salt interface toward the coating/substrate interface by diffusion or by infiltration of the melt through the structural defects of the oxide scale. It proceeded through particle boundaries as well as microcrack networks until it reached the steel substrate in some parts of the coating. It can be suspected that this local corrosion mechanism triggered the damage, causing metal dissolution at hot spots. The decomposition of Na2SO4 would result in SO3 formation, which might have been the aggressive agent for the rapid preferential attack at coating defects (Ref 87). The presence of sodium within the coating might follow from the basic dissolution reaction at the oxide/molten salt interface: Al2O3 + Na2O → 2NaAlO2 (Ref 19).
Corrosion in the rest of the coatings appears to have been produced by uniform oxidation at the coating/molten salt/air interface. The formation of the fast growing oxides indicates that the coating might be diluted upon longer exposure times, apparently without preference for any of the coating components. At 900 °C, the Fe40Al composition for bulk materials was found to be more resistant than Fe40Al-0.1B-10Al2O3 (at.%) (Ref 88). Apparently, a similar phenomenon applies to coating 2, but the scale is much more complex, especially in contrast to coating 3. Under the oxide scale, Al depletion was observed in the intermetallic phase. Less defective structure of the as-sprayed coatings and the favorable presence of other stoichiometric intermetallic phases may be the reason why their corrosion rates were lower than the ones observed in the as-sprayed SHS powder.
Conclusions
The results of the experiments and subsequent analyses allowed an evaluation of the hot corrosion performance of HVOF-sprayed coatings with a Fe-Al intermetallic matrix in molten Na2SO4 at 850 °C in an isothermal process over a span of 45 h under static conditions.
It was determined that under applied HVOF spraying conditions, Fe-Al powder particles form a stratified/laminar/pseudo-composite structure of the coating, in which the thickness varies in dependence of the Fe-Al powder composition after nine passes of the HVOF gun. At the same time, high plastic deformation of FeAl grains in the volume of the coating, obtained from the powder particles of different chemical composition with the involvement of alloying elements, proves the plastic deformability of a highly brittle Fe-Al phase upon impact with the substrate material. However, significant changes to the percentage shares of iron and aluminum in the structure of the as-sprayed coatings, involving the oxide phases formed in situ during the HVOF process, indicate melting or pre-melting of the powder particles, coupled with intensive oxidation due to reaction with the highly reactive hydroxyl radicals (OH). Rapid plastic transformation of intermetallic powder particles, combined with their "freezing" in contact with the "cold" substrate, leads to the amorphization of oxide ceramics. The oxides are shaped in the form of flattened, nanometric thin films at the boundaries of the splats, within a fine-dispersed, heterogenous structure of the Fe-Al coating. Selective depletion of aluminum, diffusing into oxide phases, has no influence on the behavior of the FeAl (B2) superstructure, obtained from the pre-milled powder FeCr25 + FeAl-TiAl-Al2O3 sprayed under applied HVOF conditions. Hard oxide phases, in the form of thin films at the grain boundaries and dispersions in the grain volume, influence the strengthening of the structure, mainly by limiting the dislocation motion and migration of grain boundaries. Consequently, this reduces the susceptibility to plastic deformation of FeAl grains and recrystallization of the intermetallic alloy. Participation of the phases rich in aluminum, namely Fe2Al5 and FeAl2, as well as oxide phases, leads to the formation of microcracks. As a result, this is conducive to the diffusion of toxic ingredients in aggressive environment of Na2SO4 molten salt under the conditions of high-temperature oxidation at 850 °C in the span of 45 h.
Generally, among the multi-phase corrosion products formed on the surface of the FeAl (HVOF) coatings at the temperature of 850 °C, the dominant oxide is α-Al2O3, alongside other oxides (i.e., Fe2O3). The aluminum in the Fe-Al coatings is selectively oxidized and forms a stable α-Al2O3 oxide on the surface of the coatings. However, it is then subject to degradation as a result of several structural defects and different thermal expansion coefficients, as compared to the Fe-Al-type phases, especially in the case of the FexAly coating.
Acknowledgments
The research leading to these results has received funding from the People Programme (Marie Curie Actions) of the European Union's Seventh Framework Programme (FP7/2007-2013) under REA Grant Agreement No. 600388 (TECNIOspring programme), and from the Agency for Business Competitiveness of the Government of Catalonia, ACCIÓ. The authors wish to thank Dr. D. Zasada and M.Sc. Eng. D. Marczak from the Department of Advanced Materials and Technologies, Military University of Technology, for their help in the experimental work, as well as Prof. L. Swadźba for enabling the study of hot corrosion.
References
S. Swaminathan, S.-M. Hong, M. Kumar, W.-S. Jung, D.-I. Kim, H. Singh, and I.-S. Choi, Microstructural Evolution and High Temperature Oxidation Characteristics of Cold Sprayed Ni-20Cr Nanostructured Alloy Coating, Surf. Coat. Technol., 2019, 362, p 333-344CrossRefGoogle Scholar
H. Singh, M. Kaur, and S. Prakash, High-Temperature Exposure Studies of HVOF-Sprayed Cr3C2-25(NiCr)/(WC-Co) Coating, J. Therm. Spray Technol., 2016, 26(6), p 1192-1207CrossRefGoogle Scholar
N. Kaur, M. Kumar, S.K. Sharma, D. Young Kim, S. Kumar, N.M. Chavan, S.V. Joshi, N. Singh, and H. Singh, Study of Mechanical Properties and High Temperature Oxidation Behavior of a Novel Cold-Spray Ni-20Cr Coating on Boiler Steels, Appl. Surf. Sci., 2015, 328, p 13-25CrossRefGoogle Scholar
H. Singh, D. Puri, and S. Prakash, An Overview of Na2SO4 and/or V2O5 Induced Hot Corrosion of Fe- and Ni-Based Superalloys, Rev. Adv. Mater. Sci., 2007, 16(1-2), p 27-50Google Scholar
P. Audigié, V. Encinas-Sánchez, M. Juez-Lorenzo, S. Rodríguezo, M. Gutiérrez, F.J. Pérez, and A. Agüero, High Temperature Molten Salt Corrosion Behavior of Aluminide and Nickel-Aluminide Coatings for Heat Storage in Concentrated Solar Power Plants, Surf. Coat. Technol., 2018, 349, p 1148-1157CrossRefGoogle Scholar
T.L. Talako, M.S. Yakovleva, E.A. Astakhov, and A.I. Letsko, Structure and Properties of Detonation Gun Sprayed Coatings from the Synthesized FeAlSi/Al2O3 Powder, Surf. Coat. Technol., 2018, 353, p 93-104CrossRefGoogle Scholar
H.S. Grewal, S. Bhandari, and H. Singh, Parametric Study of Slurry-Erosion of Hydroturbine Steels with and Without Detonation Gun Spray Coatings Using Taguchi Technique, Metall. Mater. Trans. A, 2012, 43A, p 3387-3401CrossRefGoogle Scholar
R.L. Fleischer, D.M. Dimiduk, and H.A. Lipsitt, Intermetallic Compounds for Strong High-Temperature Materials: Status and Potential, Annu. Rev. Mater. Sci., 1989, 19, p 231-253CrossRefGoogle Scholar
S.C. Deevi, V.K. Sikka, and C.T. Liu, Processing, Properties and Applications of Nickel and Iron Aluminides, Prog. Mater Sci., 1997, 42, p 177-192CrossRefGoogle Scholar
Y. Shi and D.B. Lee, Corrosion of Fe3Al-4Cr Alloys at 1000 C in N2-0.1%H2S Gas, Key Eng. Mater., 2018, 765, p 173-177CrossRefGoogle Scholar
C. Shen, K.-D. Liss, Z. Pan, Z. Wang, X. Li, and H. Li, Thermal Cycling of Fe3Al Based Iron Aluminide During the Wire-Arc Additive Manufacturing Process: An in Situ Neutron Diffraction Study, Intermetallics, 2018, 92, p 101-107CrossRefGoogle Scholar
W. Liu, Y. Wang, H. Ge, L. Li, Y. Ding, L. Meng, and X. Zhang, Microstructure Evolution and Corrosion Behavior of Fe-Al-Based Intermetallic Aluminide Coatings Under Acidic Condition, Trans. Nonferrous Met. Soc. China, 2018, 28, p 2028-2043CrossRefGoogle Scholar
S.C. Deevi and V.K. Sikka, Nickel and Iron Aluminides: An Overview on Properties, Processing, and Applications, Intermetallics, 1996, 4, p 357-375CrossRefGoogle Scholar
D.G. Morris and M.A. Muñoz-Morris, Intermetallics: Past, Present and Future, Rev. Metal., 2005, 41, p 498-501CrossRefGoogle Scholar
A. Lasalmonie, Intermetallics: Why is it So Difficult to Introduce Them in Gas Turbine Engines?, Intermetallics, 2006, 14, p 1123-1129CrossRefGoogle Scholar
V.K. Sikka, Intermetallic-Based High-Temperature Materials, ORNL/CP-101117 (1999), 23 ppGoogle Scholar
C. Xiao and W. Chen, Sulfidation Resistance of CeO2 Modified HVOF Sprayed FeAl Coatings at 700°C, Surf. Coat. Technol., 2006, 201, p 3625-3632CrossRefGoogle Scholar
O.L. Arenas, J. Porcayo-Calderon, V.M. Salinas-Bravo, A. Martinez-Villafane, and J.G. Gonzalez-Rodriguez, Effect of Boron on the Hot Corrosion Resistance of Sprayed Fe40Al Intermetallics, High Temp. Mater. Proc., 2005, 242, p 93-100Google Scholar
M.A. Espinosa, G. Carbajal De la Torre, J. Porcayo-Calderon, A. Martinez-Villafañe, J.G. Chacon-Nava, M. Casales, and J.G. Gonzalez-Rodriguez, Corrosion of Atomized Fe40Al Based Intermetallics in Molten Na2SO4, Mater. Corrosion, 2003, 54, p 304-310CrossRefGoogle Scholar
J.G. Gonzalez-Rodriguez, Μ. Salazar Luna-Ramirez, J. Porcayo-Calderon, G. Rosas, and A. Martinez-Villfane, Effect of Li, Ce and Ni on the Corrosion Resistance of Fe3Al in Molten Na2So4 and NaVO3, High Temp. Mater. Proc., 2004, 233, p 17-183Google Scholar
M. Amaya, M.A. Espinosa-Medina, J. Porcayo-Calderon, L. Martinez, and J.G. Gonzalez-Rodriguez, High Temperature Corrosion Performance of FeAl Intermetallic Alloys in Molten Salts, Mater. Sci. Eng. A, 2003, 349, p 12-19CrossRefGoogle Scholar
L. Martinez, M. Amaya, J. Porcayo-Calderon, and E.J. Lavernia, High-Temperature Electrochemical Testing of Spray Atomized and Deposited Iron Aluminides Alloyed with Boron and Reinforced with Alumina Particulate, Mater. Sci. Eng. A, 1998, 258, p 306-312CrossRefGoogle Scholar
J.G. Gonzalez-Rodrıguez, A. Luna-Ramirez, M. Salazar, J. Porcayo-Calderon, G. Rosas, and A. Martinez-Villafane, Molten Salt Corrosion Resistance of FeAl Alloy with Additions of Li, Ce and Ni, Mater. Sci. Eng. A, 2005, 399, p 344-350CrossRefGoogle Scholar
M.A. Espinosa-Medina, G. Carbajal-De la Torre, H.B. Liu, A. Martínez-Villafane, and J.G. González-Rodriguez, Hot Corrosion Behaviour of Fe-Al Based Intermetallic in Molten NaVO3 Salt, Corros. Sci., 2009, 51, p 1420-1427CrossRefGoogle Scholar
D.K. Goyal, H. Singh, H. Kumar, and V. Sahni, Slurry Erosive Wear Evaluation of HVOF-Spray Cr2O3 Coating on Some Turbine Steels, J. Therm. Spray Technol., 2012, 21(5), p 838-851CrossRefGoogle Scholar
C. Senderowski and Z. Bojar, Gas Detonation Spray Forming of Fe-Al Coatings in the Presence of Interlayer, Surf. Coat. Technol., 2008, 202, p 3538-3548CrossRefGoogle Scholar
J.M. Guilemany, N. Cinca, S. Dosta, and C.R.C. Lima, High-Temperature Oxidation of Fe40Al Coatings Obtained by HVOF Thermal Spray, Intermetallics, 2007, 15, p 1384-1394CrossRefGoogle Scholar
N. Cinca and J.M. Guilemany, Thermal Spraying of Transition Metal Aluminides: An Overview, Intermetallics, 2012, 24, p 60-72CrossRefGoogle Scholar
C. Senderowski, Z. Bojar, W. Wołczyński, and A. Pawłowski, Microstructure Characterization of D-Gun Sprayed Fe-Al Intermetallic Coatings, Intermetallics, 2010, 18, p 1405-1409CrossRefGoogle Scholar
C. Senderowski, M. Chodala, and Z. Bojar, Corrosion Behavior of Detonation Gun Sprayed Fe-Al Type Intermetallic Coating, Materials, 2015, 8, p 1108-1123CrossRefGoogle Scholar
Y. Tsunekawa, M. Okumiya, K. Gotoh, T. Nakamura, and I. Niimi, Synthesis of Iron Aluminide Matrix In Situ Composites from Elemental Powders by Reactive Low Pressure Plasma Spraying, Mater. Sci. Eng. A, 1992, 159(2), p 253-259CrossRefGoogle Scholar
S. Wei, B. Xu, H. Wang, G. Jin, and H. Lv, Comparison on Corrosion-Resistance Performance of Electro-Thermal Explosion Plasma Spraying FeAl-Based Coatings, Surf. Coat. Technol., 2007, 201(9-11), p 5294-5297CrossRefGoogle Scholar
T. Grosdidier, A. Tidu, and H.-L. Liao, Nanocrystalline Fe-40Al Coating Processed by Thermal Spraying of Milled Powder, Scripta Mater., 2001, 44(3), p 387-393CrossRefGoogle Scholar
G. Ji, J. Morniroli, and T. Grosidider, Nanostructures in Thermal Spray Coatings, Scripta Mater., 2003, 48, p 1599-1604CrossRefGoogle Scholar
G. Ji, O. Elkedim, and T. Grosdidier, Deposition and Corrosion Resistance of HVOF Sprayed Nanocrystalline Iron Aluminide Coatings, Surf. Coat. Technol., 2005, 190(2), p 406-416CrossRefGoogle Scholar
B. Szczucka-Lasota, B. Formanek, and A. Hernas, Growth of Corrosion Products on Thermally Sprayed Coatings with Intermetallic Phases in Aggressive Environments, J. Mater. Process. Technol., 2005, 164-165, p 930-934CrossRefGoogle Scholar
B. Szczucka-Lasota, B. Formanek, A. Hernas, and K. Szymański, Oxidation Models of the Growth of Corrosion Products on the Intermetallic Coatings Strengthened by a Fine Dispersive Al2O3, J. Mater. Process. Technol., 2005, 164-165, p 935-939CrossRefGoogle Scholar
Y. Wang and M. Yan, The effect of CeO2 on the Erosion and Abrasive Wear of Thermal Sprayed FeAl Intermetallic Alloy Coatings, Wear, 2006, 261(11-12), p 1201-1207CrossRefGoogle Scholar
T. Grosdidier, G. Ji, F. Bernard, E. Gaffet, Z.A. Munir, and S. Launois, Synthesis of Bulk FeAl Nanostructured Materials by HVOF Spray Forming and Spark Plasma Sintering, Intermetallics, 2006, 14, p 1208-1213CrossRefGoogle Scholar
G. Ji, T. Grosdidier, and J.-P. Morniroli, Microstructure of a High-Velocity Oxy-Fuel Thermal-Sprayed Nanostructured Coating Obtained from Milled Powder, Metall. Mater. Trans. A, 2007, 38(10), p 2455-2463CrossRefGoogle Scholar
G. Ji, T. Grosdidier, N. Bozzolo, and S. Launois, The Mechanisms of Microstructure Formation in a Nanostructured Oxide Dispersion Strengthened FeAl Alloy Obtained by Spark Plasma Sintering, Intermetallics, 2007, 15, p 108-118CrossRefGoogle Scholar
G. Ji, T. Grosdidier, F. Bernard, S. Paris, E. Gaffet, and S. Launois, Bulk FeAl Nanostructured Materials Obtained by Spray Forming and Spark Plasma Sintering, J. Alloy. Compd., 2007, 434-435, p 358-361CrossRefGoogle Scholar
J.M. Guilemany, C.R.C. Lima, N. Cinca, and J.R. Miguel, Studies of Fe-40Al Coatings Obtained by High Velocity Oxy-Fuel, Surf. Coat. Technol., 2006, 201, p 2072-2079CrossRefGoogle Scholar
J.M. Guilemany, N. Cinca, J. Fernández, and S. Sampath, Erosion, Abrasive, and Friction Wear Behavior of Iron Aluminide Coatings Sprayed by HVOF, J. Therm. Spray Technol., 2008, 17(5-6), p 762-773CrossRefGoogle Scholar
J.M. Guilemany, N. Cinca, S. Dosta, and I.G. Cano, FeAl and NbAl3 Intermetallic-HVOF Coatings: Structure and Properties, J. Therm. Spray Technol., 2009, 18(4), p 536-545CrossRefGoogle Scholar
N. Cinca, S. Dosta, and J.M. Guilemany, Nanoscale Characterization of FeAl-HVOF Coatings, Surf. Coat. Technol., 2010, 205, p 967-973CrossRefGoogle Scholar
J. Xiang, X. Zhu, G. Chen, Z. Duan, Y. Lin, and Y. Liu, Oxidation Behavior of Fe40Al-xWC Composite Coatings Obtained by High-Velocity Oxygen Fuel Thermal Spray, Trans. Nonferrous Met. Soc. China, 2009, 19, p 1545-1550CrossRefGoogle Scholar
L. Singh, V. Chawla, and J.S. Grewal, A Review of Detonation Gun Sprayed Coatings, J. Miner. Mater. Charact. Eng., 2012, 11(3), p 243-265Google Scholar
C. Senderowski, Z. Bojar, W. Wołczyński, G. Roy, and T. Czujko, Residual Stresses Determined by the Modified Sachs Method Within a Gas Detonation Sprayed Coatings of the Fe-Al Intermetallic, Arch. Metall. Mater., 2007, 52(4), p 569-578Google Scholar
C. Senderowski and Z. Bojar, Influence of Detonation Gun Spraying Conditions on the Quality of Fe-Al Intermetallic Protective Coatings in the Presence of NiAl and NiCr Interlayers, J. Therm. Spray Technol., 2009, 18(3), p 435-447CrossRefGoogle Scholar
A. Pawłowski, T. Czeppe, Ł. Major, and C. Senderowski, Structure Morphology of Fe-Al Coating Detonation Sprayed Onto Carbon Steel Substrate, Arch. Metall. Mater., 2009, 54(3), p 783-788Google Scholar
W. Wołczyński, C. Senderowski, J. Morgiel, and G. Garzeł, D-Gun Sprayed Fe-Al Single Particle Solidification, Arch. Metall. Mater., 2014, 59(1), p 209-217Google Scholar
C. Senderowski, A. Pawłowski, Z. Bojar, W. Wołczyński, M. Faryna, J. Morgiel, and Ł. Major, TEM Microstructure of Fe-Al Coatings Detonation Sprayed Onto Steel Substrate, Arch. Metall. Mater., 2010, 55(2), p 373-381Google Scholar
A. Pawłowski, C. Senderowski, Z. Bojar, and M. Faryna, Detonation Deposited Fe-Al Coatings, Part I: The Interlayers Ni(Al) and Ni(Cr) and Fe-Al Coating Detonation Sprayed onto Substrate of 045 Steel, Arch. Metall. Mater., 2010, 55(4), p 1061-1071Google Scholar
A. Pawłowski, C. Senderowski, W. Wołczyński, and J. Morgiel, Detonation Deposited Fe-Al Coatings, Part II: Transmission Electron Microscopy of Interlayers and Fe-Al Intermetallic Coating Detonation Sprayed onto the 045 Steel Substrate, Arch. Metall. Mater., 2011, 56(1), p 71-79CrossRefGoogle Scholar
C. Senderowski, Nanocomposite Fe-Al Intermetallic Coating Obtained by Gas Detonation Spraying of Milled Self-Decomposing Powder, J. Therm. Spray Technol., 2014, 237, p 1124-1134CrossRefGoogle Scholar
C. Senderowski, D. Zasada, T. Durejko, and Z. Bojar, Characterization of As-Synthesized and Mechanically Milled Fe-Al Powders Produced by the Self-Disintegration Method, Powder Technol., 2014, 263, p 96-103CrossRefGoogle Scholar
B. Fikus, C. Senderowski, and A. Panas, Modeling of Dynamics and Thermal History of Fe40Al Intermetallic Powder Particles Under Gas Detonation Spraying Using Propane-Air Mixture, J. Therm. Spray Technol., 2019, 28, p 346-358CrossRefGoogle Scholar
A.J. Panas, C. Senderowski, and B. Fikus, Thermophysical Properties of Multiphase Fe-Al Intermetallic-Oxide Ceramic Coatings Deposited by Gas Detonation Spraying, Thermochim. Acta, 2019, 676, p 164-171CrossRefGoogle Scholar
B. Xu, Z. Zhu, S. Ma, W. Zhang, and W. Liu, Sliding Wear Behavior of Fe-Al and Fe-Al/WC Coatings Prepared by High Velocity Arc Spraying, Wear, 2004, 257, p 1089-1095CrossRefGoogle Scholar
T.C. Totemeier, R.N. Wright, Coating-microstructure-property-performance issues, in 19th Annual Conference on Fossil Energy Materials, INL/CON-05-00416, Preprint (2005), 9 ppGoogle Scholar
A. Magnee, E. Offergeld, M. Leroy, A. Lefort, Fe-Al intermetallic coating applications to thermal energy conversion advanced systems, in Proceedings of the 15th Thermal Spray Conference, Nice (France), vol. 2 (1998), pp. 1091-1096Google Scholar
B.S. Sidhu and S. Prakash, Evaluation of the Corrosion Behaviour of Plasma-Sprayed Ni3Al Coatings on Steel in Oxidation and Molten Salt Environments at 900°C, Surf. Coat. Technol., 2003, 166, p 89-100CrossRefGoogle Scholar
G.D. Girolamo, C. Blasi, M. Schioppa, and L. Tapfer, Structure and Thermal Properties of Heat Treated Plasma Sprayed Ceria-Yttria Co-stabilized Zirconia Coatings, Ceram. Int., 2010, 36, p 961-968CrossRefGoogle Scholar
R.L. Jones, Some Aspects of the Hot Corrosion of Thermal Barrier Coatings, J. Therm. Spray Technol., 1997, 61, p 77-84CrossRefGoogle Scholar
X. Chen, Y. Zhao, L. Gu, B. Zou, Y. Wang, and X. Cao, Hot Corrosion Behavior of Plasma Sprayed YSZ/LaMgAl11O19 Composite Coatings in Molten Sulfate-Vanadate Salt, Corros. Sci., 2011, 53, p 2335-2343CrossRefGoogle Scholar
R. Ahmadi-Pidani, R. Shoja-Razavi, R. Mozafarinia, and H. Jamali, Evaluation of Hot Corrosion Behavior of Plasma Sprayed Ceria and Yttria Stabilized Zirconia Thermal Barrier Coatings in the Presence of Na2SO4-V2O5 Molten Salt, Ceram. Int., 2012, 38, p 6613-6620CrossRefGoogle Scholar
X.H. Zhong, Y.M. Wang, Z.H. Xu, Y.F. Zhang, J.F. Zhang, and X.Q. Cao, Hot-Corrosion Behaviors of Overlay-Clad Yttria-Stabilized Zirconia Coatings in Contact with Vanadate-Sulfate Salts, J. Eur. Ceram. Soc., 2010, 30, p 1401-1408CrossRefGoogle Scholar
T.S. Sidhu, R.D. Agrawal, and S. Prakash, Hot Corrosion of Some Superalloys and Role of High-Velocity Oxy-Fuel Spray Coatings—A Review, Surf. Coat. Technol., 2005, 198, p 441-446CrossRefGoogle Scholar
T.C. Totemeier, R.N. Wright, and W.D. Swank, Microstructure and Stresses in HVOF Sprayed Iron Aluminide Coatings, J. Therm. Spray Technol., 2002, 113, p 400-408CrossRefGoogle Scholar
T.C. Totemeier, R.N. Wright, and W.D. Swank, FeAl and Mo-Si-B Intermetallic Coatings Prepared by Thermal Spraying, Intermetallics, 2004, 12, p 1335-1344CrossRefGoogle Scholar
G. Ji, J.P. Morniroli, A. Tidu, C. Coddet, and T. Grosdidier, Surface Engineering by Thermal Spraying Nanocrystalline Coatings: X-Ray and TEM Characterisation of As-Deposited Iron Aluminide Structure, J. Phys. IV France, 2002, 12(6), p 509-518CrossRefGoogle Scholar
G. Ji, T. Grosdidier, H.L. Liao, J.-P. Morniroli, and C. Coddet, Spray Forming Thick Nanostructured and Microstructured FeAl Deposits, Intermetallics, 2005, 13, p 596-607CrossRefGoogle Scholar
T. Grosdidier, G. Ji, and N. Bozzolo, Hardness, Thermal Stability and Yttrium Distribution in Nanostructured Deposits Obtained by Thermal Spraying from Milled—Y2O3 Reinforced—or Atomized FeAl Powders, Intermetallics, 2006, 14(7), p 715-721CrossRefGoogle Scholar
M.A. Uusitalo, P.M.J. Vuoristo, and T.A. Mantyla, High Temperature Corrosion of Coatings and Boiler Steels in Reducing Chlorine-Containing Atmosphere, Surf. Coat. Technol., 2002, 161, p 275-285CrossRefGoogle Scholar
S. Kamal, R. Jayaganthan, S. Prakash, and S. Kumar, Hot Corrosion Behavior of Detonation Gun Sprayed Cr3C2-NiCr Coatings on Ni and Fe-Based Superalloys in Na2SO4-60% V2O5 Environment at 900 °C, J. Alloys Compd., 2008, 463, p 358-372CrossRefGoogle Scholar
A.Y. Mosbah, D. Wexler, and A. Calka, Abrasive Wear of WC-FeAl Composites, Wear, 2005, 258, p 1337-1341CrossRefGoogle Scholar
M. Ahmadian, D. Wexler, T. Chandra, and A. Calka, Abrasive Wear of WCeFeAl-B and WCeNi3Al-B Composites, Int. J. Refract. Met. Hard Mater., 2005, 23, p 155-159CrossRefGoogle Scholar
B.-H. Tian, P. Liu, B.-S. Xu, S.-N. Ma, W. Zhang, and S.-Z. Li, Tribological Properties of Thermal Spray Formed Fe3Al-Based Coatings at Elevated Temperature, Chin. J. Nonferrous Met., 2003, 13, p 978-982Google Scholar
M. Sozańska, B. Kościelniak, and L. Swadźba, Evaluation of Hot Corrosion Resistance of Directionally Solidified Nickel-Based Superalloy, Solid State Phenom., 2015, 227, p 337-340CrossRefGoogle Scholar
K. Katiki, S. Yadlapati, S.N.S. Chidepudi, and N. Arivazhagan, Performance of Plasma Spray Coatings on Inconel 625 in Air Oxidation and Molten Salt Environment at 800°C, Int. J. Chem. Teach. Res., 2014, 65, p 2744-2749Google Scholar
C. Senderowski, Iron-Aluminium Intermetallic Coatings Synthesized by Supersonic Stream Metallization, Copyright by BEL Studio Sp. Z o.o., Warszawa—2015 (2015). ISBN: 978-83-7798-227-3. 280 pp (in polish) Google Scholar
J.M. Guilemany, N. Cinca, S. Dosta, Oxidation Behavior of HVOF-Sprayed ODS-Fe40Al Coatings at 900°C, in Proceedings of Thermal Spray (Global Coating Solutions, 2007)Google Scholar
K. Natesan, Corrosion Performance of Iron Aluminides in Mixed-Oxidant Environments, Mater. Sci. Eng., 1998, 2581-2, p 126-134CrossRefGoogle Scholar
A. Mignone, S. Frangini, A. La Barbera, and O. Tassa, High Temperature Corrosion of B2 Iron Aluminides, Corros. Sci., 1998, 408, p 1331-1347CrossRefGoogle Scholar
Metals Handbook, High Temperature Corrosion in Molten Salts, Vol 13, 9th ed., ASM International, Russell Township, 1987, p 50-55Google Scholar
J.C. Hallet and K.H. Stern, Vaporization and Decomposition of Na2SO4. Thermodynamics and Kinetics, J. Phys. Chem., 1980, 84, p 1699-1704CrossRefGoogle Scholar
M. Amaya, M.A. Espinosa-Medina, J. Porcayo-Calderon, L. Martinez, and J.G. Gonzalez-Rodriguez, High Temperature Corrosion Performance of FeAl Intermetallic Alloys in Molten Salts, Mater. Sci. Eng. A, 2003, 349, p 12-19CrossRefGoogle Scholar
1. Department of Materials Technology and Machinery, University of Warmia and Mazury, Olsztyn, Poland
2. Dpt. Ciència dels Materials i Enginyeria Metallúrgica, Centre de Projecció Tèrmica (CPT), Universitat de Barcelona, Barcelona, Spain
Senderowski, C., Cinca, N., Dosta, S. et al. J Therm Spray Tech (2019). https://doi.org/10.1007/s11666-019-00886-w
List of things named after Élie Cartan
These are things named after Élie Cartan (9 April 1869 – 6 May 1951), a French mathematician.
Mathematics and physics
• Cartan calculus
• Cartan connection, Cartan connection applications
• Cartan's criterion
• Cartan decomposition
• Cartan's equivalence method
• Cartan formalism (physics)
• Cartan involution
• Cartan's magic formula
• Cartan relations
• Cartan map
• Cartan matrix
• Cartan pair
• Cartan subalgebra
• Cartan subgroup
• Cartan's method of moving frames
• Cartan's theorem, a name for the closed-subgroup theorem
• Cartan's theorem, a name for the theorem on highest weights
• Cartan's theorem, a name for Lie's third theorem
• Einstein–Cartan theory
• Einstein–Cartan–Evans theory
• Cartan–Ambrose–Hicks theorem
• Cartan–Brauer–Hua theorem
• Cartan–Dieudonné theorem
• Cartan–Hadamard manifold
• Cartan–Hadamard theorem
• Cartan–Iwahori decomposition
• Cartan–Iwasawa–Malcev theorem
• Cartan–Kähler theorem
• Cartan–Karlhede algorithm
• Cartan–Weyl theory
• Cartan–Weyl basis
• Cartan–Killing form
• Cartan–Kuranishi prolongation theorem
• CAT(k) space
• Maurer–Cartan form
• Newton–Cartan theory
• Stokes–Cartan theorem, the generalized fundamental theorem of calculus, proved in its general form by Cartan; also known as Stokes' theorem, although Stokes neither formulated nor proved it.
Other
• Cartan (crater)
• Élie Cartan Prize
Note that some of these are named after Henri Cartan, a son of Élie Cartan; e.g.,
• Cartan's lemma (potential theory)
• Cartan seminar
• Cartan's theorems A and B
• Cartan–Eilenberg resolution
Neural implementation of computational mechanisms underlying the continuous trade-off between cooperation and competition
M. A. Pisauro (ORCID: 0000-0001-6087-2597), E. F. Fouragnan (ORCID: 0000-0003-1485-0332), D. H. Arabadzhiyska (ORCID: 0000-0001-9568-5660), M. A. J. Apps & M. G. Philiastides (ORCID: 0000-0002-7683-3506)
Nature Communications volume 13, Article number: 6873 (2022)
Social interactions evolve continuously. Sometimes we cooperate, sometimes we compete, while at other times we strategically position ourselves somewhere in between to account for the ever-changing social contexts around us. Research on social interactions often focuses on a binary dichotomy between competition and cooperation, ignoring people's evolving shifts along a continuum. Here, we develop an economic game – the Space Dilemma – where two players change their degree of cooperativeness over time in cooperative and competitive contexts. Using computational modelling we show how social contexts bias choices and characterise how inferences about others' intentions modulate cooperativeness. Consistent with the modelling predictions, brain regions previously linked to social cognition, including the temporo-parietal junction, dorso-medial prefrontal cortex and the anterior cingulate gyrus, encode social prediction errors and context-dependent signals, correlating with shifts along a cooperation-competition continuum. These results provide a comprehensive account of the computational and neural mechanisms underlying the continuous trade-off between cooperation and competition.
In social interactions many species, including humans, often behave competitively – acts aimed at obtaining a resource at the expense of another benefitting—or cooperatively—acts aimed to benefit both self and other. Although people and animals commonly alternate between cooperative and competitive behaviours for access to resources, territories, and status1,2,3,4,5,6,7 we are still lacking an integrated understanding of how the brain controls and arbitrates over the continuous trade-off between cooperation and competition, and more specifically, which neural mechanisms and computational principles are involved.
Classically, cooperation and competition have been treated as alternative social orientations, whereby one acts either cooperatively or competitively at any point in time8. In both one-shot and multi-rounds economic games, such predispositions are typically measured with social dilemmas requiring binary choices where people either cooperate or compete with a partner9,10,11. Yet, in the real world, behaviour is not so dichotomised. The common descriptions of people as being "fully cooperative" or "highly competitive" highlight that these behaviours are considered along a spectrum, and what may matter for social behaviour is one's degree of cooperativeness or competitiveness. But, how do people decide upon their degree of cooperation or competition? And how do they adjust it over time?
Broadly speaking, previous research using games with binary choices suggests that cooperativeness is shaped by three factors: (i) the environment, where the availability of resources and their distribution shape choices12, encouraging cooperation in rich and fair environments13 while favouring competition when resources are scarce and unevenly distributed14,15, (ii) personal predispositions and inherent social biases shaped by psychological traits16,17,18 and (iii) how dyads interact with each other, with cooperation favoured by reciprocity19 and the evolution of trust in repeated interactions20,21 and the spread of reputational information within groups22,23,24.
In economic games, environments are manipulated by the "payoff matrices" where changing the rewards available to each member of a dyad within an interaction influences behaviour, with people making more choices to act competitively or cooperatively when the payoff matrices favour it25. However, although economic theories assume people will eventually settle upon an optimal equilibrium, this is not always the case26,27,28. People have tendencies and psychological traits that lead to biases towards being more cooperative or more competitive in general, regardless of the payoff matrix.
Moreover, people's behaviour is determined by the psychological processes engaged when monitoring the behaviours of others. We monitor others' behaviour, and use mentalizing processes to infer their intentions, and adapt our cooperativeness accordingly29. At the core of this mechanism is the rewarding property of reciprocity in repeated interactions19,30, which emerges through social learning driven by social prediction error signals31. However, to date, a formal account that unifies these features together and thus predicts someone's degree of competitiveness has not been forthcoming.
Research is increasingly showing that people's biases in social behaviour, and continuously updating inferences about others, can be captured by computational models, including those based on Bayesian principles32,33,34,35,36. In such accounts, model parameters can capture biases and people's expectations of others' behaviours which are updated by prediction errors (the surprise associated with the discrepancy between a prediction about another's action and their actual behaviour). Such Bayesian models have captured how people respond to the changing trustworthiness of other's advice and to behaviour in iterative economic games where people make binary choices32,37. Here, we propose to use Bayesian models to account for how people move along a cooperation-competition continuum based on their expectations of reciprocity of the co-player, their inherent social bias, and the incentives of the social environment.
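To make this concrete, the following deliberately minimal sketch (ours, not the model fitted in this paper; the function names, the delta-rule update standing in for a full Bayesian update, and all parameter values are illustrative assumptions) shows how a context bias and a prediction-error-driven estimate of the partner's cooperativeness could jointly set one's own position on the continuum:

# Minimal sketch: track the partner's expected cooperativeness on a 0-1 continuum
# and set one's own cooperativeness as a mix of a context bias and reciprocity.
def update_expectation(expected_coop, observed_coop, learning_rate=0.3):
    """Delta-rule stand-in for a Bayesian update of the partner's cooperativeness."""
    social_prediction_error = observed_coop - expected_coop  # surprise about the partner
    return expected_coop + learning_rate * social_prediction_error

def choose_cooperativeness(expected_coop, context_bias, reciprocity_weight=0.7):
    """Own position on the continuum (0 = fully competitive, 1 = fully cooperative)."""
    choice = reciprocity_weight * expected_coop + (1 - reciprocity_weight) * context_bias
    return min(1.0, max(0.0, choice))

# Example: a cooperative context (bias 0.8) with a partner who defects on one trial
expectation = 0.5
for observed in [0.9, 0.8, 0.2, 0.7]:   # partner's observed cooperativeness per trial
    my_choice = choose_cooperativeness(expectation, context_bias=0.8)
    print(f"expected partner coop {expectation:.2f} -> my choice {my_choice:.2f}; partner showed {observed}")
    expectation = update_expectation(expectation, observed)

In a full Bayesian treatment the point estimate would be replaced by a posterior whose uncertainty also scales the update, but the qualitative pattern, drifting towards competition after a partner's defection and back towards the context default otherwise, is the same.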
Strikingly, regions of the brain that have been implicated in representing cooperative and competitive behaviours have been shown to do so by processing social prediction errors that lead to an update in whether people behave cooperatively or competitively. In particular, portions of the temporo-parietal junction (TPJ), medial prefrontal cortex (mPFC), anterior cingulate gyrus (ACCg), and portions of the anterior cingulate and paracingulate sulci corresponding to areas 24, 32 and 8 are all engaged when processing the competitive or cooperative behaviours of others38,39,40,41,42. The same regions have also been shown to signal prediction errors when monitoring others' behaviours, and in tasks requiring inferences to be made about the actions of others32,37,43,44,45,46,47. However, how these regions process information about the social context and use Bayesian signals relating to the cooperativeness of others to influence one's own degree of cooperation is poorly understood.
To test the notion that people behave on a continuum between cooperation and competition, we designed a new social game called the Space Dilemma. This game capitalises on a well-known economic principle controlling the spatial location of competitors in duopoly48 and generalises to a continuum the trade-off between cooperation and competition which is dichotomised in the Prisoner's Dilemma49,50. In the game, two players decide whether and how much to compete or cooperate with each other, by positioning themselves in different locations of a continuous space, whereby each location is rewarded differently on a trial-by-trial basis. These decisions take place over multiple trials in three blocks with payoff matrices creating different social contexts that encouraged different degrees of cooperation and competition: (i) cooperative, where both players receive an equal amount of the reward, irrespective of who is best positioned; (ii) competitive, where the best positioned player wins a reward while the other player incurs a proportional loss; and (iii) intermediate, where one player receives the reward and the other receives nothing. In each of these conditions, the best strategy would be to cooperate but with different, increasing risks associated with the defection of the co-player. Thus, to maximise rewards, players must consider what the optimal location is, but also infer the intentions of the other player, predict their level of cooperativeness and adapt one's location accordingly.
Here, we tested 27 pairs of participants playing the Space Dilemma whilst one in each pair underwent fMRI. We predicted that people would adapt their locations according to a general bias in cooperativeness, a shift in competitiveness across social contexts, and trial-to-trial shifts in cooperativeness depending on the actions of the other player. We hypothesised that sub-regions of the TPJ and mPFC linked to processing information about others would signal (i) the degree of bias one has across the social contexts, (ii) prediction errors relating to the surprise associated with the other player's competitiveness and (iii) the degree to which one updates one's behaviour due to the other player's competitiveness.
We show that people's behaviours are best predicted quantitatively by a Bayesian learner informed about the risk of losing and winning in each context, which also constantly updated behaviour based on the actions of the other player. We show that surprise signals are coded within clusters in the TPJ, in an unsigned manner in the posterior TPJ, but in a signed manner that correlated with updating subsequent behaviour in the anterior TPJ. In addition, distinct regions in mPFC, the ACCg and in the paracingulate sulcus carried information about participants' increases in cooperativeness and the degree to which they used trial-by-trial information about the other player, suggesting important roles in shifting behaviour along the continuum away from the default behaviour induced by the social context. These results provide a comprehensive characterisation of how the brain monitors and controls the continuous trade-off between cooperation and competition.
The space dilemma
Pairs of participants, one inside the fMRI scanner and one in an adjacent room, played the game. All participants were told to imagine that they were foraging for food in a territory and were asked to make a prediction about the position of the food in a linear space (a straight line that represents the territory, Fig. 1a left panel). They were told that the target "food" would appear somewhere in the territory as its position was randomly sampled from a uniform distribution. They were then presented with a bar moving across the space (representing their location) and were required to commit to a certain location by pressing a button while the bar was moving in the linear space. This location would signal their prediction about the target position. Each player made her/his predictions and watched the other player's response. After the two players responded, the target appeared. On any trial, the participant who made the best prediction (closer to the target) won and got a reward which depended on the distance to the target: the lower the distance, the higher the reward (Fig. 1a).
Fig. 1: Schematic representation of the Space Dilemma.
a Participants first positioned themselves in the space, hidden from the other player. They were then presented with a bar moving across the space (representing their location) and were required to commit to a certain location by pressing a button while the bar was moving through it. The bar would take 4 s to reach the end of the space. Once they responded, the bar stopped at the chosen location and was shown for the remainder of the 4 s. After both counterparts positioned themselves, their respective positions were shown to each other for 1–1.5 s before the target appeared (left panel). The player closer to the target won the trial (three examples in right panel) as identified by the colour of the target. The reward obtained is inversely proportional to the distance to the target, and reflected by the size of the target square. b The average reward for each player depends on the position in the territory. In each panel, the colour intensity represents the average reward obtained playing that position over many trials. In individual settings (top panel), the best strategy to minimize the distance to the target and maximize rewards is to target the middle of the space. However, in the two-player space dilemma, as deployed here, multiple configurations exist. Fully cooperative behaviour involves both players positioning themselves in the midpoint of each hemifield, which minimizes the average distance to any possible location of the target, thus maximising gains (second panel from the top). As this strategy is not a Nash equilibrium, players may have the incentive to deviate from their half side and thus cover more territory (third panel from the top). As such, any positioning closer to the midpoint can be defined as more competitive behaviour. When both players are highly competitive they both target the midpoint, winning less on average (bottom panel).
As the target location is uniformly distributed across the space, if only one player were playing the game, the optimal location to minimise the distance from the target and therefore maximise the average reward is the midpoint (Fig. 1b top panel, supplementary Fig. 1). With two players, the average total reward is maximised when the players cooperate, by occupying the mid points of the two hemifields (Fig. 1b second panel, supplementary Fig. 1). However, one player might be tempted to occupy the midpoint, as this would maximise their own personal expected reward, at the expense of the other. As such, the closer a player gets to the midpoint, the more competitive his or her behaviour (i.e., less reciprocity towards the co-player's cooperation, Fig. 1b third panel). Crucially, this competitive behaviour would lower the total reward over all trials because when the target falls within that player's hemifield he/she has a higher probability of being further from it, thereby earning a smaller reward. Similarly, if both players choose to compete by trying to maximise their individual chance of winning by going for the midpoint, they would expect to obtain the same reward, albeit reduced compared to the optimal locations when cooperating (Fig. 1b fourth panel).
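To make the intuition behind these configurations concrete, the following minimal Monte Carlo sketch (not taken from the study's analysis code) estimates the average per-trial reward available to the dyad for a few illustrative position pairs, assuming a territory normalised to [0, 1] and the reward rule R = 1 − min(d) described in the Methods.

```python
import numpy as np

rng = np.random.default_rng(0)
targets = rng.uniform(0, 1, 100_000)   # target uniformly distributed over the territory

def mean_dyad_reward(p1, p2, targets):
    """Average per-trial reward available to the dyad, R = 1 - min(d1, d2)."""
    d1, d2 = np.abs(targets - p1), np.abs(targets - p2)
    return np.mean(1 - np.minimum(d1, d2))

print(mean_dyad_reward(0.25, 0.75, targets))   # full cooperation: ~0.875
print(mean_dyad_reward(0.50, 0.50, targets))   # full competition: ~0.75
print(mean_dyad_reward(0.50, 0.75, targets))   # one defector, one cooperator: ~0.83
```

Under these assumptions, the cooperative configuration maximises the dyad's average reward, whereas converging on the midpoint lowers it, as described above.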
We manipulated the social context by controlling the reward distributions (as determined by the α parameter, see Methods and Fig. 2a). We defined a cooperative context as one where participants shared the reward irrespective of the winner (α = 0.5, Fig. 2a), and a competitive context in which losing a trial is associated with an economic loss whilst the winner sees their reward boosted by the same amount (α = 2; Fig. 2a). An intermediate context was defined as one where the winner takes all the reward, while the loser in each trial received neither a benefit nor a loss (α = 1; Fig. 2a). To behave adaptively in the task, participants had to change their strategy according to both the co-player's response and the social context.
Fig. 2: Game structure and behavioural results.
a An α parameter determines the social context and thus the amount that each player receives on each trial. The experimental design contained three social contexts that were hypothesised to shift people's competitiveness. In all contexts, the position of the closest participant to the target determined the total reward won. In the first context (cooperation), this reward would be equally shared among both players. In the second (intermediate) context, in every trial the winner takes all the reward available. In the third context, the closest player to the target wins twice the reward while the loser loses the reward from their endowment. b, c The strategy adopted by most participants in the cooperative context was to cooperate, and in the competitive context, to compete. In the intermediate context, participants exhibited variable responses. Responses are presented as their joint position on the x-axis (b) or over time (c). Kernel densities are presented on the right of each plot. Mean (bold line) and standard error (shaded area) are displayed across participants. d Average deviation ΔP (change from previous position) in a trial as a function of the co-player's deviation in the previous trial. Each dot represents a participant. The co-player's deviations are binned into large and small increases in cooperation/competition. In all contexts there is a tendency to reciprocate the co-player's changes of behaviour in the next trial (tit-for-tat). This is particularly evident in the intermediate context, where participants were sensitive also to small increases in competition.
Whilst the best long-term strategy to increase the total reward for the dyad in each context, unknown to the participants, was to always cooperate (supplementary material and Supplementary Fig. 1), this was not always the optimal strategy for individual players (which also depends on the co-player's choices and is susceptible to end-game effects as the number of trials is finite) and the reward distribution favoured different levels of competition in different contexts. This is because while in the cooperative context there is no benefit in competing, as the reward is equally shared between players, in the competitive and intermediate contexts players have a temptation to win the trial, to avoid a loss and to boost their reward. The manipulation of the reward distribution increases the risk associated with losing by increasing the difference in reward between winner and loser (increasing α; Supplementary Fig. 1c). It is worth noting that in the competitive and intermediate contexts, the space dilemma is a probabilistic form of the Prisoner's Dilemma (see Supplementary Fig. 1 and methods). Each pair of participants played three blocks of 60 trials, one for each of the three contexts (cooperative, intermediate and competitive). Beyond the reward distribution shown at the start of each block of trials and the variability in players' behaviour, the three blocks of trials were visually identical, differing only in the underlying social contexts. This setup therefore allows us to compare a range of cooperative and competitive behaviours across different social contexts while controlling for the sensory-motor aspects of the decision.
Cooperativeness is shaped by the social context and the interactions within dyads
We hypothesized that participants would base their behaviour on (i) personal predispositions, (ii) the social context and (iii) the behaviour of the co-player. To demonstrate the effect of the social context, we averaged together the players' positions on different sides of the midpoint by computing the absolute distance from the closest edge, a measure of competitiveness. There was substantial variability in behaviour across all conditions, suggestive of widespread individual differences across participants (Supplementary Fig. 2). As expected, we found that the social context had a significant effect on both the average cooperativeness of players (β = −0.12, P < 0.001; fixed effect of condition in a linear mixed model predicting the average cooperativeness based on player and condition, see methods and Supplementary Fig 2a, b) and the absolute distance between players (β = −0.25, P < 0.001, fixed effect of condition in a linear mixed model predicting the average distance across players based on dyads and conditions, see methods and Supplementary Fig. 2c) suggesting that increasing the benefit of competing in a social context increased the players' competitiveness (reduced the distance from the midpoint) and reduced the distance between them in the space. This increase in competitiveness across contexts brought about a significant decrease in the reward collectively accumulated by the dyads (β = −2.41, P < 0.05, fixed effect of condition in a linear mixed model predicting the dyads' reward based on dyads and contexts, see methods and Supplementary Fig. 2d) but had no significant bearing on rewards accumulated by individual participants (β = −1.16, P = 0.26), consistent with the fact that competition is suboptimal for the dyad even in the competitive context while the effect on individual participants can be either positive or negative (Supplementary Fig 1 and supplementary results).
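For illustration, a linear mixed model of this kind could be specified as in the sketch below; the software, column names and synthetic data are our own assumptions and are not taken from the paper.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in data: 50 players x 3 social contexts (hypothetical column names)
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "player": np.repeat(np.arange(50), 3),
    "context": np.tile([1, 2, 3], 50),   # 1 = cooperative, 2 = intermediate, 3 = competitive
})
df["cooperativeness"] = 0.40 - 0.10 * df["context"] + rng.normal(0, 0.05, len(df))

# Random intercept per player; the fixed effect of 'context' plays the role of the
# condition effect reported in the text
result = smf.mixedlm("cooperativeness ~ context", data=df, groups=df["player"]).fit()
print(result.summary())
```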
In the cooperative context, players were behaving cooperatively—positioning themselves towards the middle of one of the hemifields and sticking to one side, with a mild but significant shift towards the optimal location as time progressed (β = 0.00086, P = 0.03, fixed effect of trial number in a linear mixed model predicting the distance between players based on dyads and trial number, Fig. 1b, c left column, supplementary Fig. 1a, b, c). Conversely, in the competitive context, participants exhibited more competitive behaviours by positioning themselves closer to the middle of the space (Fig. 1b right) and maintaining the position during the course of the block of trials (Fig. 1c right, supplementary figure 1c). In the intermediate context, participants exhibited a range of cooperative and competitive behaviours (Fig. 1) with a significant shift from the former to the latter and convergence towards the centre as the interaction progressed (β = −0.01, P < 0.005, fixed effect of trial number in a linear mixed model predicting the distance between players based on dyads and trial number, see methods and Fig. 1e mid column).
Having confirmed that players' behaviours were contextually-driven, we moved on to test whether they were also driven by the co-players' behaviour. We hypothesized that participants' behavioural variability in each context could be partly explained by their co-player's behavioural variability, i.e., by deviations from their expected locations. We first looked at how players responded, on average, to changes in positions of their co-player. We grouped all changes in position from one trial to the next into 4 bins, i.e., small and large increases in cooperation or in competition. These were defined with respect to the relative position of the players: if they were converging towards the centre, this was an increase in competition. If they were instead moving away from the centre, this was an increase in cooperation (see Methods).
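The classification of trial-to-trial deviations can be sketched as follows; the territory normalisation and the threshold separating small from large changes are illustrative assumptions, not the binning used in the paper.

```python
import numpy as np

def classify_deviation(prev_pos, pos, centre=0.5, large=0.1):
    """Label a change of position as a (small/large) increase in cooperation or competition.

    Moving towards the centre is treated as an increase in competition, moving away
    from it as an increase in cooperation. The 'large' threshold is an arbitrary
    illustrative value, not the one used in the paper.
    """
    shift = abs(prev_pos - centre) - abs(pos - centre)   # > 0: converged towards the centre
    direction = "competition" if shift > 0 else "cooperation"
    size = "large" if abs(shift) >= large else "small"
    return f"{size} increase in {direction}"

print(classify_deviation(0.30, 0.40))   # large increase in competition
print(classify_deviation(0.40, 0.35))   # small increase in cooperation
```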
For all contexts, we saw that players on average reciprocated the changes of the co-player in the previous trial: if the co-player became more cooperative by moving away from the midpoint, so did the players in the next trial, whereas if the co-player became more competitive, players converged to the midpoint in the next trial (Fig. 2d). This effect was further modulated by the size of the co-player's change in position: on average, larger changes of position of one player resulted in larger reciprocal changes from the co-player in the next trial (β = 0.03, P < 0.001, fixed effect of bin number in a linear mixed model predicting the change of position of player 2 based on bin number, condition and pairs' identity, see methods and Fig. 2d). Furthermore, it was modulated by the context, being less pronounced in the competitive context (β = −0.015, P < 0.001, fixed effect of the interaction between bin number and condition in a linear mixed model predicting the change of position of player 2 based on bin number, condition and pairs' identity, see methods and Fig. 2d). This finding suggests that players' behaviour followed a tit-for-tat strategy: they were inferring the intention of the other player, and predicting where the other player would position themselves, retaliating against co-players' increases in competition, and reciprocating co-players' increases in cooperation. These effects were further modulated by the social context.
Cooperativeness conforms to a Bayesian model
To model the behaviour in the game, including potential effects of social biases, co-player's behaviour and context on people's cooperativeness, we fitted eighteen different models (see Methods for further details). We compared different classes of models based on different principles. The first class of models is based on the assumption that players decide their behaviour purely based on the behaviour of their counterpart, by reciprocating either their last position, their last change in position, or a combination of the two. This class of models assumes players behave in a simple reactive, "titxtat"-like fashion, irrespective of the social context (denoted "Simple models" in Fig. 3d). A second class of models goes further in assuming that what is reciprocated is not the position of the co-player in the last trial but rather the expected position (yet unobserved) in the current trial and that the amount of reciprocation is modulated by the social context. At their core, they all assume that a player learns to anticipate the co-player's position in a fashion that is predicted quantitatively by a Bayesian learner carrying out the same task ("Bayesian models" in Fig. 3a–d). They also assume that this expectation is reciprocated in a titxtat fashion. However, they differ in how this expectation is mapped onto a choice, allowing for different degrees of influence of the context, their counterpart's behaviour and the player's own bias. A third class of models assumes that participants were choosing what to do based not only on the other player's behaviour but also on the outcome of each trial, with different assumptions on how winning a trial should change their behaviour in the next trial (becoming more or less cooperative). This class of models effectively assumes that the player's behaviour would be shaped by the reward collected ("Reward models" in Fig. 3d). We used formal Bayesian model comparison (see Methods) to identify the best-fitting model (Fig. 3d). The winning model is a Bayesian model and contained features that accounted for people's biases towards cooperativeness, how the behaviour of the other player influenced subsequent choices, and the influence of the social context.
Fig. 3: Model predictions.
a Our best model described how player 1's (P1) choices resulted from (i) a tit-for-tat strategy whereby P1 reciprocated player 2's (P2) anticipated position ("Exp_PosP2") scaled by a context-dependent tit-for-tat factor, (ii) P1's social preference or bias ("SocialBias") and (iii) a precision parameter capturing players' ability to choose the desired location. b Representative examples of 3 trials during which P1 learned to anticipate P2's position following Bayesian learning: observing P2's position on a given trial updates P1's belief about P2's strategy. c Top figure: Single participant representative example of P1's positions explained by the anticipated position of P2 following the Bayesian learning procedure described in b. Coop: cooperation; comp: competition; intermediate context as described in Fig. 1. Bottom figure: Population averages. In the inset, anticipated position of the target vs actual position: as expected, a Bayesian learner cannot predict the position of the random target. Bayesian models B4-B5 included a prediction about the target location. d Bar plots illustrating the results of the summed integrated Bayesian Model Selection. Lower BIC scores indicate better fit. Models are divided into three classes, 'Simple' (S1–4), 'Bayesian' (B1–8) and 'Reward' (R1–6) based on their underlying logic (see text). The Bayesian model B6 with context-modulated tit-for-tat and "SocialBias" performs best (BIC = 5553). e Scatterplot showing linear correlation between empirical and predicted choice positions (r: Pearson's correlation coefficient, N = 4 bins x 50 participants = 200. One-sided P of correlation as large as r). For each participant, positions were binned in four bins and the average model prediction for each bin was computed. Grey dots are individual participant bin averages. Red dots are population averages. Grey lines reflect individual participant fits, the red line is the fit of the population averages. f Scatterplot showing linear correlation between the "tit-for-tat" and social bias parameter. Each dot is a participant (N = 50). g Scatterplot showing the linear correlation between the precision parameter and the individual behavioural precision estimated by the inverse of the standard deviation of P1's positions observed during the game. Note that all participants served as P1 in the analyses (N = 50).
All Bayesian models significantly outperformed both the simple reactive models and the reward-based ones. To validate this modelling approach and confirm that players were trying to predict others' positions rather than just reciprocating preceding choices, we ran a regression model to explain participants' choices based on both the last position of the co-player and its Bayesian expectation in the following trial. We found that expected positions were significantly better predictors than preceding choices (see Supplementary Fig. 6b). Both these pieces of evidence point to the fact that whilst players implement titxtat strategies, they do so in a way that considers the entire past behaviour of their co-player, effectively weighing the latest choice against prior decisions, and therefore being more robust to single, potentially accidental, deviations when there was a consistent history of cooperation.
Specifically, the winning model (B6) implemented (i) a "tit-for-tat" strategy whereby the first player reciprocated the co-player's expected choice in their own hemifield by cooperating by the same amount scaled by a "TitxTat" factor (Fig. 3a); (ii) this factor was determined by a parameter normalized by a context-dependent factor inversely proportional to the increase in social risk associated with the redistribution parameter α, i.e., the higher the redistribution, the lower the risk associated with losing and the higher the TitXTat factor (Fig. 3a, f); (iii) a social bias parameter determining individual inherent preferences towards competing or cooperating ("Social Bias"; Fig. 3a, f); (iv) a parameter capturing players' Precision (e.g., players may press the button too early or too late compared to the location they aim for), increasing their variability in behaviour beyond what can be explained by the social context and the co-player's behaviour. Thus, their actual choice is normally distributed around the "titXtat + SocialBias" position (with the standard deviation being a model parameter). Two other Bayesian models (B7-B8) achieved a slightly better raw fit (lower negative log likelihood) than model B6. These models used an additional parameter to estimate the probability that a co-player might "betray" by arbitrarily becoming more competitive. This probability is estimated in a Bayesian fashion based on the history of unexpected deviations. However, the small improvement in negative log likelihood did not justify the inclusion of the extra parameter (which increases the BIC), suggesting that it is unlikely that our players encoded the probability of betrayal independently of the effect of context (which makes participants more cautious, i.e., less cooperative, anyway). In any case, these models are inherently similar and make very similar behavioural predictions, since they share the same Bayesian architecture and three free parameters.
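As a purely illustrative sketch of the ingredients of model B6 (the exact functional form, parameterisation and fitted values are described in the Methods and are not reproduced here), one could write the choice rule roughly as follows:

```python
import numpy as np

def choose_position(expected_coplayer_pos, social_bias, titxtat, alpha, sd, rng):
    """Illustrative choice rule built from the ingredients of model B6 (not the fitted equations).

    The player reciprocates the co-player's *expected* cooperativeness (distance of the
    expected position from the midpoint), scaled by a context-dependent tit-for-tat
    factor, added to an inherent social bias, with Gaussian response noise.
    Positions live in [0, 1]; all parameter values below are hypothetical.
    """
    expected_cooperation = abs(expected_coplayer_pos - 0.5)   # how cooperative P2 is expected to be
    titxtat_factor = titxtat / alpha                          # lower social risk (small alpha) -> stronger reciprocation
    intended_cooperation = social_bias + titxtat_factor * expected_cooperation
    intended_pos = 0.5 + np.clip(intended_cooperation, 0, 0.5)  # map back into the player's own hemifield
    return float(np.clip(rng.normal(intended_pos, sd), 0.0, 1.0))

rng = np.random.default_rng(0)
print(choose_position(expected_coplayer_pos=0.3, social_bias=0.05,
                      titxtat=0.2, alpha=0.5, sd=0.03, rng=rng))
```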
We found that observed and predicted positions from the winning model were significantly correlated (Fig. 3e, Pearson's correlation coefficient r = 0.91, P = 1 × 10−6, see Methods and Supplementary Fig. 3a, b for individual and averaged participants fit). Moreover, note that all parameters of the winning model fitted to behaviours revealed significant variability (Supplementary Fig. 4c). This is important because we can then explain the variability in our participants' responses with the variability captured by our model parameters. As such, using a regression analysis, we found that the precision fit by the model (see Methods) was significantly correlated with the variance of player 1's positions observed during the game (β = 6.64, P = 1 × 10−6, Fig. 3g).
We subsequently examined the relationship between the other parameters. If participants vary in how much they adapt to the other players, but also vary in their initial bias along the cooperation-competition spectrum, we would predict a relationship between participants' parameters in the model. Strikingly, we found a strong negative correlation between participants' TitxTat parameters and their social biases (r = −0.62; β = −3.79; P < 0.05), suggesting that participants were distributed along an axis, with, on one end, participants who were more inclined to be cooperative irrespective of what the other player was doing, and, on the other, participants whose behaviour was more flexible and dependent on the co-player's behaviour. Importantly, this anticorrelation was not derived from the specific model we used. This anticorrelation between social bias and TitxTat was also found when fitting a simple linear model predicting players' positions based on the co-player's position and a constant term: the linear term was anticorrelated with the constant term for all conditions (β = −0.62, P < 0.001 for condition 1; β = −0.95, P < 0.001 for condition 2; β = −1.88, P < 0.001 for condition 3), suggesting that the trade-off between pro-sociality and tit-for-tat-like behaviour is a feature of participants' behaviour that can be accounted for by the model.
Different portions of TPJ encode a social prediction error
We hypothesised that people will change behaviour based on the social context, will show a range of social biases and will update their behaviour based on their interactions with the other player. Having demonstrated that a Bayesian model can capture such behavioural effects in the Space Dilemma, we next examined if neural signals might similarly reflect the model. Within the model, a key component is tracking the behaviour of the other person, that is, predicting how competitive someone is and then observing the other person's behaviour. In a Bayesian framework, such tracking occurs through the Kullback-Leibler divergence (KLD), which quantifies a social prediction error—the difference between the expected location of the other player and their actual location (Fig. 4a, top panel). Given previous evidence that unsigned prediction errors (the absolute magnitude of the error or "surprise" regardless of direction) and signed prediction errors (positive when something is higher than expected and negative when something is lower than expected) may be dissociable51,52,53,54, we included in our main GLM (see methods) two parametric regressors coding the unsigned (magnitude of difference between expected location of P2 and actual location) and signed KLD (positive magnitude when P2's location is more cooperative than expected and negative when P2's location is more competitive than expected, Fig. 4a bottom panel) and examined responses time-locked to when the other player's response was revealed to the participant in the scanner (Fig. 4a top panel in blue).
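As a hedged illustration of how such regressors can be constructed, the sketch below uses a simple Gaussian belief about the co-player's position; the study's actual learner and its parameterisation may differ, and the midpoint convention and variances here are assumptions.

```python
import numpy as np

def kld_gaussian(mu_post, var_post, mu_prior, var_prior):
    """KL divergence between two univariate Gaussians, KL(posterior || prior)."""
    return 0.5 * (np.log(var_prior / var_post)
                  + (var_post + (mu_post - mu_prior) ** 2) / var_prior - 1)

def social_prediction_error(prior_mu, prior_var, observed_pos, obs_var, midpoint=0.5):
    """Unsigned and signed surprise about the co-player's position (illustrative Gaussian learner).

    The belief about the co-player's position is updated by conjugate Gaussian updating;
    the KLD between posterior and prior gives the unsigned prediction error, and its
    sign is positive when the observed position is more cooperative (further from the
    midpoint) than expected, negative when it is more competitive.
    """
    post_var = 1 / (1 / prior_var + 1 / obs_var)
    post_mu = post_var * (prior_mu / prior_var + observed_pos / obs_var)
    kld = kld_gaussian(post_mu, post_var, prior_mu, prior_var)
    sign = np.sign(abs(observed_pos - midpoint) - abs(prior_mu - midpoint))
    return kld, sign * kld

# Co-player expected near the cooperative position but observed closer to the midpoint:
# large unsigned surprise, negative signed prediction error
print(social_prediction_error(prior_mu=0.75, prior_var=0.02, observed_pos=0.55, obs_var=0.02))
```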
Fig. 4: The TPJ encodes all dimensions of a social prediction error and predicts future behaviour.
a Left panel: The magnitude of the KLD represents the absolute prediction error, i.e., the extent to which P2's position deviated from what was expected. This is a learning signal informing how much P1 needs to adjust their position on the next trial. Right panel: The sign of the KLD represents the direction of the violation of expectation: was P2 more cooperative than expected or more competitive? b One whole-brain analysis parametrically tested for voxels where activity correlated with the trial-by-trial estimates of (i) the KLD magnitude (in red) and (ii) the KLD sign (in blue). c Average (+/− SEM) population βs for GLM 3 in prTPJ (left) and arTPJ (right) across four groups of trials binned based on their value of KLD magnitude and sign (ordered from positive prediction errors signalling big increases in cooperation, PE++, to negative prediction errors signalling big increases in competition, PE--). In the insets, the βs from GLM 1 show that prTPJ and arTPJ are encoding KLD and KLD sign across all contexts. d Time series analyses revealed that, depending on whether, in the next trial, the participant became more cooperative or competitive (measured through the sign of the change in position from the previous trial), the activity in arTPJ would be different, with a higher signal when participants became more cooperative in the intermediate condition. Traces are population averages (+/− SEM).
A whole-brain analysis revealed significant activity in the right TPJ reflecting both components but in distinct sub-regions. The unsigned prediction error was represented in a posterior portion of the right TPJ (Fig. 4b, prTPJ Z = 4.40, MNI: x = 52, y = −58, z = 30) while the signed prediction error was encoded in a contiguous cluster in the anterior part of the right TPJ (Fig. 4b, arTPJ; peak Z = −3.67, MNI: x = 50, y = −38, z = 32). Both regions survived multiple comparison correction (Fig. 4b; Z > 3.1 cluster forming threshold, whole-brain cluster-based correction P < 0.05; GLM1). To test the full parametric effect of the two clusters in TPJ, we ran a control GLM (see methods) to test how their activity varies across four different groups of trials split based on the KLD value and its sign. This ROI analysis revealed that arTPJ activity increased with the value of prediction errors signalling increases in competition of the co-player (Fig. 4c, right) whilst activations in prTPJ showed a U-shaped relationship (Fig. 4c, left), providing additional independent evidence that these two sub-clusters in TPJ encode the sign and the absolute value of the prediction error, respectively. Additionally, responses to the magnitude of the Social Prediction Error were also found in the Inferior Frontal Gyrus (IFG; 50, 16, 14), Middle Frontal Gyrus (MFG; 44, 16, 40), bilateral Insula (INS; ±34, 22, −4) and in the Middle Temporal Gyrus (56, −30, −10/60, 4, −24). In all contexts (see insets, Fig. 4c), these regions appear to encode signals linked to the surprise occurring when observing the co-player's location (and thus the degree of competitiveness experienced) and contrasting it to the expectation based on their past behaviour.
The results above suggest that two regions of the TPJ may encode the surprise and signed prediction error associated with the other player's competitiveness in all contexts, but does this relate to how participants changed their behaviour? To address this question, we looked at how trial-by-trial changes in the amplitude of the neural signals were linked to behavioural changes in the following trial. We extracted the time-courses of the BOLD (Blood-Oxygen-Level-Dependent) signal in these two TPJ regions at the time of the other player's response and examined whether signals on a given trial (t) were predictive of a change in behaviour (an increase or decrease in competitiveness) on the following trial (t + 1) (see Methods). We found a correlation between change in behaviour and signals in the arTPJ (P < 0.001, t-test on fMRI betas at corresponding time points; Fig. 4d) in the intermediate context. This condition is the one in which there is the greatest variability from trial to trial in behaviour and thus where monitoring the other player's responses to guide one's own is the most important. Thus, whilst the prTPJ signals how surprised one is about another's competitiveness, the arTPJ encodes a directionally specific prediction error that is predictive of a future change in cooperativeness in the context where it is most important for people to understand when to do so.
Social context modulates how posterior dorsomedial frontal, cingulate and paracingulate cortices encode updates to cooperativeness for self and other
Whilst the TPJ encoded surprise signals across all social contexts, how does the social context modulate neural activity in order to update behaviour? To test this, we compared the neural activity across social contexts to identify whether any regions were engaged differently when changing cooperativeness or monitoring another's changes in cooperativeness. We looked at the contrast between the two extreme social contexts: the cooperative and competitive ones. In our main whole-brain analysis (GLM I, see methods) we parametrically tested which parts of the brain encode (i) P1's changes in the level of cooperation at time of decision (self coop) and (ii) the sign and magnitude of the social prediction error (P1's surprise about P2's changes of position—another coop) at the time when P2's position is revealed. We then ran a contrast analysis between the cooperation and competition contexts. All the reported activation clusters were identified with an uncorrected threshold of P < 0.001 and corrected for family-wise error (FWE) at the cluster level at P < 0.05 (GLM II; Fig. 5).
Fig. 5: Social context modulated medial frontal, cingulate and paracingulate cortices encoding of changes in self and others cooperativeness.
a This whole-brain analysis parametrically tested for voxels where the social context modulated how activity correlated with the trial-by-trial estimates of P1's changes in the level of cooperation at the time of P1's response. b This whole-brain analysis parametrically tested for voxels where the social context modulated how activity correlated with the trial-by-trial estimates of the sign of the social prediction error (P1's surprise about P2's changes of position) at the time when P2's position is revealed. c βs showing the strength of ACCg and PaCg encoding of KLD sign across social contexts (from cooperative to competitive; bars represent population averages, N = 25). d Time series analyses revealed that, depending on whether, in the next trial, the participant became more cooperative or competitive, the activity would be different, with a higher signal when participants became more cooperative in the intermediate condition. Traces are population averages (+/− SEM).
We found a significant difference in activity in a cluster in the posterior portion of the dorsomedial prefrontal cortex (pDMPFC) extending posteriorly across the pre-SMA and inferiorly to the cingulate cortex (pDMPFC, Z = −4.09, MNI: x = −8, y = 16, z = 52), where individual changes in cooperation levels were encoded differently between the cooperative and competitive contexts at the time when the participant was choosing how cooperative to be (self coop; Fig. 5a). Additionally, we found a significant difference between cooperation and competition in the activation of an area in the Superior Frontal Gyrus (SFG; MNI: Z = −3.54, 28, 6, 56), in the right Insula (INS; MNI: Z = −3.85, 30, 26, 0) and in the Precuneus (PC; MNI: Z = −3.95, −6, −56, 56).
Furthermore, we found two regions which showed a significantly different activation between cooperation and competition for the sign of the social prediction error. The first lay in the anterior cingulate gyrus (ACCg, Z = −3.13, MNI: x = 0, y = 34, z = 20, Fig. 5b) while the second was approximately located in the paracingulate gyrus in Brodmann area 32, extending to the cingulate and anterior dorso-medial prefrontal cortex (PaCg, Z = −3.36, MNI: x = 2, y = 50, z = 12)—both were active at the time at which the opponent's response was revealed, showing a significant difference in the way they encoded the sign of the social prediction error between the competitive and cooperative contexts.
Interestingly, these last two areas positively signalled increases in cooperativeness of the co-player during the competitive context, but negatively signalled increases in cooperativeness during the cooperative context (Fig. 5c). The results above suggest that these areas not only encode the sign of social prediction errors, but do so differently in different social contexts. If the social context modulates their activity, it is possible that cingulate and paracingulate cortices, unlike the arTPJ, might have different roles in different contexts. How does this relate to behaviour?
To address this question, we looked at how trial-by-trial changes in the amplitude of the neural signals in these two clusters were linked to behavioural changes in the following trial. Once again, we extracted the time-courses of the BOLD signal in these two regions at the time of the other player's response and examined whether signals during a given trial (t) were predictive of a change in behaviour (an increase or decrease in competitiveness, i.e., change in location) on the subsequent trial (t + 1) (see Methods). For both regions, we found a correlation between their activity and change in behaviour (P < 0.001, t-test on fMRI betas at corresponding time points; Fig. 5d) in the intermediate and in the competitive condition. Interestingly, and consistent with the idea that the social context modulates their role, both areas appear to predict increases in cooperation in the intermediate condition and increases in competition in the competitive condition. Thus, both clusters are predictive of a future change in competitiveness but in different ways in different contexts.
Finally, we reasoned that if these areas are significantly involved in determining behaviour, this should be reflected by the parameters of our model. We therefore correlated the social bias parameter, capturing the degree to which participants' behaviour was biased towards cooperation, with the average betas of the two clusters for the sign of the social prediction error at the time the co-player's response is revealed. We performed the same analysis for the player's increases in cooperation at the time of response, and for the titXtat parameter, capturing the degree to which participants' behaviour was determined by the attempt to reciprocate the level of cooperation of the co-player.
Intriguingly, we found that the representation of increases of cooperation for self positively correlated with the social bias parameter and anticorrelated with the titXtat parameter for both clusters (Supplementary Fig. 5e). Furthermore, a detailed analysis of how the representation of self and other cooperation changes across contiguous clusters in ACC, backed up by correlations with model parameters, lends some evidence to the existence of a self-other gradient along the rostro-caudal axis within ACC (Supplementary results and supplementary Fig. 5d–f). Taken together, these results provide strong evidence that the cingulate and paracingulate cortex are instrumental in adjusting behaviour in response to the actions of partners/competitors and according to the social context.
Competition and cooperation are two social orientations that can either hamper or facilitate individual achievements. While traditionally cooperation and competition have been studied separately, they are not all-or-nothing but occur along a continuum. Using a new economic game, modelling and fMRI, we revealed some of the computational and neural mechanisms controlling the trade-off between competition and cooperation. Using a continuous spatial location as a parametric measure of the cooperation-competition continuum, we showed that our new paradigm allows us to explore a range of cooperative and competitive behaviours and to compare them across different social contexts while controlling for the sensory-motor aspects of the decision.
We showed that people's degree of cooperativeness is shaped by (i) what the social context favours, (ii) the nature of the interaction between two individuals and (iii) predispositions towards cooperativeness regardless of the context or the other player's behaviour. These patterns of behaviour were captured by a Bayesian model, which included parameters weighting the social context and the participants' social bias, and dictating how much the other player's actions influenced one's own. Our results point to the important role of the rTPJ in coding social prediction errors that lead to subsequent changes in competitiveness. We also found that distinct regions of the medial prefrontal, cingulate and paracingulate cortices coded information linked to how the social context and one's social bias shape people's own cooperativeness, as well as the monitoring of other people's cooperativeness.
Understanding which social contexts facilitate cooperative behaviours is of paramount importance for human societies, both to increase well-being and reduce conflicts. Conversely, understanding how to control and constrain competitive behaviours can be beneficial to improve the performance of a group and its benefit to the wider society. Much research has investigated cooperative behaviour in games like the Prisoner's Dilemma, the Chicken's Game and the Trust Game9,11,31. Conversely, competition has been studied in zero-sum games like matching pennies55,56,57,58. While a few experimental and theoretical studies have examined non-binary versions of Prisoner's Dilemmas22,59,60 and explored the impact of changing the costs/benefits of cooperation61, very few studies have attempted to directly compare these two social orientations, and those that have done so62,63 did not consider that their trade-off occurs along a continuum. To our knowledge, this study is the first one to propose a continuous parametrization of the competition-cooperation axis to study the computational and neural mechanisms underlying the continuous trade-off between the two strategies.
Studying the continuous nature of the cooperation-competition trade-off is important for several reasons. First, our paradigm affords a richer and more flexible behavioural repertoire of social approaches, as it allows us to observe fine-tuned changes in behaviour that would otherwise remain "latent" in a binary setting. This is particularly important in the context of social interactions, as minor adjustments of behaviour are observable and can lead to shifts in strategy, inducing social dynamics that could remain undetected in a binary setting. For instance, in our intermediate condition we see the players slowly drifting towards the more competitive position, due to a cycle of fine adjustments reflective of a combination of titxtat and the rational incentive to win the game. Second, a continuous setup is important to identify the neural activity underpinning the behaviour. In binary choice tasks, both strong and weak intentions to cooperate might be reflected in identical choices being made before a sudden shift in policy occurs. Such a shift would then register as a single large prediction error, obscuring the latent drift towards a new social orientation. Thus, the gradation of prediction error values that best accounts for shifts in people's behaviour and intentions to compete would not be well predicted by a single observed behaviour.
Decades of research have linked the TPJ to the processing of social information, both in cooperative and competitive contexts63,64,65. Classical accounts have posited that this region has a role in distinguishing one's own mental states from others', as revealed by the BOLD response changing when another's beliefs are revealed to be erroneous and different from one's own40. More recent theoretical work has argued for a social predictive coding framework for theory of mind postulating error signals in TPJ66, and previous studies have reported evidence of rTPJ encoding of both unsigned67,68,69,70 and signed prediction errors71 in a range of social contexts and behaviours. rTPJ has also been implicated in tracking the expectation of cooperation of others in public goods games72,73. Our results show that this region does indeed signal errors during cooperative interactions but go beyond classical accounts in several ways.
Specifically, we show that when monitoring another's behaviour its activity covaried with the Bayesian update of the expectation about the co-player's intention to cooperate, thus quantifying the prediction error in a social interaction. This prediction error was signalling the discrepancy between how cooperative someone else had behaved, compared to an expectation of the degree of cooperativeness they would exhibit. In addition, we distinguished between signed and unsigned prediction errors localized to distinct sub-regions within the TPJ, the prTPJ and arTPJ respectively. Lastly, we showed that the signed prediction error signalling in the arTPJ also correlated with subsequent changes in how cooperative a person would be in the next trials. All of these findings converge on the notion that the TPJ plays an important role when flexibly adapting one's behaviour during social cooperative interactions, an important component of mentalizing.
However, our findings dissociate the contributions of TPJ sub-regions, with the prTPJ signalling how surprised one is at another's competitiveness and the arTPJ further translating this into a directionally specific code that is used to shift one's own cooperativeness in the future. This organisation affords the flexibility for the prediction error to be attributed directly to the process of extracting others' intentions and be used to help the player select the optimal response in subsequent trials. Such an interpretation accords with anatomical evidence that the TPJ contains distinct sub-regions that have distinct functional roles. The two clusters we identified in the TPJ overlap with two distinct regions as identified with resting-state MRI and diffusion-weighted parcellations74. Although there has been some suggestion that the prTPJ is the sub-region most strongly associated with social cognition75, by using a more refined, continuous task, and a Bayesian model, we show that both sub-regions may compute important information for social cognition. Previous studies have shown the prTPJ to signal prediction errors during iterative economic games37 and when evaluating how trustworthy another's advice is32. However, such studies did not have a task where competitiveness occurred along a continuum, nor distinguish between signed and unsigned prediction errors.
Work examining computations outside of social cognition has highlighted the importance of distinguishing between signed and unsigned prediction errors54,76,77. Unsigned prediction errors are crucial for signalling the salience and thus importance for attending to information, but do not carry valence information that is useful for adapting behaviour52,53,78. In contrast, signed prediction errors may be important for subsequently updating behaviour, up-regulating or inhibiting behaviours that did or did not lead to a desired outcome. Such signals have previously been dissociated in non-social tasks, with signed prediction errors in medial frontal cortex putatively important in updating models and expectations of future events79. However, this distinction has rarely been made in social cognition research. Here, we show dissociable signed and unsigned prediction errors in discrete TPJ zones that enable someone to attend and flexibly update behaviour across different social contexts, based on how much more cooperative or competitive a person was than expected.
In addition to the TPJ, we also found sub-regions of the medial frontal, cingulate and paracingulate cortex previously linked to social cognition that encoded several other features of our model and behaviour29,37,38,42. The findings indicate roles across the medial frontal cortex for carrying information about one's social biases and adaptability to others' competitiveness, as well as shifting responses depending on the social context. In particular, we found a cluster in the posterior DMPFC extending inferiorly to the cingulate cortex where individual changes in cooperation levels were encoded differently between the cooperative and competitive contexts. We also found a region in the anterior dmPFC—putatively in paracingulate cortex (PaCg)—that signalled a social prediction error when monitoring another player, which was also linked to the updating of one's behaviour along the cooperation-competition continuum as well as correlating with variability in the social bias towards cooperation or competition across contexts.
Such a role aligns with work implicating this region in the processing of social influence80 as well as classical research implicating this region in mentalizing processes. Previous work has shown that this region signals prediction errors when shifting one's preferences to align with other people's36,80,81,82, and that it contains individual neurons which signal when others' behaviour is erroneous47. Moreover, individual differences in activity of this region have also been linked to the degree to which one conforms to social norms in economic decisions83. In addition, a plethora of classic research shows that this region is involved in processing and inferring others' mental states during false-belief tasks, in computational tasks where one processes levels of trust in others, and when processing others' actions during economic games84,85,86.
Our results therefore support an emerging view of the paracingulate cortex as playing important roles in processing others' intentions, with variation in its response linked to variability in the extent to which people choose to shift their behaviour in response to others. However, here we show that responses in this region may be involved in inferring others' intentions, and updating those predictions through prediction errors, when deciding how much more cooperative or competitive to be. Moreover, they suggest that this region multiplexes several different pieces of information that influence one's position along an axis, including one's bias towards cooperation, the influence of the social context and inferences about the intentions of others.
In addition to the paracingulate, we also found context-dependent signals encoding social prediction errors in a neighbouring region lying in the ACCg. Specifically, we found that, similarly to PaCg, ACCg activity correlated with the sign of the prediction error in a way which was modulated by the social context, signalling increases of cooperation in the competitive context and increases of competition in the cooperative one. ACCg activity was also linked to the updating of one's behaviour in the next trial as well as correlating with individual variability in the social bias parameter of our model, capturing the intrinsic propensity to be more or less cooperative. There is growing evidence that this region is engaged when processing specifically social information32,38,44,87, and particularly in signalling predictions when expectations about others are violated43,45,46,88 or tracking the likelihood of others defecting from cooperation in a public goods game72. Our results support the notion that the ACCg carries social prediction error signals; however, existing work has typically shown that these signals were important for learning through observation, for correcting others' mistakes, for identifying others' erroneous predictions or for identifying whether to trust another. Here we show that such signals also demonstrate how unexpected someone else's degree of competitiveness is, and that such signals change depending on whether the social context is one that favours cooperation or competition. Thus, our results suggest that social prediction errors in the ACCg may be used to understand how motivated another person is to obtain benefits for themselves38, but they do so in a manner that differs depending on the social environment and that correlates with future changes in behaviour.
Over the last decade there has been a shift in focus towards identifying the computational mechanisms that guide social behaviour89. Much of this work has begun to show that models based around reinforcement learning and Bayesian principles may provide the framework that scaffolds social information processing. Previous work has identified social prediction errors that underlie prosocial behaviour90, teaching43, trust32, mentalizing37, false belief processing45,91 and a range of other processing requiring social learning12,35,92. Our results concord with the notion that Bayesian principles and prediction errors can guide social behaviour and socio-cognitive processes.
In this work, we developed an economic game that generalizes the Prisoner's Dilemma49,50 into a continuous measure and reproduces a well-known economic principle of locational equilibrium in duopoly described in Hotelling's law48. On a broader level, this work converges two lines of research exploring social behaviour: work in behavioural economics using economic games20,93,94, and work arguing that human decision-making may be best understood with approaches from foraging theory95,96. In fact, the task could also be framed as a foraging problem where one had to position oneself in a location. The co-player was likely to be treated as a potential 'predator' in the competitive context, when the reward of a player corresponds to a loss for the co-player, but not in the cooperative one, with such behavioural flexibility linked to mPFC responses. Such findings relate to research which has suggested that activity in several mPFC sub-regions may be encoding the proximity of threats in the environment. Our results somewhat concord with this notion, but suggest that rather than proximity to threat, several mPFC regions are involved in integrating one's overall preference for cooperativeness, the changes in social context and information about the actions of the other player. All of these signals are necessary to identify where a player will position themselves, as well as being processes responsible for judging where the other player will position themselves. As such, these regions may be engaged when potentially close to threats by processing information that allows one to adapt behaviours accordingly.
Ultimately, our paradigm allows us to explore how the social context changes the engagement of the neural network involved in arbitrating the competition-cooperation trade-off. Future experiments using this paradigm might help to probe further hypotheses. For example, future studies might address questions relating to the impact of uncertainty, for instance varying the reward probability of each location (making the reward location predictable, to an extent, through a unimodal or multimodal distribution instead of a uniform one), the difficulty of the task (speeding up or making it harder to make a certain choice) or the social dynamics (increasing the number of players or varying the distribution of the rewards among them).
In conclusion, we used a new economic game—the Space Dilemma—that allows people to be cooperative or competitive along a continuum. We show that people's level of cooperation is dependent on several sources of information, including the behaviours favoured by the structure of the environment, their own biases towards cooperation, and online updating based on the competitiveness of another player. We show that such behaviour can be approximated by a Bayesian learner, including parameters that scaled each of these features impacting behaviour, with signals in the TPJ, mPFC, ACCg and PaCg—regions previously implicated in social cognition—processing the information that guided behaviour and signalling social prediction errors when monitoring the other player's competitiveness. These findings shed light on the multiple features that guide how cooperative we want to be, and how we shift our behaviour along a continuum.
The study complied with all relevant ethical regulations. The study protocol was approved by the Institute of Neuroscience and Psychology Ethics Committee at the University of Glasgow, and written informed consent was obtained in accordance with its guidelines. Twenty-seven same-sex pairs of adult human participants took part in the fMRI experiment. This number was determined based on a priori estimates of the sample size necessary to ensure replicability on a task of similar length97. All were recruited from the participant database of the Department of Psychology at the University of Glasgow. For each pair, one participant was in the scanner and the other in an adjacent room. Two pairs were removed from the analysis: one for excessive head movements inside the scanner, the other for a technical problem with the scanner. The remaining 25 pairs of participants (7 male pairs, 18 female pairs) were all right-handed, had normal or corrected-to-normal vision, reported no history of psychiatric, neurological or major medical problems, and were free of psychoactive medications at the time of the study.
Stimuli and behavioural task
All participants played the Space Dilemma in pairs. Before starting the game they were given a set of instructions explaining that they had to imagine they were foraging for food in a territory (a straight line representing the territory, Fig. 1) and were asked to predict the position of the food. They were told that in each trial the target "food" would appear somewhere in the territory, with its position randomly sampled from a predefined uniform probability distribution. They were shown examples of possible outcomes of a trial (Fig. 1) and given information about the conditions of the game. During the game, in each trial, they were presented with a bar moving across the space (representing their location) and asked to commit to a location by pressing a button while the bar passed through it. Participants therefore chose their locations in the space through the timing of a button press, indicating their choice by pressing one of three buttons on a response box. The bar took 4 s to move from one end of the space to the other. Once stopped, it remained at the chosen location for the remainder of the 4 s. This location signalled their prediction about the target position. The two participants played simultaneously, first making their predictions and then watching the other player's response (for 1–1.5 s). After both players had responded, the target was shown (for 1.5 s). Inter-trial intervals were 2–2.5 s long. In each trial, the participant who made the best prediction (minimising the distance d to the target) was indicated as the trial's winner through the colour of the target, obtaining a reward which depended on the distance to the target: the shorter the distance, the higher the reward. In the rare circumstance in which players were equidistant from the target, the reward was split in half between the two players, who were both winners in the trial.
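For concreteness, the mapping from button-press timing to a location and the determination of the trial winner can be sketched as follows (an illustrative Python sketch, not the Presentation/MATLAB code used in the experiment; the function names and example press times are ours):

```python
import random

SWEEP_TIME = 4.0   # seconds the bar takes to cross the unit-length territory

def position_from_press(press_time):
    """Map the time of the button press (0-4 s) onto a location in [0, 1]."""
    return min(max(press_time / SWEEP_TIME, 0.0), 1.0)

def run_trial(press_p1, press_p2):
    """Simulate one trial: sample a uniform target, find the winner and their distance."""
    x1, x2 = position_from_press(press_p1), position_from_press(press_p2)
    target = random.random()
    d1, d2 = abs(x1 - target), abs(x2 - target)
    winner = 1 if d1 < d2 else (2 if d2 < d1 else 0)   # 0 = tie, reward split in half
    return winner, min(d1, d2), target

print(run_trial(press_p1=1.2, press_p2=2.9))
```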
In order to enforce different social contexts we introduced a reward distribution rule whereby each trial reward would be shared between the winner and the loser according to the rule
$$R=(1-\min (d))$$
$${R}_{{win}}=\alpha R; \, {R}_{{lose}}=\left(1-\alpha \right)R$$
where α is a trade-off factor controlling the redistribution between winners and losers in each trial. By redistributing the reward between winner and loser, the latter also benefits from the co-player minimising their distance to the target. Increasing the amount of redistribution (decreasing α below 1) constitutes an incentive to work out a cooperative strategy that decreases the average distance of the winner from the target (that is, irrespective of who the winner is) and therefore increases the reward available in each trial to be redistributed. Decreasing the amount of redistribution (increasing α above 1) can instead lead to punishment for the losers, adding an incentive to compete to win the trial.
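A direct implementation of this redistribution rule, for a hypothetical trial in which the winner ended up 0.2 away from the target (a minimal Python sketch; variable names are ours):

```python
def trial_payoffs(winner_distance, alpha):
    """Split the trial reward between winner and loser for trade-off factor alpha."""
    R = 1.0 - winner_distance            # total reward: larger when the winner is close
    return alpha * R, (1.0 - alpha) * R  # (winner's share, loser's share)

# cooperative (0.5), intermediate (1) and competitive (2) contexts for the same trial
for alpha in (0.5, 1.0, 2.0):
    print(alpha, trial_payoffs(winner_distance=0.2, alpha=alpha))
```

With α = 2 the loser's share becomes negative, illustrating the punishment described above.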
All participants first took part in a behavioural session in which they were randomly paired with one another and played three sessions of the game in three different conditions specified by the value of the trade-off factor α. In the first condition (α = 0.5, cooperative condition), the reward was shared equally between the two players, irrespective of the winner. In the second condition (α = 2, competitive condition), the winner got twice the amount of the reward, while the other player lost from their initial stock an amount equivalent to the reward. In the third condition (α = 1, intermediate condition), the winner got the full amount of the reward and the other player got nothing. The participants were instructed about the different reward distributions (through a panel similar to Fig. 2c). In total, participants played 60 trials in each of the three conditions, for a total of 180 trials.
At the end of the behavioural session, participants were asked to fill in a questionnaire assessing their understanding of the game together with their social value orientation98. If they showed a good understanding of the task and were eligible for fMRI scanning, they were invited to the fMRI session, which occurred 1–3 weeks later. In total, 81 participants took part in the behavioural session and 54 participated in the fMRI session.
In the fMRI sessions, participants were matched with an unfamiliar co-player they had not played with in the behavioural session, and it was emphasised that they should not assume anything about the co-player's behaviour in the game. We did not use deception: participants briefly met before the experiment, when a coin toss determined who would go into the scanner and who would play the game in a room adjacent to the fMRI control room. Both in the behavioural and in the fMRI session participants were rewarded according to their performance in the game, with a fixed fee of £6 and £8 respectively and up to an additional £9 based on their task performance. At the end of the fMRI sessions, participants were asked to describe their strategy in the different social contexts. Their responses revealed a good understanding of the social implications of their choices (Supplementary Table 4). Both in the behavioural and in the fMRI sessions, the order of the conditions was kept constant (cooperation–competition–intermediate), as we wanted all couples to have the same history of interactions.
Visual stimuli were generated from client computers using Presentation software (Neurobehavioral Systems) controlled by a common server running the master script in MATLAB. The stimuli were presented to the players simultaneously. Each experiment was preceded by a short tutorial in which players experienced a few trials of each of the three sessions, allowing them to probe the effect of varying the task parameter.
Payoff matrix
We computed a payoff matrix for the Space Dilemma in the following way. Since the target position in each trial is random, the reward in each trial will also be random, but because the target position is sampled from a uniform distribution, each position in the space is associated with an expected payoff which depends on the position of the other player (Fig. 1b). In a two-player game, the midpoint maximizes the chance of winning the trial. For simplicity we therefore assume that players can either compete, positioning themselves in the middle of the space and maximizing their chance of winning, or cooperate, deviating from this position by a distance Δ to sample the space and maximize the dyad's reward. For all combinations of competitive and cooperative choices, we can build an expected (average) payoff matrix which depends parametrically on Δ. We defined R as the expected reward for each of two players cooperating with each other, T as the expected temptation payoff for someone who decides to compete against a player who is cooperating, S as the "sucker" payoff for a cooperator betrayed by their partner, and P as the punishment payoff when both players always compete. R, T, S and P can be computed analytically by integrating over all possible positions of the target and are equal to:
$$R=\left(\frac{3}{8}+\frac{\triangle }{2}-{\triangle }^{2}\right)$$
$$T=\alpha \left(\frac{3}{8}+\frac{\triangle }{2}-\frac{{\triangle }^{2}}{8}\right)+\left(1-\alpha \right)\left(\frac{3}{8}-\frac{5{\triangle }^{2}}{8}\right)$$
$$S=\alpha \left(\frac{3}{8}-\frac{5{\triangle }^{2}}{8}\right)+\left(1-\alpha \right)\left(\frac{3}{8}+\frac{\triangle }{2}-\frac{{\triangle }^{2}}{8}\right)$$
$$P=\frac{3}{8}$$
The expected reward for cooperative players R is the same in all conditions. This is because the expected reward is equal to the average of the possible rewards associated with win and loss and players who cooperate with equal Δ have an equal chance of winning the trial.
Therefore \(R=({R}_{{win}}{+R}_{{lose}})/2=(\alpha {R}_{{trial}}+\left(1-\alpha \right){R}_{{trial}})/2={R}_{{trial}}/2\), which does not depend on α. Likewise for the expected reward for competitive players P. When one player cooperates and the other competes, however, players do not have the same chance of winning a trial and therefore T and S also depend on α. For α = 0.5 the reward is shared equally no matter what players do, so if one competes against a cooperator, both have the same expected payoff:
$$T=S=\frac{3}{8}+\frac{\triangle }{4}-\frac{{3\triangle }^{2}}{8}$$
For α = 2, T diverges quickly from S as
$$T-S=\frac{3}{2}\left(\triangle+{\triangle }^{2}\right)$$
We also computed the expected payoffs by simulating 10,000 trials of two players competing and/or cooperating by Δ in the three conditions of the game, and the results matched the analytical solutions. For the intermediate and competitive conditions, for all values of Δ it also holds that T > R > P > S, demonstrating that the Space Dilemma in these conditions is a continuous probabilistic form of the Prisoner's Dilemma in the strong sense. For Δ > 0.4, and in all conditions, the payoff for a dyad always cooperating is higher than for one where one player always competes and the other always cooperates, or where both alternate cooperation and competition (2R > T + S); therefore for Δ > 0.4 the Space Dilemma is a probabilistic form of iterated Prisoner's Dilemma. Furthermore, for all conditions the maximum payoff for the dyad is reached for Δ = 0.25.
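The Monte-Carlo check described above can be reproduced with a short simulation (a sketch under the simplifying assumptions stated above: the competitor sits at the midpoint, a cooperator deviates by Δ, and two cooperators deviate to opposite sides of the midpoint; trial counts and parameter values are illustrative):

```python
import random

def simulate_payoffs(delta, alpha, n_trials=100_000):
    """Monte-Carlo estimates of the expected payoff matrix entries R, T, S, P."""
    def expected(pos_a, pos_b):
        pay_a = 0.0
        for _ in range(n_trials):
            t = random.random()                     # uniform target
            da, db = abs(pos_a - t), abs(pos_b - t)
            reward = 1.0 - min(da, db)              # total trial reward
            if da < db:
                pay_a += alpha * reward             # player a wins
            elif db < da:
                pay_a += (1.0 - alpha) * reward     # player a loses
            else:
                pay_a += reward / 2.0               # tie: reward split
        return pay_a / n_trials

    R = expected(0.5 - delta, 0.5 + delta)  # both cooperate (opposite sides)
    T = expected(0.5, 0.5 + delta)          # compete against a cooperator
    S = expected(0.5 + delta, 0.5)          # cooperate against a competitor
    P = expected(0.5, 0.5)                  # both compete at the midpoint
    return R, T, S, P

print([round(v, 3) for v in simulate_payoffs(delta=0.25, alpha=2.0)])
```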
To model behaviour in the game we fitted eighteen different models belonging to three different classes, all assuming that players implement some form of tit-for-tat. The first class of models (S1–S4) is based on the assumption that players decide their behaviour simply from the last observed behaviour of their counterpart, by reciprocating either their last position, their last change in position, or a combination of the two. A second class of models goes further in assuming that a player learns to anticipate the co-player's position in a fashion that is predicted quantitatively by a Bayesian learner ("Bayesian models" B1–B8). The eight Bayesian models differ in how this expectation is mapped into a choice, allowing for different degrees of influence of the context, the counterpart's behaviour and the player's own bias. A third class of models assumes that participants chose what to do based not only on the other player's behaviour but also on the outcome of each trial, with different assumptions on how winning a trial should change their behaviour in the next (becoming more or less cooperative). This class of models effectively assumed that the player's behaviour would be shaped by the reward collected ("Reward models" in Fig. 3d).
For simplicity, we remapped positions in the space to a cooperation space, so that choosing the midpoint (competitive position) corresponds to minimum cooperation while going to the extreme ends of the space (either x = 0 or x = 1) corresponds to maximum cooperation. θ is therefore symmetric about the midpoint and is defined as
$$\theta=\left|x-0.5\right|/0.5\,({{{{{\rm{S}}}}}}1-{{{{{\rm{S}}}}}}4,\, {{{{{\rm{B}}}}}}1-{{{{{\rm{B}}}}}}8,\, {{{{{\rm{R}}}}}}1-{{{{{\rm{R}}}}}}6)$$
All models include a precision parameter capturing intrinsic response variability linked to sensory-motor precision of the participant, such that, given each model's prediction about the player's decision, the actual choice will be normally distributed around that prediction with standard deviation equal to the inverse of the precision parameter, constrained to be in the range (0:10000).
For models S1–S4, we assumed that participants were simply reacting to their counterpart's most recent choice. Model S1 simply assumed that players would attempt to reciprocate their co-player's level of cooperation θ. As the models operate in a symmetric cooperation space, this implies matching the co-player's expected level of cooperation in the opposite hemifield.
$${choice}\left(t\right) \sim N\,\left(\theta \left(t-1\right){{{{{\rm{;}}}}}} \, 1/{{{{{\rm{Precision}}}}}}\right)({{{{{\rm{S}}}}}}1)$$
Model S2 assumed that players would reciprocate their co-player's update in their level of cooperation θ, moving from their own previous position by that amount plus a fixed SocialBias parameter capturing their a priori desired level of cooperation, constrained to be in the range (−1000:1000).
$${choice}\left(t\right) \sim N\,\left({{{{{\rm{SocialBias}}}}}}+{choice}\left(t-1\right)+\triangle \theta (t-1){{{{{\rm{;}}}}}} \,1/{{{{{\rm{Precision}}}}}}\right)({{{{{\rm{S}}}}}}2)$$
Model S3 was identical to model S2, except that it had three different SocialBias parameters, one for each social context. Model S4 assumed that players would reciprocate their co-player's last level of cooperation θ scaled by a multiplicative TitXTat parameter, constrained to be in the range (0:2); if this parameter is bigger than 1, a participant cooperates more than the counterpart.
$${choice}\left(t\right) \sim N\,\left({{{{{\rm{SocialBias}}}}}}+{{{{{\rm{TitXTat}}}}}} * \theta \left(t-1\right){{{{{\rm{;}}}}}} \, 1/{{{{{\rm{Precision}}}}}}\right)({{{{{\rm{S}}}}}}4)$$
For models B1–B8, we used a Bayesian decision framework that has been shown to explain well how humans learn in social contexts32,99 to model how participants made decisions in the task and how the social context (reward distribution) can modulate these decisions. Our ideal Bayesian learner was assumed to update its expectation about the co-player's level of cooperation θ on a trial-by-trial basis by observing the position of its counterpart. In our Bayesian framework, knowledge about θ has two sources: a prior distribution P(θ) on θ, based initially on the social context and thereafter on past experience, and a likelihood function P(D│θ) based on the observed position of the counterpart in the last trial. The product of prior and likelihood is the posterior distribution that defines the expectation about the counterpart's position in the next trial:
$$P\left(\theta(t+1)\right)=P\left(\theta(t+1)\,|\,D\right)=\frac{P\left(D\,|\,\theta(t)\right)\,P\left(\theta(t)\right)}{P(D)}\qquad({{{{{\rm{B}}}}}}1-{{{{{\rm{B}}}}}}8)$$
According to Bayesian decision theory (Berger, 1985; O'Reilly et al., 2013), the posterior distribution P(θ│D) captures all the information that the participant has about θ. In the first trial of a block, when players have no evidence on past positions of the co-player, we chose normal priors that correspond to the social context: in the competition context μprior = 0, in the cooperation context μprior = 1, and in the intermediate context where the winner takes all μprior = 0.5, whereas in all cases the standard deviation was fixed to σprior = 0.05, which heuristically speeds up the fit. The likelihood function is also assumed to be a normal distribution centred on the observed location of the co-player, with standard deviation fixed to the average variability in positions observed so far in the block (that is, in all trials up to the one in which it is estimated). Being the product of two Gaussian distributions, the posterior distribution is also Gaussian. All distributions are computed over the linear space at a resolution of dθ = 0.01.
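The trial-by-trial update can be illustrated on a discretised grid (Python sketch; in the fitted models the likelihood width equals the running standard deviation of the observed positions, whereas a fixed value is used here for brevity):

```python
import numpy as np

theta = np.arange(0.0, 1.0 + 1e-9, 0.01)            # discretised cooperation space

def gaussian(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2)

def bayes_update(prior, observed_pos, sigma_obs):
    """One trial of the Bayesian learner: posterior ∝ likelihood × prior."""
    likelihood = gaussian(theta, observed_pos, sigma_obs)
    posterior = likelihood * prior
    return posterior / np.trapz(posterior, theta)    # renormalise on the grid

# context-dependent prior, e.g. cooperative block: mu = 1, sigma = 0.05
prior = gaussian(theta, 1.0, 0.05)
prior /= np.trapz(prior, theta)

posterior = bayes_update(prior, observed_pos=0.7, sigma_obs=0.1)
expected_coplayer_pos = np.trapz(theta * posterior, theta)
print(round(expected_coplayer_pos, 3))
```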
While all Bayesian models assume that players update their expectations about the co-player's choices, they differ in how they translate these expectations into their own choices. We built 8 Bayesian models of increasing complexity. All models include a Precision parameter. Model B1 simply assumes that players will aim to reciprocate the expected position of the co-player (coplayer_exp_pos).
$${coplayer}\_{\exp }\_{pos}\,(t)=E\left(P\left(\theta (t)\right)\right)({{{{{\rm{B}}}}}}1-{{{{{\rm{B}}}}}}8)$$
$${choice}\left(t\right) \sim N\,\left({coplayer}\_{\exp }\_{pos}\,\left(t\right){{{{{\rm{;}}}}}} \, 1/{{{{{\rm{Precision}}}}}}\right)({{{{{\rm{B}}}}}}1)$$
Model B2 assumes that players will aim for a level of cooperation shifted compared to coplayer_exp_pos. Such a shift is captured by the SocialBias parameter which sets an "a priori" tendency to be more or less cooperative and all further Bayesian models include it.
$${choice}\left(t\right) \sim N\,({coplayer}\_{\exp }\_{pos}\,\left(t\right)+{{{{{\rm{SocialBias;}}}}}} \, 1/{{{{{\rm{Precision}}}}}}) \, ({{{{{\rm{B}}}}}}2)$$
Model B3 further assumes that participants can fluctuate in how much they reciprocate their co-player's cooperation. This effect is modelled by multiplying coplayer_exp_pos by a TitXTat parameter.
$${choice}\left(t\right) \sim N\,({{{{{\rm{TitXTat}}}}}} * {coplayer}\_{\exp }\_{pos}\,\left(t\right)+{{{{{\rm{SocialBias;}}}}}} \, 1/{{{{{\rm{Precision}}}}}}) \, ({{{{{\rm{B}}}}}}3)$$
Model B4 further assumes that players keep track of the target position, updating their expectations after each trial in a similar way as they keep track of the co-player's position, with a Bayesian update. They then decide their level of cooperation based on the prediction of model B3 plus a linear term that depends on the expected position of the target scaled by a TargetBias parameter. As the target was random, we did not expect this model to significantly increase the fit compared to model B3.
$${choice}\left(t\right) \sim N\,(T{itXTat} * {coplayer}\_{\exp }\_{pos}\,\left(t\right)+{{{{{\rm{SocialBias}}}}}} \\ +{{{{{\rm{TargetBias}}}}}} * \left(P\left({x}_{{target}}\right)\right){{{{{\rm{;}}}}}} \, 1/{{{{{\rm{Precision}}}}}}) \, ({{{{{\rm{B}}}}}}4)$$
Model B5 further assumes that participants modulate how much they are willing to reciprocate their co-player's behaviour based on the social risk associated with the context. In this model the tit-for-tat term takes the form of a multiplicative TitXTat factor
$${TitXTat\; factor}=\frac{1}{1+q\_{risk} * {social}\_{risk}}\,({{{{{\rm{B}}}}}}5)$$
$${choice}\left(t\right) \sim N({TitXTat\; factor} * {coplayer}\_{\exp }\_{pos}\left(t\right)+{{{{{\rm{SocialBias}}}}}} \\ +{{{{{\rm{TargetBias}}}}}} * \left(P\left({x}_{{target}}\right)\right){{{{{\rm{;}}}}}} \, 1/{{{{{\rm{Precision}}}}}}) \, ({{{{{\rm{B}}}}}}5)$$
where q_risk is a parameter capturing the sensitivity to the social risk induced by the context, which is proportional to the redistribution parameter α:
$${social\; risk}=2\,\alpha -1\,({{{{{\rm{B}}}}}}5-{{{{{\rm{B}}}}}}8)$$
Models B6, B7 and B8 do not include the target term. They all model the TitXTat factor with two parameters, as in
$${TitXTat\; factor}=\frac{{TitXTat}}{1+{q\_risk} * {social\_risk}} \, \left({{{{{\rm{B}}}}}}6-{{{{{\rm{B}}}}}}8\right)$$
$${choice}\left(t\right) \sim N\left({{{{{\rm{TitXTat\; factor}}}}}} * {coplayer}\_{\exp }\_{pos}\left(t\right){{{{{\rm{;}}}}}}\,1/{{{{{\rm{Precision}}}}}}\right)({{{{{\rm{B}}}}}}6-{{{{{\rm{B}}}}}}8)$$
Models B7 and B8 further assume that participants estimate the probability that their co-player will betray their expectations and behave more competitively than expected. This is computed by updating their betrayal expectations after each trial in a Bayesian fashion, using the difference between the observed and expected position of the co-player to update a distribution over all possible discrepancies. This produces, for each trial, an expected level of change in the co-player's position. Models B7 and B8 both weigh this 'expected betrayal' by a betrayal sensitivity parameter and add this 'betrayal term' either to the social risk, increasing it by an amount proportional to the expected betrayal (model B7), or to the choice prediction, shifting it towards competition by an amount proportional to the expected betrayal (model B8). Model B6 does not include any modelling of betrayal.
For models R1-R6, we assumed that participants were simply adjusting their position based on the feedback received in the previous trial. Model R1 assumed that after losing, players would become more competitive and after winning, more cooperative. These updates in different directions would be captured by two parameters Shiftwin and Shiftlose both constrained to be in the range (0:10).
$$ch{oice}\left(t\right) \sim N(ch{oice}(t-1)\pm {Sh{ift}}_{({win},{lose})}{{{{{\rm{;}}}}}} \, 1/{Precision}) \, ({{{{{\rm{R}}}}}}1)$$
Model R2 assumed that after losing, players would shift their position in the opposite direction than they did in the previous trial, while after winning, they would keep shifting in the same direction. These updates in different directions would be captured by two parameters Shiftwin and Shiftlose both constrained to be in the range (0:10).
$$ch{oice}(t) \sim N\left(ch{oice}(t-1)\pm {Sh{ift}}_{\left({win},{lose},\,{sign}(\triangle ch{oice}(t-1))\right)}; \, 1/{Precision}\right) \, ({{{{{\rm{R}}}}}}2)$$
Models R3 and R4 are similar to models R1 and R2 in how they update the position following winning or losing, but players now also take into account their co-player's last level of cooperation θ, scaled by a multiplicative TitXTat parameter, and their own a priori tendency to be more or less cooperative, captured by a SocialBias parameter.
$$ch{oice}\left(t\right) \sim N({{{{{\rm{SocialBias}}}}}}+{{{{{\rm{TitXTat}}}}}} * \theta \left(t-1\right)\pm {Sh{ift}}_{\left({win},{lose}\right)}{{{{{\rm{;}}}}}} \, 1/{Precision}) \, ({{{{{\rm{R}}}}}}3)$$
$$choice(t) \sim N\left({{{{{\rm{SocialBias}}}}}}+{{{{{\rm{TitXTat}}}}}} * \theta (t - 1) \pm {Shift}_{\left({win},{lose},\,{sign}(\triangle choice(t - 1))\right)}; \, 1/{Precision}\right) \, ({{{{{\rm{R}}}}}}4)$$
Models R5 and R6 are identical to models R1 and R2, with the only difference being that each choice is fitted using the actual value of the previous choice made by the player rather than its fitted value (to prevent underfitting because of recursive errors).
We fit all models to individual participant's data from all three social contexts using custom scripts in MATLAB and the MATLAB function fmincon. Log likelihood was computed for each model by
$${LL}\left({model}\right)=\mathop{\sum}\limits_{{subjects}}\mathop{\sum}\limits_{t}{LL}({choice}(t))$$
$${LL}({choice}(t))=\log \left(\sqrt{\frac{{Precision}}{2\pi }}\,\exp \left(-0.5 * {\left(({{{{{\rm{choice}}}}}}({{{{{\rm{t}}}}}})-{{{{{\rm{prediction}}}}}}({{{{{\rm{t}}}}}})) * {Precision}\right)}^{2}\right)\right)$$
We compared models computing the Bayesian information Criterion
$${BIC}\left({model}\right)=k\log \left(n\right)-2 * {LL}({model})$$
where k is the number of parameters for each model and n = number of trials * number of participants.
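As written above, the per-trial log-likelihood and the BIC can be computed as follows (a sketch with made-up choices, predictions and parameter values):

```python
import numpy as np

def trial_log_likelihood(choice, prediction, precision):
    """Gaussian log-likelihood of one choice given the model prediction (as defined above)."""
    return np.log(np.sqrt(precision / (2 * np.pi))) \
        - 0.5 * ((choice - prediction) * precision) ** 2

def bic(log_likelihood, n_params, n_observations):
    return n_params * np.log(n_observations) - 2.0 * log_likelihood

ll = sum(trial_log_likelihood(c, p, precision=15.0)
         for c, p in [(0.42, 0.40), (0.55, 0.50), (0.30, 0.35)])
print(round(bic(ll, n_params=4, n_observations=3), 2))
```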
All Bayesian models significantly outperformed both the simple reactive models and the reward-based ones. To validate this modelling approach and confirm that players were trying to predict the other's position rather than just reciprocating preceding choices, we ran a regression model explaining participants' choices based on both the last position of the co-player and its Bayesian expectation in the following trial (see Supplementary Fig. 6b).
The winning model was B6, a Bayesian model containing features that accounted for people's biases towards cooperativeness, for how the behaviour of the other player influenced subsequent choices, and for the influence of the social context. In this model, participants choose where to position themselves in each trial according to the B6 choice rule together with the TitXTat-factor and social-risk definitions given above.
Precision, SocialBias, TitXTat and q_risk are the four free parameters of the model. Note that TitXTat is a parameter capturing the context-independent amount of tit-for-tat, which is then normalised by the context-dependent social risk.
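A sketch of the generative choice rule of the winning model (illustrative Python; the additive SocialBias term follows the four-parameter description above, and all parameter values are made up rather than fitted):

```python
import numpy as np

def b6_choice(coplayer_exp_pos, alpha, tit_x_tat, q_risk, social_bias, precision, rng):
    """One simulated choice from model B6 in cooperation space [0, 1]."""
    social_risk = 2.0 * alpha - 1.0                         # 0 (coop.), 1 (interm.), 3 (comp.)
    titxtat_factor = tit_x_tat / (1.0 + q_risk * social_risk)
    prediction = titxtat_factor * coplayer_exp_pos + social_bias
    return rng.normal(prediction, 1.0 / precision)          # response noise, SD = 1/Precision

rng = np.random.default_rng(0)
print(round(b6_choice(coplayer_exp_pos=0.6, alpha=2.0, tit_x_tat=0.9,
                      q_risk=0.5, social_bias=0.05, precision=20.0, rng=rng), 3))
```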
Model parameter recovery analysis
We assessed the degree to which we could reliably estimate model parameters given our fitting procedure. More specifically, we generated one simulated behavioural data set (that is, choices for an interacting couple for 60 trials in three different social contexts) using the average parameters estimated originally on the real behavioural data. Additionally, we generated five more simulated behavioural data sets using five parameter sets randomly sampled from the range used in the original fit. For each simulated behavioural data set we ran the winning model B6, this time trying to fit the generated data and identify the set of model parameters that maximized the log-likelihood, in the same way we did for the original behavioural data. To assess the recoverability of our parameters we repeated this procedure 10 times for each simulated data set (that is, 60 repetitions). The recoverability of the parameters was high in almost all cases, as can be seen in Supplementary Fig. 6c.
Model-based regressors
The Bayesian framework allowed us to derive how the counterpart's positions influenced participants' initial impressions of the level of cooperation needed in a given context. Given this framework, we measured how much the posterior distribution over the co-player's position differs from the prior distribution. We did so by computing, for each trial, the Kullback–Leibler divergence (KLD) between the posterior and prior probability distributions over the co-player's response. This absolute difference formally represents the degree to which P2 violated P1's expectation and is a trial-by-trial measure of a "social prediction error" that triggers a change in P1's belief, guiding future decisions. A greater KL divergence indicates a higher cooperation–competition update. We therefore estimated a social prediction error signal by computing the surprise each player experienced when observing the co-player's position, based on their current expectation. In the following equation, where p and q represent respectively the prior and posterior density functions over the co-player's position, the KL divergence is given by:
$${KLD}\left(p,\,q\right)=-\int p\left(x\right)\log q\left(x\right){dx}+\int p\left(x\right)\log p\left(x\right){dx}=\int p\left(x\right)\left(\log p\left(x\right)-\log q\left(x\right)\right){dx}$$
KLD is vital in our fMRI investigation as it provides an integrated measure of the trial-by-trial change that accounts for both the uncertainty about the social context and the dynamics of the opponent. As the KL divergence measures a distance between distributions, it is by definition non-negative. Therefore it does not provide information about the direction of change between the distributions: we can think of it as an unsigned prediction error capturing the strength of the update. To capture the direction of change we also compute its sign as
$${KLDsign}\left(p,\, q\right)=\left\{\begin{array}{c}1\,{if}\int {x\; q}\left(x\right){dx} \, > \int {x\; p}\left(x\right){dx}\\ -1\,{otherwise}\end{array}\right.$$
That is, KLDsign is positive if the co-player was more cooperative than expected, so that, after observing the co-player's behaviour, the co-player is expected to be more cooperative in the next trial.
These estimates are fundamental to identify the brain areas that covary with the extent and the direction with which participants update their expectations about their counterpart's strategy given the social context. For this, KLD and KLDsign were used as parametric regressors in the fMRI analysis.
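On the discretised grid used by the model, KLD and its sign can be computed as follows (illustrative Python sketch; the example prior and posterior are arbitrary Gaussians):

```python
import numpy as np

def kld_and_sign(prior, posterior, theta):
    """Unsigned update strength KLD(prior || posterior) and its direction on a grid."""
    eps = 1e-12
    kld = np.trapz(prior * (np.log(prior + eps) - np.log(posterior + eps)), theta)
    sign = 1 if np.trapz(theta * posterior, theta) > np.trapz(theta * prior, theta) else -1
    return kld, sign

theta = np.arange(0.0, 1.0 + 1e-9, 0.01)
gauss = lambda mu, s: np.exp(-0.5 * ((theta - mu) / s) ** 2)
prior = gauss(0.5, 0.10); prior /= np.trapz(prior, theta)
post = gauss(0.7, 0.08); post /= np.trapz(post, theta)
print(kld_and_sign(prior, post, theta))   # positive sign: co-player more cooperative than expected
```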
MRI data collection
We acquired the fMRI data using a 3T Philips Achieva MRI scanner (Philips, Netherlands). Specifically, we collected functional Echo-Planar-Imaging (EPI) data using a 32-channel SENSE head coil with an anterior–posterior fold-over direction (SENSE factor: 2.3; repetition time: 1.5 s; echo time: 40 ms; number of slices: 40; number of voxels: 68 × 68; in-plane resolution: 3 × 3 mm; slice thickness: 3 mm; flip angle: 80°). Slices were collected in an interleaved order. Altogether, we collected three separate runs of 450 volumes each, corresponding to three blocks of 60 trials each, for a total of 180 trials in the main experimental task. One pair of participants had their first scan interrupted after 193 volumes (30 trials) because of a technical problem. Another pair of participants was run with a TR of 3 s by mistake. Anatomical images were acquired using a MPRAGE T1-weighted sequence that yielded images with a 1 × 1 × 1 mm resolution (160 slices; number of voxels: 256 × 256; repetition time: 8.2 ms; echo time: 3.7 ms). We also acquired a B0 map using a multi-shot gradient echo sequence, which was subsequently used to correct for distortions in the EPI data due to B0 inhomogeneities (echo time: 2.3 ms; delta echo time: 5 ms; isotropic resolution: 3 mm; matrix: 68 × 68 × 32; repetition time: 383 ms; flip angle: 90°).
fMRI pre-processing
Pre-processing was performed using the FMRIB Software Library (FSL; Functional MRI of the Brain, Oxford, UK) and included: head-motion correction, slice-timing correction, high-pass filtering (>100 s), and spatial smoothing (with a Gaussian kernel of 8 mm full-width at half maximum). To register the EPI images to standard space, we first transformed them into each individual's high-resolution space with a linear six-parameter rigid-body transformation. We then registered the images to standard space (Montreal Neurological Institute, MNI) using FMRIB's Non-linear Image Registration Tool with a resolution warp of 10 mm. These pre-processed volumes were used for the statistical analyses presented in this study.
fMRI analyses
We performed whole-brain statistical analyses of functional data using a multilevel approach within the generalized linear model (GLM) framework, as implemented in FSL through the FEAT module:
$$Y=X\beta+\varepsilon={{\beta }_{1}X}_{1}+{{\beta }_{2}X}_{2}+{\ldots+{\beta }_{N}X}_{N}+\varepsilon$$
where Y is a T × 1 (T time samples) column vector containing the time series data for a given voxel, and X is a T × N (N regressors) design matrix with columns representing each of the psychological regressors convolved with a hemodynamic response function specific for human brains100,101. β is a N × 1 column vector of regression coefficients and ε a T × 1 column vector of residual error terms. Using this framework we initially performed a first-level fixed-effects analysis to process each individual experimental run, which were then combined in a second-level mixed-effects analysis (FLAME 1 + 2) treating session as a random effect, and a third level to combine data across subjects, treating participants as a random effect (we had the same number of sessions across participants). For all analyses, we performed cluster inference using a cluster-defining threshold of |Z| > 3.1 with a FWE-corrected threshold of P = 0.001. Time series statistical analysis was carried out using FMRIB's improved linear model with local autocorrelation correction. Applying this framework, we performed the GLMs highlighted below.
GLM 1
Our first GLM included four unmodulated stick regressors aligned with (i) the beginning of the trial (TRIAL in Supplementary Table 2), (ii) the player's response (PR), (iii) the time at which the response of the co-player was revealed (OR) and (iv) the time at which the target appeared (TARGET). Additionally, we included six regressors capturing trial-by-trial specific information: (1) a stick function at (i) the beginning of the trial, parametrically modulated by the expected position of the co-player as derived through the prior distribution for that trial obtained from the Bayesian model (PriorPos); (2) a stick function at (ii) response time, modulated by trial-by-trial changes in the level of cooperation chosen by the player (Pcoop); (3 and 4) two stick functions at (iii) the time at which the response of the co-player was revealed, parametrically modulated respectively by the value of the KL divergence between prior and posterior computed in that trial (absPE) and by its sign (signPE), the latter taking only the values +1 and −1; and finally (5 and 6) two stick functions at (iv) the time at which the target appeared, parametrically modulated respectively by the value of the reward allocated in the trial (Rew) and by a regressor signalling whether the player won or lost (Win), which could also only take the values +1 and −1. All parametrically modulated regressors were z-scored.
GLM 2

Our second GLM included a single boxcar regressor covering the duration of each trial from its onset to the appearance of the target.
For both GLMs we examined both the average across the three contexts and the contrast between the competitive and cooperative contexts.
GLM 3

Our third GLM was identical to GLM 1 in all respects except for the regressors encoding the prediction error. Here, in order to capture the full parametric effect of the PE, instead of having two parametric regressors we had four unmodulated regressors for four different trial groupings based on the KLD value and its sign. In short, we binned trials into four groups based on their absPE and signPE values (high positive, low positive, low negative and high negative). The cut-off value distinguishing high and low prediction errors was the median value across all prediction errors with the same sign. Each of the four regressors was an unmodulated stick regressor aligned with the time at which the response of the co-player was revealed in trials belonging to the corresponding bin.
ROI analysis
To quantify the modulation of activity across conditions, we extracted the average signal of the neural activation for all three social contexts in regions of interest (ROIs), defined as three- or five-voxel-radius spherical masks centred on the peaks of the activations at the group level. We back-projected these masks and extracted individual participant betas. We split each participant's time series into trials and resampled each trial to 10 s at a resolution of 50 ms. We then carried out a general linear model across trials at every time point in each participant independently. Lastly, we calculated group-average effect sizes at each time point, and their standard errors. To analyse the predictive power of an area, we split trials into two groups based on whether in the next trial the player was more cooperative or more competitive. For each of the two groups, we extracted the time courses of the BOLD signal in the selected ROI at the time of the other player's response and examined whether signals on a trial were predictive of a change in behaviour (an increase or decrease in competitiveness) on trial t + 1. To test the full parametric effect of the prediction error in the two clusters in the TPJ, we computed the average population betas within the two ROIs for each of the four PE regressors of GLM 3, corresponding to four groups of trials based on their absPE and signPE values (high positive, low positive, low negative and high negative).
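The per-timepoint GLM across trials can be sketched as follows (illustrative Python with random numbers standing in for the resampled ROI time courses and the trial-wise regressors; in the actual analysis this was run per participant before group averaging):

```python
import numpy as np

def timepoint_betas(trial_timecourses, regressors):
    """Per-timepoint GLM across trials.
    trial_timecourses: (n_trials, n_timepoints) resampled BOLD signal in the ROI.
    regressors:        (n_trials, n_regressors) trial-wise values (e.g. absPE, signPE)."""
    X = np.column_stack([np.ones(len(regressors)), regressors])   # add an intercept
    betas, *_ = np.linalg.lstsq(X, trial_timecourses, rcond=None)
    return betas      # shape (n_regressors + 1, n_timepoints): one beta time course per regressor

rng = np.random.default_rng(1)
bold = rng.standard_normal((60, 200))       # 60 trials x 200 samples (10 s at 50 ms)
design = rng.standard_normal((60, 2))       # e.g. absPE and signPE for each trial
print(timepoint_betas(bold, design).shape)  # (3, 200)
```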
The pre-processed fMRI and behavioural data generated in this study have been deposited in an Open Science Framework project [https://osf.io/sydea]. The raw fMRI data are protected and are not available due to data privacy laws.
The code to generate the results and the figures of this study is available in an Open Science Framework project [https://osf.io/sydea].
Ellis, P. E. & Free, J. B. Social organization of animal communities. Nature 201, 861–863 (1964).
Fehr, E. & Fischbacher, U. The nature of human altruism. Nature 425, 785–791 (2003).
De Waal, F. B. M., Leimgruber, K. & Greenberg, A. R. Giving is self-rewarding for monkeys. Proc. Natl Acad. Sci. USA 105, 13685–13689 (2008).
Stallen, M. & Sanfey, A. G. The cooperative brain. Neuroscientist 19, 292–303 (2013).
Kurzban, R., Burton-Chellew, M. N. & West, S. A. The evolution of altruism in humans. Annu. Rev. Psychol. 66, 575–599 (2015).
Danielson, P. Competition among cooperators: Altruism and reciprocity. Proc. Natl Acad. Sci. USA 99, 7237–7242 (2002).
Efferson, C. & Fehr, E. Simple moral code supports cooperation. Nature 555, 169–170 (2018).
Fehr, E. & Schmidt, K. M. A theory of fairness, competition, and cooperation. Q. J. Econ. 114, 817–868 (1999).
Rilling, J. K. et al. A neural basis for social cooperation. Neuron 35, 395–405 (2002).
Sanfey, A. G., Rilling, J. K., Aronson, J. A., Nystrom, L. E. & Cohen, J. D. The neural basis of economic decision-making in the Ultimatum Game. Science. 300, 1755–1758 (2003).
Fouragnan, E. et al. Reputational priors magnify striatal responses to violations of trust. J. Neurosci. 33, 3602–3611 (2013).
Palminteri, S., Khamassi, M., Joffily, M. & Coricelli, G. Contextual modulation of value signals in reward and punishment learning. Nat. Commun. 6, 8096 (2015).
Connelly, B. D., Bruger, E. L., McKinley, P. K. & Waters, C. M. Resource abundance and the critical transition to cooperation. J. Evol. Biol. 30, 750–761 (2017).
Eso, P., Nocke, V. & White, L. Competition for scarce resources. RAND J. Econ. 41, 524–548 (2010).
Rand, D. G. & Nowak, M. A. Human cooperation. Trends Cogn. Sci. 17, 413–425 (2013).
Hertel, G. & Fiedler, K. Affective and cognitive influences in social dilemma game. Eur. J. Soc. Psychol. 24, 131–145 (1994).
Barreda-Tarrazona, I., Jaramillo-Gutiérrez, A., Pavan, M. & Sabater-Grande, G. Individual characteristics vs. experience: An experimental study on cooperation in prisoner's dilemma. Front. Psychol. 8, 596 (2017).
Proto, E. & Rustichini A. Cooperation and Personality. The Warwick Economics Research Paper Series (TWERPS) 1045 (2013).
Bó, P. D. & Fréchette, G. R. The evolution of cooperation in infinitely repeated games: Experimental evidence. Am. Econ. Rev. 101, 411–429 (2011).
Dreber, A., Rand, D. G., Fudenberg, D. & Nowak, M. A. Winners don't punish. Nature 452, 348–351 (2008).
Axelrod, R. & Hamilton, W. D. The evolution of cooperation. Science 211, 1390–1396 (1981).
Barclay, P. & Willer, R. Partner choice creates competitive altruism in humans. Proc. R. Soc. B Biol. Sci. 274, 749–753 (2007).
Feinberg, M., Willer, R. & Schultz, M. Gossip and Ostracism Promote Cooperation in Groups. Psychol. Sci. 25, 656–664 (2014).
Kraft-Todd, G., Yoeli, E., Bhanot, S. & Rand, D. Promoting cooperation in the field. Current Opinion in Behavioral Sciences 3, 96–101 (2015).
Archetti, M. et al. Economic game theory for mutualism and cooperation. Ecology Letters 14, 1300–1312 (2011).
Nash, J. F. Equilibrium points in n-person games. Proc. Natl Acad. Sci. 36, 48–49 (1950).
Babichenko, Y. & Rubinstein, A. Communication complexity of approximate Nash equilibria. in Proceedings of the 49th Annual ACM SIGACT Symposium on Theory of Computing Part F128415, 878–889 (2017).
Jankowski, R. Punishment in Iterated Chicken and Prisoner's Dilemma Games. Ration. Soc. 2, 449–470 (1990).
Frith, C. D. & Frith, U. The Neural Basis of Mentalizing. Neuron 50, 531–534 (2006).
van den bos, W., van Dijk, E., Westenberg, M., Rombouts, S. A. R. B. & Crone, E. A. What motivates repayment? Neural correlates of reciprocity in the Trust Game. Soc. Cogn. Affect. Neurosci. 4, 294–304 (2009).
King-Casas, B. et al. Getting to know you: Reputation and trust in a two-person economic exchange. Science. 308, 78–83 (2005).
Behrens, T. E. J., Hunt, L. T., Woolrich, M. W. & Rushworth, M. F. S. Associative learning of social value. Nature 456, 245–249 (2008).
Behrens, T. E. J., Hunt, L. T. & Rushworth, M. F. S. The computation of social behavior. Science 324, 1160–1164 (2009).
Bahrami, B. et al. Optimally interacting minds. Science 329, 1081–1085 (2010).
Diaconescu, A. O. et al. Hierarchical prediction errors in midbrain and septum during social learning. Soc. Cogn. Affect. Neurosci. 12, 618–634 (2017).
Zhang, L. & Gläscher, J. A brain network supporting social influences in human decision-making. Sci. Adv. 6, eabb4159 (2020).
Hampton, A. N., Bossaerts, P. & O'Doherty, J. P. Neural correlates of mentalizing-related computations during strategic interactions in humans. Proc. Natl Acad. Sci. USA 105, 6741–6746 (2008).
Apps, M. A. J., Rushworth, M. F. S. & Chang, S. W. C. The anterior cingulate gyrus and social cognition: tracking the motivation of others. Neuron 90, 692–707 (2016).
Amodio, D. M. & Frith, C. D. Meeting of minds: The medial frontal cortex and social cognition. Nat. Rev. Neurosci. 7, 268–277 (2006).
Saxe, R. The right temporo-parietal junction: a specific brain region for thinking about thoughts. in Handbook of Theory of Mind, 1–35 (2010).
Hertz, U. et al. Neural computations underpinning the strategic management of influence in advice giving. Nat. Commun. 8, 2191 (2017).
Frith, U. & Frith, C. The social brain: Allowing humans to boldly go where no other species has been. Philos. Trans. Royal Soc. B: Biol. Sci. 365, 165–176 (2010).
Apps, M. A. J., Lesage, E. & Ramnani, N. Vicarious reinforcement learning signals when instructing others. J. Neurosci. 35, 2904–2913 (2015).
Dal Monte, O., Chu, C. C. J., Fagan, N. A. & Chang, S. W. C. Specialized medial prefrontal–amygdala coordination in other-regarding decision preference. Nat. Neurosci. 23, 565–574 (2020).
Balsters, J. H. et al. Disrupted prediction errors index social deficits in autism spectrum disorder. Brain 140, 235–246 (2017).
Hill, M. R., Boorman, E. D. & Fried, I. Observational learning computations in neurons of the human anterior cingulate cortex. Nat. Commun. 7, 12722 (2016).
Matsumoto, M., Matsumoto, K., Abe, H. & Tanaka, K. Medial prefrontal cell activity signaling prediction errors of action values. Nat. Neurosci. 10, 647–656 (2007).
Hotelling, H. Stability in Competition. Econ. J. 39, 41–57 (1929).
Nash, J. Non-Cooperative Games. Ann. Math. 54, 286–295 (1951).
Flood, M. M. Some Experimental Games. Manage. Sci. 5, 5–26 (1958).
Fouragnan, E., Retzler, C., Mullinger, K. & Philiastides, M. G. Two spatiotemporally distinct value systems shape reward-based learning in the human brain. Nat. Commun. 6, 8107 (2015).
Fouragnan, E., Queirazza, F., Retzler, C., Mullinger, K. J. & Philiastides, M. G. Spatiotemporal neural characterization of prediction error valence and surprise during reward learning in humans. Sci. Rep. 7, 4762 (2017).
Hayden, B. Y., Heilbronner, S. R., Pearson, J. M. & Platt, M. L. Surprise signals in anterior cingulate cortex: Neuronal encoding of unsigned reward prediction errors driving adjustment in behavior. J. Neurosci. 31, 4178–4187 (2011).
Rouhani, N. & Niv, Y. Signed and unsigned reward prediction errors dynamically enhance learning and memory. Elife 10, e61077 (2021).
Takahashi, H., Izuma, K., Matsumoto, M., Matsumoto, K. & Omori, T. The anterior insula tracks behavioral entropy during an interpersonal competitive game. PLoS ONE 10, e0123329 (2015).
Polosan, M. et al. An fMRI study of the social competition in healthy subjects. Brain Cogn. 77, 401–411 (2011).
Hillman, K. L. & Bilkey, D. K. Neural encoding of competitive effort in the anterior cingulate cortex. Nat. Neurosci. 15, 1290–1297 (2012).
Špiláková, B., Shaw, D. J., Czekóová, K. & Brázdil, M. Dissecting social interaction: Dual-fMRI reveals patterns of interpersonal brain-behavior relationships that dissociate among dimensions of social exchange. Soc. Cogn. Affect. Neurosci. 14, 225–235 (2019).
Killingback, T., Doebeli, M. & Knowlton, N. Variable investment, the Continuous Prisoner's Dilemma, and the origin of cooperation. Proc. R. Soc. B Biol. Sci. 266, 1723–1728 (1999).
Roberts, G. & Renwick, J. S. The development of cooperative relationships: An experiment. Proc. R. Soc. B Biol. Sci. 270, 2279–2283 (2003).
Capraro, V., Jordan, J. J. & Rand, D. G. Heuristics guide the implementation of social preferences in one-shot Prisoner's Dilemma experiments. Sci. Rep. 4, 6790 (2014).
Decety, J., Jackson, P. L., Sommerville, J. A., Chaminade, T. & Meltzoff, A. N. The neural bases of cooperation and competition: An fMRI investigation. Neuroimage 23, 744–51 (2004).
Tsoi, L., Dungan, J., Waytz, A. & Young, L. Distinct neural patterns of social cognition for cooperation versus competition. Neuroimage 137, 86–96 (2016).
Lissek, S. et al. Cooperation and deception recruit different subsets of the theory-of-mind network. PLoS ONE 3, e2023 (2008).
Jenkins, A. C. & Mitchell, J. P. Mentalizing under uncertainty: Dissociated neural responses to ambiguous and unambiguous mental state inferences. Cereb. Cortex 20, 404–410 (2010).
Koster-Hale, J. & Saxe, R. Theory of Mind: A Neural Prediction Problem. Neuron 79, 836–48 (2013).
Boorman, E. D., O'Doherty, J. P., Adolphs, R. & Rangel, A. The behavioral and neural mechanisms underlying the tracking of expertise. Neuron 80, 1558–1571 (2013).
Dungan, J. A., Stepanovic, M. & Young, L. Theory of mind for processing unexpected events across contexts. Soc. Cogn. Affect. Neurosci. 11, 1183–1192 (2016).
Kim, M. J., Mende-Siedlecki, P., Anzellotti, S. & Young, L. Theory of Mind following the Violation of Strong and Weak Prior Beliefs. Cereb. Cortex 31, 884–898 (2021).
Park, B. K., Fareri, D., Delgado, M. & Young, L. The role of right temporoparietal junction in processing social prediction error across relationship contexts. Soc. Cogn. Affect. Neurosci. 16, 772–781 (2021).
Hackel, L. M., Doll, B. B. & Amodio, D. M. Instrumental learning of traits versus rewards: Dissociable neural correlates and effects on choice. Nat. Neurosci. 18, 1233–1235 (2015).
Park, S. A., Sestito, M., Boorman, E. D. & Dreher, J. C. Neural computations underlying strategic social decision-making in groups. Nat. Commun. 10, 5287 (2019).
Hackel, L. M., Wills, J. A. & Van Bavel, J. J. Shifting prosocial intuitions: Neurocognitive evidence for a value-based account of group-based cooperation. Soc. Cogn. Affect. Neurosci. 15, 371–381 (2020).
Mars, R. B. et al. Connectivity-based subdivisions of the human right 'temporoparietal junction area': evidence for different areas participating in different cortical networks. Cereb. Cortex 22, 1894–903 (2012).
Schurz, M., Tholen, M. G., Perner, J., Mars, R. B. & Sallet, J. Specifying the brain anatomy underlying temporo-parietal junction activations for theory of mind: A review using probabilistic atlases from different imaging modalities. Hum. Brain Mapp. 38, 4788–4805 (2017).
Fouragnan, E. et al. The macaque anterior cingulate cortex translates counterfactual choice value into actual behavioral change. Nat. Neurosci. 22, 797–808 (2018).
Fouragnan, E., Retzler, C. & Philiastides, M. G. Separate neural representations of prediction error valence and surprise: Evidence from an fMRI meta-analysis. Hum. Brain Mapp. 39, 2887–2906 (2018).
Philiastides, M. G., Biele, G., Vavatzanidis, N., Kazzer, P. & Heekeren, H. R. Temporal dynamics of prediction error processing during reward-based decision making. Neuroimage 53, 221–232 (2010).
O'Reilly, J. X. et al. Dissociable effects of surprise and model update in parietal and anterior cingulate cortex. Proc. Natl Acad. Sci. USA 110, E3660–E3669 (2013).
Izuma, K. The neural basis of social influence and attitude change. Curr. Opin. Neurobiol. 23, 456–462 (2013).
Klucharev, V., Hytönen, K., Rijpkema, M., Smidts, A. & Fernández, G. Reinforcement learning signal predicts social conformity. Neuron 61, 140–151 (2009).
Campbell-Meiklejohn, D., Simonsen, A., Frith, C. D. & Daw, N. D. Independent neural computation of value from other people's confidence. J. Neurosci. 37, 673–684 (2017).
Apps, M. A. J. & Ramnani, N. Contributions of the medial prefrontal cortex to social influence in economic decision-making. Cereb. Cortex 27, 4635–4648 (2017).
Carrington, S. J. & Bailey, A. J. Are there theory of mind regions in the brain? A review of the neuroimaging literature. Hum. Brain Mapp. 30, 2313–2335 (2009).
Piva, M. et al. The dorsomedial prefrontal cortex computes task-invariant relative subjective value for self and other. Elife 8, e44939 (2019).
Feng, C. et al. Prediction of trust propensity from intrinsic brain morphology and functional connectome. Hum. Brain Mapp. 42, 175–191 (2021).
Basile, B. M., Schafroth, J. L., Karaskiewicz, C. L., Chang, S. W. C. & Murray, E. A. The anterior cingulate cortex is necessary for forming prosocial preferences from vicarious reinforcement in monkeys. PLoS Biol. 18, e3000677 (2020).
Apps, M. A. J., Green, R. & Ramnani, N. Reinforcement learning signals in the anterior cingulate cortex code for others' false beliefs. Neuroimage 64, 1–9 (2013).
Lockwood, P. L., Apps, M. A. J. & Chang, S. W. C. Is There a 'Social' Brain? Implementations and algorithms. Trends Cogn. Sci. 24, 802–813 (2020).
Lockwood, P. L., Apps, M. A. J., Valton, V., Viding, E. & Roiser, J. P. Neurocomputational mechanisms of prosocial learning and links to empathy. Proc. Natl Acad. Sci. 113, 9763–9768 (2016).
Saxe, R. & Kanwisher, N. People thinking about thinking people: The role of the temporo-parietal junction in 'theory of mind'. Social Neuroscience: Key Readings 19, 1835–1842 (2013).
Joiner, J., Piva, M., Turrin, C. & Chang, S. W. C. Social learning through prediction error in the brain. npj Sci. Learn. 2, 8 (2017).
Perc, M. & Szolnoki, A. Coevolutionary games-A mini review. BioSystems 99, 109–125 (2010).
Askari, G., Gordji, M. E. & Park, C. The behavioral model and game theory. Palgrave Commun. 5, 57 (2019).
Mobbs, D., Trimmer, P. C., Blumstein, D. T. & Dayan, P. Foraging for foundations in decision neuroscience: Insights from ethology. Nat. Rev. Neurosci. 19, 419–427 (2018).
Gabay, A. S. & Apps, M. A. J. Foraging optimally in social neuroscience: computations and methodological considerations. Soc. Cogn. Affect. Neurosci. 16, 782–794 (2021).
Nee, D. E. fMRI replicability depends upon sufficient individual-level data. Commun. Biol. 2, 130 (2019).
Van Lange, P. A. M. The pursuit of joint outcomes and equality in outcomes: An integrative model of social value orientation. J. Pers. Soc. Psychol. 77, 337–349 (1999).
Devaine, M., Hollard, G. & Daunizeau, J. The Social Bayesian Brain: Does mentalizing make a difference when we learn? PLoS Comput. Biol. 10, e1003992 (2014).
Nakahara, K., Hayashi, T., Konishi, S. & Miyashita, Y. Functional MRI of macaque monkeys performing a cognitive set-shifting task. Science 295, 1532–1536 (2002).
Kagan, I., Iyer, A., Lindner, A. & Andersen, R. A. Space representation for eye movements is more contralateral in monkeys than in humans. Proc. Natl Acad. Sci. USA 107, 7933–7938 (2010).
This work was supported by the Economic and Social Research Council (ESRC; grant ES/L012995/1 to M.G.P.), the Biotechnology and Biological Sciences Research Council (BBSRC; David Phillips Fellowship BB/R010668/2 to M.A.J.A.) and by the UK Research and Innovation (UKRI; grant MR/T023007/1 to E.F.F.). We also thank Frances Crabbe for assistance with data collection.
These authors contributed equally: M. A. Pisauro, E. F. Fouragnan.
These authors jointly supervised this work: M. A. J Apps, M. G. Philiastides.
Department of Experimental Psychology, University of Oxford, Oxford, UK
M. A. Pisauro, E. F. Fouragnan & M. A. J. Apps
Centre for Human Brain Health, School of Psychology, University of Birmingham, Birmingham, UK
M. A. Pisauro & M. A. J. Apps
School of Psychology and Neuroscience, University of Glasgow, Glasgow, UK
M. A. Pisauro, E. F. Fouragnan, D. H. Arabadzhiyska & M. G. Philiastides
Brain Research Imaging Center and School of Psychology, Faculty of Health, University of Plymouth, Plymouth, UK
E. F. Fouragnan
M. A. Pisauro
D. H. Arabadzhiyska
M. A. J. Apps
M. G. Philiastides
M.A.P., E.F.F. and M.G.P. designed the experiments. M.A.P. and D.H.A. collected the data. M.A.P., E.F.F., M.A.J.A. and M.G.P. analysed the data and wrote the paper. All authors discussed the results and implications and commented on the manuscript at all stages.
Correspondence to M. A. Pisauro.
Nature Communications thanks the anonymous reviewers for their contribution to the peer review of this work. Peer reviewer reports are available.
Pisauro, M.A., Fouragnan, E.F., Arabadzhiyska, D.H. et al. Neural implementation of computational mechanisms underlying the continuous trade-off between cooperation and competition. Nat Commun 13, 6873 (2022). https://doi.org/10.1038/s41467-022-34509-w
A group of $N$ students, where $N < 50$, is on a field trip. If their teacher puts them in groups of 8, the last group has 5 students. If their teacher instead puts them in groups of 6, the last group has 3 students. What is the sum of all possible values of $N$?
We are given that $N\equiv 5\pmod{8}$ and $N\equiv 3\pmod{6}$. We begin checking numbers which are 5 more than a multiple of 8, and we find that 5 and 13 are not 3 more than a multiple of 6, but 21 is 3 more than a multiple of 6. Thus 21 is one possible value of $N$. By the Chinese Remainder Theorem, the integers $x$ satisfying $x\equiv 5\pmod{8}$ and $x\equiv 3\pmod{6}$ are those of the form $x=21+\text{lcm}(6,8)k = 21 + 24 k$, where $k$ is an integer. Thus the 2 solutions less than $50$ are 21 and $21+24(1) = 45$, and their sum is $21+45=\boxed{66}$.
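A brute-force check of this answer (an illustrative Python snippet):

```python
# sum all N < 50 leaving remainder 5 mod 8 and 3 mod 6
print(sum(n for n in range(1, 50) if n % 8 == 5 and n % 6 == 3))  # 66
```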
\begin{document}
\hyphenation{pro-blems sol-ving complexi-ty dia-gram poly-gonal analy-sis ins-tance des-cribed va-lues}
\pagestyle{headings} \addtocmark{Synergistic Computation of Planar Maxima and Convex Hull}
\mainmatter \title{ Synergistic Computation \\of Planar Maxima and Convex Hull }
\titlerunning{Synergistic Computation of Planar Maxima and Convex Hull}
\author{
J\'er\'emy Barbay\inst{1}
\and
Carlos Ochoa\inst{1}\thanks{Corresponding author.} }
\institute{
Departamento de Ciencias de la Computaci\'on, Universidad de Chile, Chile\\
\email{[email protected], [email protected].} }
\maketitle
\begin{abstract}
Refinements of the worst case complexity over instances of fixed input size consider the input order or the input structure, but rarely both at the same time. Barbay et al. [2016] described ``synergistic'' solutions on multisets, which take advantage of the input order and the input structure, such as to asymptotically outperform any comparable solution which takes advantage only of one of those features. We consider the extension of their results to the computation of the \textsc{Maxima Set} and the \textsc{Convex Hull} of a set of planar points. After revisiting and improving previous approaches taking advantage only of the input order or of the input structure, we describe synergistic solutions taking optimally advantage of various notions of the input order and input structure in the plane. As intermediate results, we describe and analyze the first adaptive algorithms for \textsc{Merging Maxima} and \textsc{Merging Convex Hulls}. \end{abstract}
\begin{center}
\begin{minipage}{.9\textwidth}
\noindent{\bf Keywords:}
Convex Hull,
Dominance Query,
Maxima,
Membership Query,
Multivariate Analysis,
Synergistic.
\end{minipage} \end{center}
\section{Introduction}
One way to close the gap between practical performance and the worst case complexity over instances of fixed input size is to refine the latter, considering smaller classes of instances. Such measures of difficulty can be seen along two axes: some depend on the \emph{structure} of the input, such as the repetitions in a multiset~\cite{1976-JComp-SortingAndSearchingInMultisets-MunroSpira}, or the positions of the input points in the plane~\cite{2009-FOCS-InstanceOptimalGeometricAlgorithms-AfshaniBarbayChan} or in higher dimensions~\cite{1985-SOCG-OutputSizeSensitiveAlgorithmsForFindingMaximalVectors-KirkpatrickSeidel}; while others depend on the \emph{order} in which the input is given, such as for permutations~\cite{1992-ACJ-AnOverviewOfAdaptiveSorting-MoffatPetersson} but also for points in the plane~\cite{2002-SWAT-AdaptiveAlgorithmsForConstructingConvexHullsAndTriangulationsOfPolygonalChains-LevcopoulosLingasMitchell,2011-IEICE-AdaptiveAlgorithmsForPlanarConvexHullProblems-AhnOkamoto}.
Barbay~et al.~\cite{2016-ARXIV-SynergisticSortingAndDeferredDataStructuresOnMultiSets-BarbayOchoaSatty} described various ``synergistic'' solutions on multisets, which take advantage of both the input structure and the input order, in such a way that each of their solutions is never asymptotically slower than other solutions taking advantage of a subset of these features, but also so that on some family of instances, each of their solution performs an order of magnitude faster than any solution taking advantage of only a subset of those features. They left open the generalization of their results to higher dimensions.
In the context of the computation of the \textsc{Maxima Set} and of the \textsc{Convex Hull}, various refinements of the worst case complexity over instances of fixed size have been known for some time.
Kirkpatrick and Seidel described algorithms optimal in the worst case over instances of fixed input and output size, first in 1985 for the computation of the \textsc{Maxima Set} of points in any dimension~\cite{1985-SOCG-OutputSizeSensitiveAlgorithmsForFindingMaximalVectors-KirkpatrickSeidel} and then in 1986 for the computation of the \textsc{Convex Hull} in the plane~\cite{1986-JCom-TheUltimatePlanarConvexHullAlgorithm-KirkpatrickSeidel}: such results can be classified as focused on the \textbf{input structure}, and were further refined in 2009 by Afshani et al.~\cite{2009-FOCS-InstanceOptimalGeometricAlgorithms-AfshaniBarbayChan} in 2 and 3 dimensions.
Following a distinct approach, Levcopoulos et al.~\cite{2002-SWAT-AdaptiveAlgorithmsForConstructingConvexHullsAndTriangulationsOfPolygonalChains-LevcopoulosLingasMitchell} in 2002, and Ahn and Okamoto~\cite{2011-IEICE-AdaptiveAlgorithmsForPlanarConvexHullProblems-AhnOkamoto} in 2011, studied the computation of the \textsc{Convex Hull} in conjunction with various notions of \textbf{input order}: these results can be generalized to take advantage of the input order when computing \textsc{Maxima Sets} (Section~\ref{sec:inputOrderAdaptivePlanarMaxima}) and can be further refined using recent techniques (Section~\ref{sec:inputOrderAdaptiveConvexHull}).
Yet no algorithm (beyond a trivial dovetailing combination of the solutions described above) is known to take advantage of both the \textbf{input structure} and \textbf{input order} at the same time for the computation of the \textsc{Maxima Set} or of the \textsc{Convex Hull} of points, in the plane or in higher dimension, nor for any other problem than \textsc{Sorting Multisets}.
\paragraph{Hypothesis.} It seems reasonable to expect that Barbay et al.'s synergistic results~\cite{2016-ARXIV-SynergisticSortingAndDeferredDataStructuresOnMultiSets-BarbayOchoaSatty} on \textsc{Sorting Multisets} should generalize to similar problems in higher dimension, such as the computation of the \textsc{Maxima Set} and of the \textsc{Convex Hull} of a set of points in the plane.
Yet these two problems present new difficulties of their own:
(1) while the results on multisets~\cite{2016-ARXIV-SynergisticSortingAndDeferredDataStructuresOnMultiSets-BarbayOchoaSatty} are strongly based on a variant of Demaine et al.'s instance optimal algorithm to \textsc{Merge Multisets}~\cite{2000-SODA-AdaptiveSetIntersectionsUnionsAndDifferences-DemaineLopezOrtizMunro}, at this date no such results are known for \textsc{Merging Maxima Sets}, and the closest known result for \textsc{Merging Convex Hulls}~\cite{2008-CCCG-ConvexHullOfTheUnionOfConvexObjectsInThePlane-BarbayChen} (from 2008) is not adaptive to the size of the output, and hence not adaptive to the structure of the instance; furthermore
(2) whereas many \emph{input order adaptive} results are known for \textsc{Sorting Multisets} (with two surveys in 1992 on the topic~\cite{1992-ACJ-AnOverviewOfAdaptiveSorting-MoffatPetersson,1992-ACMCS-ASurveyOfAdaptiveSortingAlgorithms-EstivillCastroWood}, and various additional results~\cite{2009-Chapter-PartialSolutionAndEntropy-Takaoka,2013-TCS-OnCompressingPermutationsAndAdaptiveSorting-BarbayNavarro} since then), it seems that none are known for the computation of the \textsc{Maxima Set} and that only a few are known for the computation of the \textsc{Convex Hull}~\cite{2011-IEICE-AdaptiveAlgorithmsForPlanarConvexHullProblems-AhnOkamoto,2002-SWAT-AdaptiveAlgorithmsForConstructingConvexHullsAndTriangulationsOfPolygonalChains-LevcopoulosLingasMitchell}.
\paragraph{Our Results.} \begin{LONG}After reviewing previous results on the computation of \textsc{Convex Hull} taking advantage of either the input structure (Section~\ref{sec:InputStructure}) or the input order (Section~\ref{sec:InputOrder}), and one result on \textsc{Sorting Multisets} taking advantage of both (Section~\ref{sec:synerg}), w\end{LONG}\begin{SHORT} W\end{SHORT}e confirm the hypothesis by
(1) presenting new solutions for \textsc{Merging Maxima Sets} (Section~\ref{sec:quick-maxima}) and \textsc{Merging Convex Hulls} (Section~\ref{sec:UpperHullUnion}) in the plane,
(2) defining new techniques to take advantage of the input order to compute \textsc{Maxima Sets} (Section~\ref{sec:inputOrderAdaptivePlanarMaxima}),
(3) improving previous techniques to analyze the computation of \textsc{Convex Hulls} in function of the input order (Section~\ref{sec:inputOrderAdaptiveConvexHull}), and
(4) synthesizing all those results in synergistic algorithms to compute \textsc{Maxima Sets} (Section \ref{sec:synergMaxima}) and \textsc{Convex Hulls} (Section~\ref{sec:synergisticUpperHull}) of a set of planar points.
For the sake of pedagogy, \textbf{we present our results incrementally, from the simplest to the most complex}. \begin{LONG}
We define formally the notions of input order, input structure and synergistic solution in Section~\ref{sec:back}. \end{LONG} We then describe synergistic solutions for the computation of both the \textsc{Maxima Set} (Section~\ref{sec:maxima}) and of the \textsc{Convex Hull} (Section~\ref{sec:convex}) of points in the plane (the latter requiring more advance techniques). \begin{LONG} In both case, our solution is based on an algorithm merging several partial solutions (Sections~\ref{sec:quick-maxima} and \ref{sec:UpperHullUnion}), adapted from Barbay et al.'s \texttt{Quick Hull Merge} algorithm~\cite{2016-ARXIV-SynergisticSortingAndDeferredDataStructuresOnMultiSets-BarbayOchoaSatty} to merge multisets\footnote{Itself inspired from Demaine et al.'s algorithm solving the same problem~\cite{2001-SODA-lowerboundindex-DemaineLopezOrtiz}.}.
We conclude in Section~\ref{sec:discussion} with a partial list of issues left open for improvement. \end{LONG} Due to space constraints we only state our results in the article and defer all the proofs to the appendix.
\begin{BACK}
\section{Background}
\label{sec:back}
Beyond the worst case complexity over instances of fixed size, the adaptive analysis of algorithms refines the scope of the analysis by considering the worst case complexity over finer classes of instances.
\begin{SHORT}
We describe here a selection of results adaptive to the \emph{input structure}, to the \emph{input order} and to both, \emph{synergistically}.
\end{SHORT}
\begin{VLONG}
We describe here some relevant results along two axis: results about the computation of the \textsc{Convex Hull} which take advantage of the \emph{input structure}\begin{LONG} (Section~\ref{sec:InputStructure})\end{LONG}, results about the computation of the \textsc{Convex Hull} which take advantage of the \emph{input order}\begin{LONG} (Section~\ref{sec:InputOrder})\end{LONG}, and one result about \textsc{Sorting Multisets} which depends of both\begin{LONG} (Section~\ref{sec:synerg})\end{LONG}.
\end{VLONG}
\subsection{Input Structure}
\label{sec:InputStructure}
In 1985, Kirkpatrick and Seidel~\cite{1985-SOCG-OutputSizeSensitiveAlgorithmsForFindingMaximalVectors-KirkpatrickSeidel} described an algorithm to compute the \textsc{Maxima Set} of points in any dimension, which is optimal in the worst case over instances of input size $n$ and output size $h$, running in time within $O(n\log h)$.
One year later, in 1986, they~\cite{1986-JCom-TheUltimatePlanarConvexHullAlgorithm-KirkpatrickSeidel} described a slightly more complex algorithm to compute the \textsc{Convex Hull} in the plane, which is similarly optimal in the worst case over instances of input size $n$ and output size $h$, running in time within $O(n\log h)$.
Both results are described as \emph{output sensitive}, in the sense that the complexity depends on the size of the output, and can be classified as adaptive to the \emph{input structure}, as the position of the points clearly determine the output (and its size), as opposed to algorithms described in the next paragraph taking advantage of the order in which those points are given.
\subsection{Input Order}
\label{sec:InputOrder}
\begin{SHORT}
In 2002, inspired by previous results on \textsc{Sorting Permutations}~\cite{1994-IC-SortingShuffledMonotoneSequences-LevcopoulosPetersson}, Levcopoulos et al.~\cite{2002-SWAT-AdaptiveAlgorithmsForConstructingConvexHullsAndTriangulationsOfPolygonalChains-LevcopoulosLingasMitchell} described an adaptive algorithm to compute the \textsc{Convex Hull} of $n$ points in the plane in time within $O(n\lg\kappa)$, where $\kappa$ is the minimal number of simple chains into which the input sequence of points can be partitioned. \end{SHORT} \begin{LONG}
A \emph{polygonal chain} is a curve specified by a sequence of points $p_1, \dots, p_n$. The curve itself consists of the line segments connecting the pairs of consecutive points. A polygonal chain $C$ is \emph{simple} if any two edges of $C$ that are not adjacent are disjoint, or if the intersection point is a vertex of $C$; and any two adjacent edges share only their common vertex. Melkman~\cite{1987-IPL-OnLineConstructionOfTheConvexHullOfASimplePolyline-Melkman} described an algorithm that computes the {\sc{Convex Hull}} of a simple polygonal chain in linear time, and Chazelle~\cite{1991-DCG-TriangulatingASimplePolygonInLinearTime-Chazelle} described an algorithm for testing whether a polygonal chain is simple in linear time.
In 2002, Levcopoulos et al.~\cite{2002-SWAT-AdaptiveAlgorithmsForConstructingConvexHullsAndTriangulationsOfPolygonalChains-LevcopoulosLingasMitchell} combined these results to yield an algorithm for computing the {\sc{Convex Hull}} of polygonal chains.
Their algorithm tests if the chain $C$ is simple, using Chazelle's algorithm~\cite{1991-DCG-TriangulatingASimplePolygonInLinearTime-Chazelle}: if the chain $C$ is simple, the algorithm computes the {\sc{Convex Hull}} of $C$ in linear time, using Melkman's algorithm~\cite{1987-IPL-OnLineConstructionOfTheConvexHullOfASimplePolyline-Melkman}. Otherwise, if $C$ is not simple, the algorithm partitions $C$ into the subsequences $C'$ and $C''$, whose sizes differ at most in one; recurses on each of them; and merges the resulting {\sc{Convex Hulls}} using Preparata and Shamos's algorithm~\cite{1985-BOOK-ComputationalGeometryAnIntroduction-PreparataShamos}.
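The following sketch, in hypothetical Python, only illustrates this recursive scheme: the routines \texttt{is\_simple}, \texttt{melkman} and \texttt{merge\_hulls} stand for Chazelle's simplicity test, Melkman's linear-time hull construction and the hull-merging step, respectively, and are assumed here as black boxes rather than implemented.
\begin{verbatim}
# Hedged sketch of the recursive scheme of Levcopoulos et al.;
# is_simple, melkman and merge_hulls are assumed black boxes.
def hull_of_chain(chain):
    if len(chain) <= 2:
        return list(chain)
    if is_simple(chain):             # linear-time simplicity test
        return melkman(chain)        # linear-time hull of a simple chain
    mid = (len(chain) + 1) // 2      # split into halves of near-equal size
    left = hull_of_chain(chain[:mid])
    right = hull_of_chain(chain[mid:])
    return merge_hulls(left, right)  # linear-time merge of two convex hulls
\end{verbatim}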
They measured the complexity of this algorithm in terms of the minimum number of simple subchains $\kappa$ into which the chain $C$ can be partitioned. Let $t(n, \kappa)$ be the worst-case time complexity taken by this algorithm for an input chain of $n$ vertices that can be partitioned into $\kappa$ simple subchains. They showed that $t(n, \kappa)$ satisfies the recursion relation $t(n, \kappa) \leq t(\lceil \frac{n}{2} \rceil, \kappa_1) + t(\lfloor \frac{n}{2} \rfloor, \kappa_2) + O(n)$, where $\kappa_1 + \kappa_2 \leq \kappa + 1$ and the linear term accounts for the simplicity test and the merging step. The solution to this recursion gives $t(n, \kappa) \in O(n(1{+}\log{\kappa}))\subseteq O(n\log n)$. \end{LONG}
\begin{SHORT}
In 2011, Ahn and Okamoto~\cite{2011-IEICE-AdaptiveAlgorithmsForPlanarConvexHullProblems-AhnOkamoto} considered two variants of the problem, where the output is a permutation of the input such that the \textsc{Convex Hull} can be checked and extracted in linear time from it. They describe an adaptive result directly inspired from a disorder measure introduced for the study of adaptive algorithms for \textsc{Sorting Permutations}, the number \texttt{Inv} of inversions in the permutation~\cite{1992-ACJ-AnOverviewOfAdaptiveSorting-MoffatPetersson,1992-ACMCS-ASurveyOfAdaptiveSortingAlgorithms-EstivillCastroWood}. \end{SHORT} \begin{LONG}
In 2011, Ahn and Okamoto~\cite{2011-IEICE-AdaptiveAlgorithmsForPlanarConvexHullProblems-AhnOkamoto} followed a distinct approach for the computation of the \textsc{Convex Hull}, also based on some notions of input order. They considered a variant of the problem where the output is the same size of the input, but such that the \textsc{Convex Hull} can be checked and extracted in linear time from this output. In this context, they describe adaptive results directly inspired from disorder measures introduced for the study of adaptive algorithms for \textsc{Sorting Permutations}, such as \texttt{Runs} and \texttt{Inv}~\cite{1992-ACJ-AnOverviewOfAdaptiveSorting-MoffatPetersson,1992-ACMCS-ASurveyOfAdaptiveSortingAlgorithms-EstivillCastroWood}. \end{LONG}
Inspired by Ahn and Okamoto's definition~\cite{2011-IEICE-AdaptiveAlgorithmsForPlanarConvexHullProblems-AhnOkamoto}, we define some simple measure of \emph{input order} for the computation of \textsc{Maxima Sets} in Section~\ref{sec:inputOrderAdaptivePlanarMaxima}, and we slightly refine Levcopoulos et al.'s analysis~\cite{2002-SWAT-AdaptiveAlgorithmsForConstructingConvexHullsAndTriangulationsOfPolygonalChains-LevcopoulosLingasMitchell} for the computation of \textsc{Convex Hulls} in Section~\ref{sec:inputOrderAdaptiveConvexHull}.
\subsection{Synergistic Solutions} \label{sec:synerg}
Inspired by previous results on sorting multisets in a way adaptive to the frequencies of the element~\cite{1976-JComp-SortingAndSearchingInMultisets-MunroSpira} on one hand, and on sorting permutation in a way adaptive to the distribution of the lengths of the subsequences of consecutive positions already sorted~\cite{2009-Chapter-PartialSolutionAndEntropy-Takaoka} on the other hand, Barbay~et al.~\cite{2016-ARXIV-SynergisticSortingAndDeferredDataStructuresOnMultiSets-BarbayOchoaSatty} described two ``synergistic'' algorithms \textsc{Sorting Multisets}, which take advantage both of the input structure and of the input order, in such a way that each of their solutions is never asymptotically slower than other solutions taking advantage of a subset of these features, but also so that on some family of instances, each of their solution performs an order of magnitude faster than any solution taking advantage of only a subset of those features. They left open the generalization of their results to higher dimensions. We generalize their results to dimension 2, for the computation of the \textsc{Maxima Set} in Section~\ref{sec:maxima}, and for the computation of the \textsc{Convex Hull} in Section~\ref{sec:convex}. \end{BACK}
\section{Maxima Set} \label{sec:maxima}
Given a point $p\in\mathbb{R}^2$, let $p_x$ and $p_y$ denote the $x$- and $y$-coordinates of $p$, respectively. Given two points $p$ and $q$, $p$ \emph{dominates} $q$ if $p_x \ge q_x$ and $p_y \ge q_y$. Given a set $\mathcal{S}$ of points in $d$ dimensions, a point $p$ from $\mathcal{S}$ is called \emph{maximal} if none of the other points of $\mathcal{S}$ dominates $p$. The \textsc{Maxima Set} of such a set $\mathcal{S}$ is the uniquely defined set of all maximal points~\cite{1975-JACM-OnFindingTheMaximaOfASetOfVectors-KungLuccioPreparata}. Kirkpatrick and Seidel~\cite{1985-SOCG-OutputSizeSensitiveAlgorithmsForFindingMaximalVectors-KirkpatrickSeidel} described an algorithm that computes the \textsc{Maxima Set} running in time within $O(n\log h)$, where $n$ is the number of input points, and $h$ is the number of points in the \textsc{Maxima set} (i.e., the size of the output). In 2009, Afshani et al.~\cite{2009-FOCS-InstanceOptimalGeometricAlgorithms-AfshaniBarbayChan} improved the results on the computation of both the \textsc{Maxima Set} and \textsc{Convex Hull} \begin{LONG}
in dimension 2 and 3 \end{LONG} by taking the best advantage of the relative positions of the points (while ignoring the input order). \begin{INUTILE}
describing ``input order oblivious instance optimal'' algorithms which, for any instance $I$, performs within a constant factor of the performance of any algorithm that does not take advantage of the order of the points in the algebraic decision tree model. \end{INUTILE}
\begin{TODO} CHECK that I am correctly referencing the computational model. \end{TODO}
If the points are sorted by their coordinates (say, in lexicographic order of their coordinates for a fixed order of the dimensions), the \textsc{Maxima Set} can be computed in time linear in the size of the input. Refining this insight, we show in Section~\ref{sec:inputOrderAdaptivePlanarMaxima} that one can take advantage of the input order even if it is not as strictly sorted: this presents a result which is orthogonal to previous input structure adaptive results~\cite{1985-SOCG-OutputSizeSensitiveAlgorithmsForFindingMaximalVectors-KirkpatrickSeidel,2009-FOCS-InstanceOptimalGeometricAlgorithms-AfshaniBarbayChan}. In order to combine these results synergistically with previous input structure adaptive results~\cite{1985-SOCG-OutputSizeSensitiveAlgorithmsForFindingMaximalVectors-KirkpatrickSeidel,2009-FOCS-InstanceOptimalGeometricAlgorithms-AfshaniBarbayChan}, we study in Section~\ref{sec:quick-maxima} an algorithm that solves the problem of \textsc{Merging Maxima}, which asks for computing the \textsc{Maxima Set} of the union of maxima sequences, in such a way that it outperforms both input structure adaptive results by taking advantage of the number and sizes of the maxima sequences and of the relative positions between the points in them. Last, we combine those results into a single synergistic algorithm, which decomposes the input sequence of points into several ``easy'' subsequences for which the corresponding \textsc{Maxima Set} can be computed in linear time, and then proceeds to merge them. The resulting algorithm not only outperforms previous input structure adaptive results~\cite{1985-SOCG-OutputSizeSensitiveAlgorithmsForFindingMaximalVectors-KirkpatrickSeidel,2009-FOCS-InstanceOptimalGeometricAlgorithms-AfshaniBarbayChan} in the sense that it never performs (asymptotically) worse, and performs better when it can take advantage of the input order, it also outperforms a dovetailing combination of previous input structure adaptive algorithms~\cite{1985-SOCG-OutputSizeSensitiveAlgorithmsForFindingMaximalVectors-KirkpatrickSeidel,2009-FOCS-InstanceOptimalGeometricAlgorithms-AfshaniBarbayChan} and the input order adaptive algorithm described in Section~\ref{sec:inputOrderAdaptivePlanarMaxima}.
\subsection{Input Order Adaptive Maxima Set} \label{sec:inputOrderAdaptivePlanarMaxima}
In many cases the \textsc{Maxima Set} can be computed in time linear in the size of the input, independently of the size of the \textsc{Maxima Set} itself. For instance, consider an order of the input where
(1) the maximal points are given in order sorted by one coordinate, and
(2) for each maximal point $p$, all the points dominated by $p$ are given immediately after $p$ in the input order (in any relative order).
The \textsc{Maxima Set} of a sequence of points given in this order can be extracted and validated in linear time by a simple greedy algorithm, which throws an exception if the input is not in such an order.
Each of the various ways to deal with such exceptions directly yields an input order adaptive algorithm~\cite{2011-IEICE-AdaptiveAlgorithmsForPlanarConvexHullProblems-AhnOkamoto}. \begin{LONG}
For instance, if the point found to be out of order is \emph{inserted} in the partial \textsc{Maxima Set} computed up to this point, this yields an algorithm running in time within $O(n\lg\mathtt{Inv})$ where $\mathtt{Inv}$ is the sum of such insertion costs. \end{LONG}
Let's label such a sequence ``\emph{smooth}'', and by extension any input subsequence of consecutive positions which has the same property. Given an input sequence $\mathcal{S}$, let $\sigma$ denote \begin{INUTILE}
let's call its \emph{smoothness} \end{INUTILE} the minimal number of smooth subsequences into which it can be decomposed. \begin{INUTILE}
, and denote it by $\sigma$ \end{INUTILE} Most interestingly for synergistic purposes, such a decomposition can be computed in time linear in the input size.
Detecting such $\sigma$ smooth subsequences and merging them two by two yields an algorithm running in time within $O(n(1+\log\sigma))$. Such a result is orthogonal to previous input structure adaptive results~\cite{1985-SOCG-OutputSizeSensitiveAlgorithmsForFindingMaximalVectors-KirkpatrickSeidel, 2009-FOCS-InstanceOptimalGeometricAlgorithms-AfshaniBarbayChan}: it can be worse than $O(n\log h)$ when the output size $h$ is small and the input is in a ``bad'' order, just as it can be much better than $O(n\log h)$ when $h$ is large and the input is in a ``good'' order. We show in the next two sections an algorithm which outperforms both.
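As an illustration, the following sketch (in hypothetical Python, with names of our own choosing) extracts such a greedy decomposition in a single left-to-right scan, assuming that the maximal points of each smooth run appear by increasing $x$-coordinate: it keeps the maxima sequence of the current run, skips the points dominated by its last point, and starts a new run whenever the scanned point neither extends nor is dominated by it.
\begin{verbatim}
# Hedged sketch of the greedy decomposition into smooth subsequences;
# returns one maxima sequence per smooth run.
def smooth_runs_maxima(points):
    runs, current = [], []
    for (x, y) in points:
        if not current:
            current.append((x, y))
        elif x <= current[-1][0] and y <= current[-1][1]:
            pass                     # dominated by the last maximal point
        elif x > current[-1][0] and y < current[-1][1]:
            current.append((x, y))   # next maximal point of the same run
        else:
            runs.append(current)     # smoothness is broken: start a new run
            current = [(x, y)]
    if current:
        runs.append(current)
    return runs
\end{verbatim}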
\subsection{Union of Maxima Sequences} \label{sec:quick-maxima}
We describe the \texttt{Quick Union Maxima} algorithm, which computes the \textsc{Maxima Set} of the union of maxima sequences in the plane, assuming that the points in the maxima sequences are given in sorted order by their $x$-coordinates (i.e., \textsc{Merging Maxima}). This algorithm generalizes Barbay et al.'s~\cite{2016-ARXIV-SynergisticSortingAndDeferredDataStructuresOnMultiSets-BarbayOchoaSatty} \texttt{QuickSort} inspired algorithm for \textsc{Merging Multiset} and is a building block towards the synergistic algorithm for computing the \textsc{Maxima Set} of a set of planar points described in Section~\ref{sec:synergMaxima}. Given a maxima sequence $\mathcal{M}_i$, let $\mathcal{M}_i[a]$ and $\mathcal{M}_i[b..c]$ denote the $a$-th point and the block of consecutive $c-b+1$ points corresponding to positions from $b$ to $c$ in $\mathcal{M}_i$, respectively. \begin{LONG}
As its name indicates, the algorithm is inspired by the \texttt{QuickSort} algorithm. \end{LONG}
\subsubsection{Description of the algorithm Quick Union Maxima.} The \texttt{Quick Union Maxima} algorithm chooses a point $p$ that forms part of the \textsc{Maxima Set} of the union, and discards all the points dominated by $p$. \begin{LONG}
Note that none of the dominated points belongs to the maxima sequence that contains $p$. \end{LONG} The selection of $p$ ensures that at least half of the maxima sequences will have points dominated by $p$ or to the right of $p$ and at least half of the maxima sequences will have points dominated by $p$ or to the left of $p$. The algorithm identifies a block $\mathcal{B}$ of consecutive points in the maxima sequence that contains $p$, which forms part of the output \textsc{Maxima Set} ($p$ is contained in $\mathcal{B}$). All the points in $\mathcal{B}$ are discarded. \begin{LONG}
Note that if the points of a maxima sequence in the plane are sorted in ascending order by their $x$-coordinates, then their $y$-coordinates are sorted in decreasing order. This algorithm takes advantage of this fact when it discards points. \end{LONG} All discarded points are identified by doubling searches~\cite{1976-IPL-AnAlmostOptimalAlgorithmForUnboundedSearching-BentleyYao}\begin{LONG}
inside the maxima sequences \end{LONG}. The algorithm then recurses separately on the non-discarded points to the left of $p$ and on the non-discarded points to the right of $p$. (See Algorithm~\ref{alg:qum} for a more formal description.) Next, we analyze the time complexity of the \texttt{Quick Union Maxima} algorithm.
\begin{algorithm}
\caption{\texttt{Quick Union Maxima}}
\label{alg:qum}
\begin{algorithmic}[1]
\REQUIRE{A set $\mathcal{M}_1, \dots, \mathcal{M}_\rho$ of $\rho$ maxima sequences}
\ENSURE{The \textsc{Maxima Set} of the union of $\mathcal{M}_1, \dots, \mathcal{M}_\rho$}
\STATE Compute the median $\mu$ of the $x$-coordinates of the
middle points of the maxima sequences;
\STATE Perform doubling searches for the value $\mu$ in the
$x$-coordinates of the points of all maxima sequences,
starting at both ends of the maxima sequences in parallel;
\STATE Find the point $p$ of maximum $y$-coordinate among the points
$q$ such that $q_x \ge \mu$ in all maxima sequences, and let $j\in[1..\rho]$ denote
the index of the maxima sequence containing $p$;
\STATE Discard all points dominated by $p$ through doubling
searches for the values of $p_x$ and $p_y$ in the $x$- and $y$-coordinates
of all maxima sequences, respectively, except $\mathcal{M}_j$ (Search for $p_x$ in the
points $q$ such that $q_x \ge \mu$ and for $p_y$ in the points $q$ such that $q_x<\mu$);
\STATE Find the point $r$ of maximum $y$-coordinate among the
points $q$ such that $q_x > p_x$ in all maxima sequences except $\mathcal{M}_j$
and find the point $\ell$ of maximum $x$-coordinate among the points $q$ such that
$q_y > p_y$ in all maxima sequences except $\mathcal{M}_j$;
\STATE Discard the block in $\mathcal{M}_j$ containing $p$ that forms part of the
output through doubling searches for the values of $\ell_x$ and $r_y$ in the $x$- and
$y$-coordinates of the points in $\mathcal{M}_j$, respectively. (Search for $\ell_x$ in the points
$q$ such that $q_x < p_x$ and for $r_y$ in the points $q$ such that $q_x > p_x$.);
\STATE Recurse separately on the non-discarded points left and right of $p$.
\end{algorithmic} \end{algorithm}
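Steps $2$, $4$ and $6$ rely on doubling searches starting from known positions. The following sketch, in hypothetical Python and given only to fix ideas, finds the insertion rank of a key in a sorted array in a number of comparisons logarithmic in the distance between the starting position and the answer, which is what makes the cost of each search proportional to the logarithm of the size of the block it delimits.
\begin{verbatim}
# Hedged sketch of a doubling (galloping) search: returns the leftmost
# index at which key could be inserted into the sorted array a, assuming
# that this index is at least start; it uses O(log d) comparisons, where
# d is the distance between start and the returned index.
import bisect

def doubling_search(a, key, start=0):
    step = 1
    lo, hi = start, start + 1
    while hi < len(a) and a[hi] < key:   # gallop until the key is bracketed
        lo, hi = hi, hi + step
        step *= 2
    hi = min(hi, len(a))
    return bisect.bisect_left(a, key, lo, hi)   # finish by binary search
\end{verbatim}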
\subsubsection{Complexity Analysis of Quick Union Maxima.} \label{sec:analysis-qum}
Every algorithm for \textsc{Merging Maxima} needs to certify that blocks of consecutive points in the maxima sequences are dominated or are in the \textsc{Maxima Set} of the union. In the following we formalize the notion of a \emph{certificate}, which makes it possible to check the correctness of the output in less time than recomputing the output itself. We define a ``language'' of basic ``arguments'' for such certificates: \emph{domination} (which discards points from the input) and \emph{maximality} (which justifies the presence of points in the output) arguments, and their key positions in the instance. A certificate is verified by checking each of its arguments: each argument can be checked in time proportional to the number of blocks in it.
\begin{LONG}
\begin{definition}
$\langle \mathcal{M}_i[a] \supset \mathcal{M}_j[b..c] \rangle$
is an \emph{elementary domination argument} if the point
$\mathcal{M}_i[a]$ dominates all the points in the block
$\mathcal{M}_j[b..c]$.
\end{definition} \end{LONG}
\begin{definition}
$\langle \mathcal{M}_i[a] \supset \mathcal{M}_{j_1}[b_1..c_1],
\dots, \mathcal{M}_{j_t}[b_t..c_t] \rangle$ is a \emph{Domination
Argument} if the point $\mathcal{M}_i[a]$ dominates all the points
in the blocks
$\mathcal{M}_{j_1}[b_1..c_1], \dots,
\mathcal{M}_{j_t}[b_t..c_t]$. \end{definition}
\begin{LONG}
\begin{lemma}
A \emph{domination argument} $\langle \mathcal{M}_i[a] \supset \mathcal{M}_{j_1}[b_1..c_1], \dots, \mathcal{M}_{j_t}[b_t..c_t] \rangle$ can be checked in $O(t)$ data comparisons.
\end{lemma} \end{LONG}
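Because the points of each maxima sequence are sorted by increasing $x$-coordinate (and hence by decreasing $y$-coordinate), a point dominates a whole block as soon as it dominates its two extreme points, so that a domination argument over $t$ blocks can be checked with two comparisons per block. The following sketch, in hypothetical Python and with data conventions of our own (each block given as a pair of a maxima sequence and its boundary indices), only illustrates this check.
\begin{verbatim}
# Hedged sketch: checking a domination argument <p dominates blocks>,
# where each block is (seq, (b, c)), seq being a maxima sequence sorted
# by increasing x (hence decreasing y) and b..c the block boundaries.
def check_domination(p, blocks):
    px, py = p
    for seq, (b, c) in blocks:
        # p dominates the whole block iff it dominates the point of
        # largest x (seq[c]) and the point of largest y (seq[b]).
        if px < seq[c][0] or py < seq[b][1]:
            return False
    return True   # two comparisons per block, i.e. O(t) overall
\end{verbatim}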
It is not enough to eliminate all points that cannot participate in the output: certifying the result still requires additional work, as a correct algorithm must justify that the points it outputs are indeed maximal. To this end we define maximality arguments.
\begin{definition}
$\langle \mathcal{M}_i[a..b] \dashv \mathcal{M}_{j_1}[a_1..b_1],
\dots, \mathcal{M}_{j_t}[a_t..b_t] \rangle$ is a \emph{Maximality
Argument} if \textbf{either} the point $\mathcal{M}_i[b]$
dominates the points
$\mathcal{M}_{j_1}[a_1], \dots, \mathcal{M}_{j_t}[a_t]$ and
the $x$-coordinates of the points
$\mathcal{M}_{j_1}[a_1-1], \dots, \mathcal{M}_{j_t}[a_t-1]$ are
less than the $x$-coordinate of the point ${\cal M}_i[a]$
\textbf{or} the point $\mathcal{M}_i[a]$ dominates the points
$\mathcal{M}_{j_1}[b_1], \dots, \mathcal{M}_{j_t}[b_t]$ and
the $y$-coordinates of the points
$\mathcal{M}_{j_1}[b_1+1], \dots, \mathcal{M}_{j_t}[b_t+1]$ are
less than the $y$-coordinate of the point ${\cal M}_i[b]$. \end{definition}
If $\langle \mathcal{M}_i[a..b] \dashv \mathcal{M}_{j_1}[a_1..b_1], \dots, \mathcal{M}_{j_t}[a_t..b_t] \rangle$ is a valid \emph{maximality argument}, then the points in the block $\mathcal{M}_i[a..b]$ are maximal among the maxima sequences $\mathcal{M}_i, \mathcal{M}_{j_1}, \dots, \mathcal{M}_{j_t}$. \begin{LONG}
\begin{lemma}
A \emph{maximality argument} $\langle \mathcal{M}_i[a..b] \dashv \mathcal{M}_{k_1}[a_1..b_1], \dots, \mathcal{M}_{k_t}[a_t..b_t] \rangle$ can be checked in $O(t)$ data comparisons.
\end{lemma} \end{LONG} The difficulty of finding and describing domination and maximality arguments depends on the points they refer to in the maxima sequences, a notion captured by ``argument points'':
\begin{definition}
Given an argument
${\cal A} = \langle \mathcal{M}_i[a] \supset
\mathcal{M}_{j_1}[b_1..c_1], \dots, \mathcal{M}_{j_t}[b_t..c_t]
\rangle$ or
${\cal B} = \langle \mathcal{M}_i[a..b] \dashv
\mathcal{M}_{j_1}[a_1..b_1], \dots, \mathcal{M}_{j_t}[a_t..b_t]
\rangle$, the \emph{Argument Points} are the points $\mathcal{M}_i[a]$ in
${\cal A}$ and $\mathcal{M}_i[a]$ and $\mathcal{M}_i[b]$ in
${\cal B}$. \end{definition}
Those atomic arguments combine into a general definition of a certificate that any correct algorithm for \textsc{Merging Maxima} in the comparison model can be modified to output.
\begin{definition}
Given a set of maxima sequences and their \textsc{Maxima Set} $\mathcal{M}$, expressed as several blocks of the maxima sequences, a \emph{certificate} of $\mathcal{M}$ is a set of domination and maximality arguments such that the \textsc{Maxima Set} of any instance satisfying those arguments is given by the description of $\mathcal{M}$. The length of a certificate is the number of distinct argument points in it.
\begin{INUTILE}
\begin{definition}
Given an instance ${\cal I}$ of the \textsc{Union Maxima} problem,
a certificate ${\cal C}$ for ${\cal I}$ of \emph{minimal length} is
a certificate with the minimal number of distinct argument points.
\end{definition} \end{INUTILE}
\begin{LONG}
We divide the analysis of the time complexity of the \texttt{Quick Union Maxima} algorithm into two lemmas. \end{LONG} \begin{LONG}
We first bound the cumulated time complexity of the doubling searches for the value of the median $\mu$ of the $x$-coordinates of the middle points of the maxima (i.e., step $2$ of Algorithm~\ref{alg:qum}) and the doubling searches in the points discard steps (i.e., steps $4$ and $6$ of Algorithm~\ref{alg:qum}). \end{LONG} The algorithm partitions the maxima sequences into blocks of consecutive discarded points, where each block is discarded because it is dominated, or because this block forms part of the \textsc{Maxima Set} of the union. Each block forms part of some argument of the certificate computed by the algorithm. \begin{SHORT}
This fact is key to bounding the cumulated time complexity of steps $2, 4$ and $6$ of Algorithm~\ref{alg:qum}. The key fact for bounding the time complexity of steps $1,3$ and $5$ of Algorithm~\ref{alg:qum} is that each execution of these steps has time complexity bounded by the number of maxima sequences in the subinstance.\end{SHORT} \begin{VLONG}
\begin{lemma}\label{lem:blocks}
Let $s_{1}, \dots, s_{\beta}$ be the sizes of the $\beta$ blocks into which the algorithm \texttt{Quick Union Maxima} divides the maxima sequence ${\cal M}_i$. The cumulated time complexity of the doubling searches for the value of the medians $\mu$ of the $x$-coordinates of the middle points of the maxima sequences (i.e., step $2$) and the doubling searches in the points discard steps (i.e., steps $4$ and $6$) of the algorithm \texttt{Quick Union Maxima} in the maxima sequence ${\cal M}_i$ is within $O(\sum_{j=1}^{\beta}\log{s_{j}})$.
\end{lemma}
\begin{PROOF}
\begin{proof}
Every time the algorithm finds the insertion rank of one of the medians $\mu$ of the $x$-coordinates of the middle points of the maxima sequences in $\mathcal{M}_i$, it finds a position $d$ inside a block whose points will be discarded. The point-discarding steps that search for the insertion rank of $p_x$ and $p_y$ start the search at position $d$. The time complexity of both point-discarding steps is bounded by $O(\log{s_b})$, where $s_b$ is the size of the discarded block $b$. Both point-discarding steps partition $\mathcal{M}_i$ at positions separating the blocks to the left of $b$, the block $b$ itself and the blocks to the right of $b$.
The combination of the doubling search that finds the insertion rank of $\mu$ in the $x$-coordinates of $\mathcal{M}_i$ with the doubling searches that discard points \begin{LONG}
starting in $d$ \end{LONG} can be represented as a tree. Each internal node has two children, which correspond to the two subproblems into which the recursive steps partition $\mathcal{M}_i$\begin{LONG}
, the blocks to the left of $b$ and the blocks to the right of $b$ \end{LONG}. The cost of this combination is bounded by $O(\log s_b + \log s)$, where $s$ is the minimum between the sum of the sizes of the blocks to the left of $b$ and the sum of the sizes of the blocks to the right of $b$, because of the two doubling searches in parallel. The size of each internal node is the size of the block discarded in this step. The size of each leaf is the sum of the sizes of the blocks in the child subproblem represented by this leaf.
We prove that after each combination of steps, the total cost is bounded by eight times the sum of the logarithms of the sizes of the nodes in the tree. This is done by induction over the number of steps. If the number of steps is zero then there is no cost. For the inductive step, if the number of steps increases by one, then a new combination of steps is done and a leaf subproblem is partitioned into two new subproblems. At this step, a leaf of the tree is transformed into an internal node and two new leaves are created. Let $w$ and $z$, with $w \leq z$, be the sizes of the new leaves created. Note that $w$ and $z$ are the sums of the sizes of the blocks to the left and to the right, respectively, of the discarded block $b$ in this step. The cost of this step is less than $4\log{w} + 4\log{b}$. The cost of all the steps then increases by $4\log{w} + 4\log{b}$, and hence eight times the sum of the logarithms of the sizes of the nodes in the tree increases by $8(\log{w} + \log{z} + \log b - \log({w+z+b}))$. But if $w \ge 3$, $w \ge b$ and $w \le z$ then the result follows. \qed
\end{proof}
\end{PROOF} \end{VLONG} \begin{INUTILE}
The algorithm uses the points $p$, $\ell$ and $r$ to discard points
from the input maxima. The discarded points could be part of the
output or dominated points. We call the points $p$, $\ell$ and $r$
\emph{argument points}.
The number of argument points that the algorithm \texttt{Quick Union
Maxima} uses to discard points is asymptotically the minimum
number of argument points that any other algorithm that computes the
union of $\rho$ maxima needs to use to discard points from the input
maxima.
We formalize this notion by comparing the number of \emph{argument
points} that the \texttt{Quick Union Maxima} uses to discard
points with an algorithm that computes the minimum number needed of
\emph{argument points} when it computes the union of $\rho$ maxima. \end{INUTILE} \begin{LONG}
We bound next the time complexity of the steps that compute the median $\mu$ of the $x$-coordinates of the middle points in the maxima sequences (i.e., step $1$ of Algorithm~\ref{alg:qum}) and the steps that find the points $p$, $\ell$ and $r$ (i.e., steps $3$ and $5$ of Algorithm~\ref{alg:qum}) in the \texttt{Quick Union Maxima} algorithm.
Note that one execution of these steps has time complexity bounded by the number of maxima sequences in the sub-instance. \end{LONG}The partition of the maxima sequences by the $x$-coordinate of $p$ and the discarded points decrease the number of maxima sequences in the subinstances.
\begin{LONG}
\begin{lemma}\label{lem:sequences}
The cumulated number of comparisons performed by the steps that compute the median $\mu$ of the $x$-coordinates of the middle points in the $\rho$ maxima sequences (i.e., step $1$) and the steps that find the points $p$, $\ell$ and $r$ (i.e., steps $3$ and $5$) in the \texttt{Quick Union Maxima} algorithm is within $O(\sum^{\delta}_{i=1}\log{\binom{\rho}{m_i}})$, where $\delta$ is the length of the certificate ${\cal C}$ computed by the algorithm and $m_1, \dots, m_\delta$ is a sequence where $m_i$ is the number of maxima sequences whose blocks form the $i$-th argument of ${\cal C}$.
\end{lemma}
\begin{PROOF}
\begin{proof}
We prove this lemma by induction over $\delta$ and $\rho$. The time complexity of one of these steps is linear in the number of maxima sequences in the sub-instance (i.e., ignoring all the empty maxima sequences of this sub-instance).
Let $\mathcal{T}(\delta,k)$ be the cumulated time complexity, over a subinstance formed by $k$ maxima sequences, of the steps that compute the medians $\mu$ of the $x$-coordinates of the middle points (i.e., step $1$) and of the steps that find the points $p$, $\ell$ and $r$ (i.e., steps $3$ and $5$) in the algorithm \texttt{Quick Union Maxima}.
We prove that $\mathcal{T}(\delta, k) \le \sum^{\delta}_{i=1}m_i\log{\frac{k}{m_i}} - k$, where $m_i$ is the number of maxima sequences whose blocks form the $i$-th argument of ${\cal C}$.
Let $\mu$ be the first median of the $x$-coordinates of the middle points of the maxima sequences computed by the algorithm. Let $c$ and $d$ be the numbers of maxima sequences that have non-discarded points only above $p_y$ and only to the right of $p_x$, respectively. Let $b$ be the number of maxima sequences that have non-discarded points both above $p_y$ and to the right of $p_x$. Let $e$ be the number of maxima sequences all of whose points are dominated by $p$. Let $\delta_c$ and $\delta_d$ be the numbers of arguments computed by the algorithm to discard points in the maxima sequences above $p_y$ and to the right of $p_x$, respectively. Then, $\mathcal{T}(\delta, k) = \mathcal{T}(\delta_c, c+b) + \mathcal{T}(\delta_d, d+b) + k$ because of the two recursive calls and the steps $1$, $3$ and $5$ of the algorithm \texttt{Quick Union Maxima}. By the induction hypothesis, $\mathcal{T}(\delta_c,c+b) \le \sum^{\delta_c}_{i=1}m_i\log{\frac{c+b}{m_i}} - c - b$ and $\mathcal{T}(\delta_d, d+b) \le \sum^{\delta_d}_{i=1}m_i\log{\frac{d+b}{m_i}} - d - b$. In the worst case $e=0$, and we need to prove that $c+d \le \sum^{\delta_c}_{i=1}m_i \log\left({1 + \frac{d}{c+b}}\right) + \sum^{\delta_d}_{i=1}m_i\log\left({1 + \frac{c}{d+b}}\right)$, but this is a consequence of $\sum^{\delta_c}_{i=1}m_i \ge c+b, \sum^{\delta_d}_{i=1}m_i \ge d+b$ (the number of discarded blocks is greater than or equal to the number of maxima sequences); $c \le d+b, d \le c + b$ (at least $\frac{k}{2}$ maxima sequences are left to the left and to the right of $\mu$); and $x\log\left({1 + \frac{y}{x}}\right) \ge y$ for $y \le x$. \qed \end{proof} \end{PROOF}
Combining Lemma~\ref{lem:blocks} and Lemma~\ref{lem:sequences} yields an upper bound on the number of data comparisons performed by the algorithm \texttt{Quick Union Maxima}: \end{LONG}
\begin{theorem}\label{theo:qum}
Given $\rho$ maxima sequences. The \texttt{Quick Union Maxima} algorithm performs within $O(\sum_{j=1}^{\beta}\log{s_{j}} + \sum^{\delta}_{i=1}\log{\binom{\rho}{m_i}})$ data comparisons when it computes the \textsc{Maxima Set} of the union of these maxima sequences; where $\beta$ is the number of blocks in the certificate ${\cal C}$ computed by the algorithm; $s_1, \dots, s_\beta$ are the sizes of these blocks; $\delta$ is the length of ${\cal C}$; and $m_1, \dots, m_\delta$ is a sequence where $m_i$ is the number of maxima sequences whose blocks form the $i$-th argument of ${\cal C}$.
\begin{LONG}
This number of comparisons is optimal in the worst case over instances formed by $\rho$ maxima sequences that have certificates ${\cal C}$ formed by $\beta$ blocks of sizes $s_1, \dots, s_\beta$ and length $\delta$ such that $m_1, \dots, m_\delta$ is a sequence where $m_i$ is the number of maxima sequences whose blocks form the $i$-th argument of ${\cal C}$.
\end{LONG} \end{theorem}
The optimality of this algorithm is a consequence of the fact that it checks each argument of any certificate using a constant number of argument points. \begin{VLONG}
We prove next that the \texttt{Quick Union Maxima} algorithm computes a \emph{certificate} whose length is within a constant factor of the length of a certificate of minimal length. Consider the following algorithm for \textsc{Merging Maxima}. The \texttt{Left-to-Right} algorithm chooses the leftmost point of each maxima sequence and computes the points $u$ and $v$ of maximum and second maximum $y$-coordinate among these points, respectively. Let $\mathcal{M}_i$ be the maxima sequence that contains $u$. Let $a$ be the index of $u$ in $\mathcal{M}_i$. The $y$-coordinates of the points in the input maxima sequences are sorted in decreasing order from left to right. The algorithm then searches for the insertion rank of $v_y$ in $\mathcal{M}_i$. Let $b$ be the index of the rightmost point $g$ in $\mathcal{M}_i$ such that $g_y > v_y$. The block $\mathcal{M}_i[a..b]$ forms part of the \textsc{Maxima Set} of the union and is discarded by the \texttt{Left-to-Right} algorithm. If $g$ dominates $v$, the algorithm discards all points in the input dominated by $g$. The algorithm restarts the computation on the non-discarded points.
\begin{lemma} The \texttt{Left-to-Right} algorithm computes a certificate of minimal length when it computes the union of $\rho$ maxima sequences. \end{lemma}
\begin{PROOF} \begin{proof}
Let $u$ and $v$ be the points with maximum and second maximum $y$-coordinate among the leftmost non-discarded points of the maxima sequences. Let $\mathcal{M}_i$ be the maxima sequence that contains $u$. Then, all points in $\mathcal{M}_i$ with $y$-coordinate greater than $v_y$ can be discarded (i.e., they form part of the output) and $v$ is the point in the maxima sequences that allows discarding the greatest number of consecutive points including $u$ in $\mathcal{M}_i$. Let $g$ and $h$ be consecutive points in $\mathcal{M}_i$ such that $g_y > v_y > h_y$. If $g$ dominates $v$, then $g$ is the rightmost point in $\mathcal{M}_i$ that dominates $v$. Hence, $g$ is the point in $\mathcal{M}_i$ that dominates the maximum number of consecutive points including $v$ in the maxima sequence that contains $v$. These two arguments are enough to prove that the algorithm computes the minimum number of \emph{argument points}.\qed
Let $a$ be the number of \emph{argument points} of the certificate computed by the \texttt{Left-to-Right} algorithm. The number of \emph{argument points} of the certificate that the algorithm \texttt{Quick Union Maxima} computes is within a constant factor of $a$. \end{VLONG}
\begin{lemma}\label{lem:opt-max}
The algorithm \texttt{Quick Union Maxima} computes a certificate
whose length is within a constant factor of the length of a certificate of minimal length. \end{lemma}
\begin{PROOF}
\begin{proof}
Suppose that there is a block $b$ of consecutive points in a maxima sequence that the \texttt{Left-to-Right} algorithm discards because it identifies that the points in $b$ are in the output. Suppose that the algorithm \texttt{Quick Union Maxima} running on the same input computes a point $p$ ($p$ is the point of maximum $y$-coordinate among the points of $x$-coordinate greater than $\mu$, hence $p$ is in the output) that is contained in $b$. Let $r$ be the point of maximum $y$-coordinate among the points with $x$-coordinate greater than $p_x$ computed by the \texttt{Quick Union Maxima} algorithm. Let $h$ be the \emph{argument point} used by the \texttt{Left-to-Right} algorithm to identify the rightmost point in $b$. Hence, $r_y < h_y$. Let $\ell$ be the point of maximum $x$-coordinate among the points with $y$-coordinate greater than $p_y$ computed by the \texttt{Quick Union Maxima} algorithm. Let $u$ be the \emph{argument point} used by the \texttt{Left-to-Right} algorithm to discard dominated points before the identification of $b$. Hence, $\ell$ is the same point as $u$. So, the algorithm \texttt{Quick Union Maxima} discards at least the block $b$ using a constant number of \emph{argument points}. The result follows.\qed
\end{proof} \end{PROOF}
\begin{LONG}
\begin{minipage}[c]{.45\textwidth}
\centering
\includegraphics[scale=1]{maxima}
\end{minipage}
\begin{minipage}[c]{.45\textwidth}
\captionof{figure}{A representation of a state of the \texttt{Quick Union Maxima} algorithm where the points $p$, $\ell$ and $r$ have been computed.}
\label{fig:instance}
\end{minipage}~\\ \end{LONG}
In the following section we describe a synergistic result that combines the results of Sections~\ref{sec:inputOrderAdaptivePlanarMaxima} and~\ref{sec:quick-maxima}. This result introduces the synergistic technique also used in the computation of the \textsc{Convex Hull} in Section~\ref{sec:synergisticUpperHull}.
\subsection{Synergistic Computation of Maxima Sets} \label{sec:synergMaxima}
The \texttt{(Smooth,Structure) Synergistic Maxima} algorithm decomposes the input of planar points into the minimal number $\sigma$ of smooth subsequences of consecutive positions, computes their maxima sequences and then merges them using the \texttt{Quick Union Maxima} algorithm described in the previous section.
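A minimal sketch of this driver, in hypothetical Python, could look as follows; \texttt{smooth\_runs\_maxima} refers to the decomposition sketched in Section~\ref{sec:inputOrderAdaptivePlanarMaxima} and \texttt{quick\_union\_maxima} stands for Algorithm~\ref{alg:qum}, which is not reimplemented here.
\begin{verbatim}
# Hedged sketch of the synergistic driver: decompose, then merge.
def synergistic_maxima(points):
    runs = smooth_runs_maxima(points)   # sigma maxima sequences, O(n) time
    return quick_union_maxima(runs)     # adaptive merge (Algorithm 1)
\end{verbatim}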
\begin{theorem}\label{theo:syn-max}
Let $\mathcal{S}$ be a set of points in the plane such that ${\cal S}$ can be divided into $\sigma$ smooth maxima sequences. Let $h$ be the number of points in the \textsc{Maxima Set} of ${\cal S}$. There exists an algorithm that performs within $2n + O(\sum_{j=1}^{\beta}\log{s_{j}} + \sum^{\delta}_{i=1}\log{\binom{\sigma}{m_i}}) \subseteq O(n\log(\min(\sigma, h)))$~\footnote{The quantity $\sum_{j=1}^{\beta}\log{s_{j}}$ is within $O(n)$ but is much smaller for ``easy'' instances.} data comparisons when it computes the \textsc{Maxima Set} of $\mathcal{S}$; where $\beta$ and $s_1, \dots, s_\beta$ are the number and sizes of the blocks in the certificate ${\cal C}$ computed by the union algorithm, respectively; $\delta$ is the length of ${\cal C}$; and $m_1, \dots, m_\delta$ is a sequence where $m_i$ is the number of maxima sequences whose blocks form the $i$-th argument of ${\cal C}$. This number of comparisons is optimal in the worst case over instances $\mathcal{S}$ formed by $\sigma$ smooth sequences which \textsc{Maxima Set} have certificates ${\cal C}$ of length $\delta$ formed by $\beta$ blocks of sizes $s_1, \dots, s_\beta$, such that $m_1, \dots, m_\delta$ is a sequence where $m_i$ is the number of maxima sequences whose blocks form the $i$-th argument of ${\cal C}$. \end{theorem}
\begin{VLONG}
We prove that the number of comparisons performed by the algorithm \texttt{(Smooth, Structure) Synergistic Maxima} is asymptotically optimal in the worst case over instances formed by $n$ points grouped in $\sigma$ smooth sequences, with a final \textsc{Maxima Set} of size $h$.
The upper bound is a consequence of Theorem~\ref{theo:qum} and of the linear time partitioning algorithm described in Section~\ref{sec:inputOrderAdaptivePlanarMaxima}. We describe the intuition for the lower bound below: it is a simple adversary argument, based on the definition of a family of ``hard'' instances for each possible value of the parameters of the analysis, building over each other.
First, we verify the lower bound for ``easy'' instances, of finite difficulty:
general instances formed by a single ($\sigma=1$) smooth sequence obviously require $\Omega(n)$ comparisons (no correct algorithm can afford to ignore a single point of the input, which could dominate all others), while
general instances dominated by a single point ($h=1$) also require $\Omega(n)$ comparisons (similarly to the computation of the maximum of an unsorted sequence).
Each of this lower bound yields a distribution of instances, either of smoothness $\sigma=1$ or of output size $h=1$, such that any deterministic algorithm performs $\Omega(n)$ comparisons on average on a uniform distribution of those instances.
Such distributions of ``elementary'' instances can be duplicated so that to produce various distributions of elementary instances; and combined so that to define a distribution of harder instances:
\begin{lemma}
Given the positive integers $n,\sigma, \beta, s_1, \ldots, s_\beta, \delta, m_1, \ldots, m_\delta$,
there is a family of instances which can each be partitioned into $\sigma$ smooth sequences such that,
\begin{itemize}
\item $\beta$ and $s_1, \ldots, s_\beta$ are the number and sizes of the blocks in the certificate ${\cal C}$ computed by the union algorithm, respectively;
\item $\delta$ is the length of ${\cal C}$;
\item $m_1, \ldots, m_\delta$ is a sequence where $m_i$ is the number of maxima sequences of the smooth sequences whose blocks form the $i$-th argument of ${\cal C}$; and
\item on average on a uniform distribution of these instances, any algorithm computing the \textsc{Maxima Set} of $S$ in the comparison model performs within $\Omega(n + \sum^{\delta}_{i=1}\log{\binom{\sigma}{m_i}})$ comparisons.
\end{itemize}
\end{lemma}
Finally, any such distribution with a computational lower bound on average yields a computational lower bound for the worst case instance complexity of any randomized algorithm, on average on its randomness; and as a particular case a lower bound on the worst case complexity of any deterministic algorithm:
\begin{corollary}
Given the positive integers $\sigma, \beta, s_1, \ldots, s_\beta, \delta, m_1, \ldots, m_\delta$, and an algorithm $A$ computing the \textsc{Maxima Set} of a sequence of $n$ planar points in the comparison model (whether deterministic or randomized), there is an instance $I$ such that $A$ performs a number of comparisons within $\Omega(n + \sum^{\delta}_{i=1}\log{\binom{\sigma}{m_i}})$ when it computes the \textsc{Maxima Set} of $I$. \end{corollary} \begin{proof}
\begin{LONG}
A direct application of Yao's minimax principle \cite{1958-PJM-OnGeneralMinimaxTheorems-Sion,1977-FOCS-ProbabilisticComputationsTowardAUnifiedMeasureOfComplexity-Yao,1944-BOOK-TheoryOfGamesAndEconomicBehavior-VonNeumannMorgenstern}. \qed
\end{LONG}
\begin{SHORT}
A direct application of Yao's minimax principle \cite{1958-PJM-OnGeneralMinimaxTheorems-Sion}. \qed
\end{SHORT} \end{proof} \end{VLONG}
The histories of the computation of the \textsc{Maxima Set} and of the computation of the \textsc{Convex Hull} are strongly correlated: most of the results on one problem also generalize to the other one. Our results on the computation of the \textsc{Maxima Set} similarly generalize to the computation of the \textsc{Convex Hull}, although this requires additional work and concepts, which we describe in the next section.
\section{Convex Hull} \label{sec:convex}
Given a set ${\cal S}$ of planar points, the \textsc{Convex Hull} of ${\cal S}$ is the smallest convex set containing ${\cal S}$~\cite{1985-BOOK-ComputationalGeometryAnIntroduction-PreparataShamos}. Given $n$ points in the plane, the problem of computing their \textsc{Convex Hull} is well studied: the worst case complexity over instances of size $n$ is within $\Theta(n\lg n)$ in the algebraic decision tree computational model~\cite{1977-CACM-ConvexHullsOfFiniteSetsOfPointsInTwoAndThreeDimensions-PreparataHong}. Several refinements of this analysis are known: some taking advantage of the input structure~\cite{1985-SOCG-OutputSizeSensitiveAlgorithmsForFindingMaximalVectors-KirkpatrickSeidel, 2009-FOCS-InstanceOptimalGeometricAlgorithms-AfshaniBarbayChan} and some taking advantage of the input order~\cite{2011-IEICE-AdaptiveAlgorithmsForPlanarConvexHullProblems-AhnOkamoto,2002-SWAT-AdaptiveAlgorithmsForConstructingConvexHullsAndTriangulationsOfPolygonalChains-LevcopoulosLingasMitchell}.
\begin{INUTILE}
Inspired by the results on the computation of \textsc{Maxima Sets} taking advantage of both the input order and the input structure presented in Section~\ref{sec:maxima}, we present similar results on the computation of the \textsc{Convex Hull} which take advantage both of the input order (as defined by Levcopoulos \emph{et al.}~\cite{2002-SWAT-AdaptiveAlgorithmsForConstructingConvexHullsAndTriangulationsOfPolygonalChains-LevcopoulosLingasMitchell}) and of the input structure (as defined by Afshani \emph{et al.}~\cite{2009-FOCS-InstanceOptimalGeometricAlgorithms-AfshaniBarbayChan}) at the same time in a synergistic way. \end{INUTILE}
Levcopoulos et al.~\cite{2002-SWAT-AdaptiveAlgorithmsForConstructingConvexHullsAndTriangulationsOfPolygonalChains-LevcopoulosLingasMitchell} described how to \emph{partition} a sequence of points into subsequences of consecutive positions for which the \textsc{Convex Hull} can be computed in linear time. We refine their analysis to take into account the distribution of the sizes of the subsequences (Section~\ref{sec:inputOrderAdaptiveConvexHull}). This notion of input order for the computation of the \textsc{Convex Hull} is less restrictive than the one seen for the computation of the \textsc{Maxima Set}, in the sense that it allows considering more sophisticated sequences as ``easy'' sequences.
As the computation of \textsc{Convex Hulls} reduces to the computation of \textsc{Upper Hulls}\begin{LONG} (the computation of the \textsc{Lower Hull} is symmetric and completes it into the computation of the \textsc{Convex Hull})\end{LONG}, we focus on the latter.
We describe an algorithm \textsc{Merging Upper Hulls} in Section~\ref{sec:UpperHullUnion}, which yields a synergistic algorithm taking advantage of both the input order and the input structure in Section~\ref{sec:synergisticUpperHull}. This synergistic algorithm outperforms both the algorithms described by Levcopoulos et al.~\cite{2002-SWAT-AdaptiveAlgorithmsForConstructingConvexHullsAndTriangulationsOfPolygonalChains-LevcopoulosLingasMitchell} and Afshani et al.~\cite{2009-FOCS-InstanceOptimalGeometricAlgorithms-AfshaniBarbayChan}, as well as any dovetailing combination of them.
\begin{TODO} CHECKOUT \cite{1990-BIT-ASublogarithmicConvexHullAlgorithm-FjallstromKatajainenLevcopoulosPetersson}, I don't remember what results it has? \end{TODO}
\subsection{Input Order Adaptive Convex Hull} \label{sec:inputOrderAdaptiveConvexHull}
A \emph{polygonal chain} is a curve specified by a sequence of points $p_1, \dots, p_n$. The curve itself consists of the line segments connecting the pairs of consecutive points. A polygonal chain is \emph{simple} if \begin{SHORT}
it does not have a self-intersection. \end{SHORT} \begin{LONG}
any two edges of $P$ that are not adjacent are disjoint, or if the intersection point is a vertex of $P$; and any two adjacent edges share only their common vertex. \end{LONG} Levcopoulos et al.~\cite{2002-SWAT-AdaptiveAlgorithmsForConstructingConvexHullsAndTriangulationsOfPolygonalChains-LevcopoulosLingasMitchell} described an algorithm that computes the \textsc{Convex Hull} of $n$ planar points in time within $O(n\log\kappa)$, where $\kappa$ is the minimal number of simple chains into which the input sequence of points can be partitioned. The algorithm partitions the points into simple subchains, computes their \textsc{Convex Hulls}, and merges them. In their analysis the complexity of both the partitioning and merging steps are within $O(n\log\kappa)$. In Section~\ref{sec:synergisticUpperHull}, we describe a partitioning algorithm running in linear time, which is key to the synergistic result. \begin{LONG}
For a given polygonal chain, there can be several partitions into simple subchains of minimum size for it. \end{LONG} We describe a refined analysis which takes into account the relative imbalance between the sizes of the subchains. \begin{LONG}
The idea behind the refinement is to bound the number of operations that the algorithm executes for every simple subchain.
This analysis makes it possible to identify families of instances where the complexity of the algorithm is linear even though the number of simple subchains into which the chain is split is logarithmic. \end{LONG} \begin{LONG}
In the recursion tree of the execution of the algorithm described by Levcopoulos \emph{et al.}~\cite{2002-SWAT-AdaptiveAlgorithmsForConstructingConvexHullsAndTriangulationsOfPolygonalChains-LevcopoulosLingasMitchell} on an input $C$ formed by $n$ planar points, every node represents a subchain of $C$. The cost of every node is linear in the size of the subchain that it represents. The simplicity test and the merge process are both linear in the number of points in the subchain. Every time this algorithm discovers that the polygonal chain is simple, the corresponding node in the recursion tree becomes a leaf. \end{LONG}
\begin{theorem} \label{theo:simple} Given a sequence $S$ of $n$ planar points which can be partitioned into $\kappa$ simple subchains of respective sizes $n_1,\ldots,n_\kappa$, Levcopoulos et al.'s algorithm~\cite{2002-SWAT-AdaptiveAlgorithmsForConstructingConvexHullsAndTriangulationsOfPolygonalChains-LevcopoulosLingasMitchell} computes the convex hull of $S$ in time within $O(n(1+\mathcal{H}(n_1, \dots, n_{\kappa}))) \subseteq O(n(1{+}\log{\kappa})) \subseteq O(n\log{n})$, where $\mathcal{H}(n_1, \dots, n_\kappa) = \sum_{i=1}^\kappa{\frac{n_i}{n}}\log{\frac{n}{n_i}}$\begin{SHORT}.\end{SHORT} \begin{LONG} , which is worst-case optimal over instances of $n$ points that can be partitioned into $\kappa$ simple subchains of sizes $n_1, \dots, n_{\kappa}$. \end{LONG} \end{theorem} \begin{INUTILE}
\begin{theorem} \label{theo:simple} Given a sequence $S$ of $n$ planar points, Levcopoulos et al.'s algorithm~\cite{2002-SWAT-AdaptiveAlgorithmsForConstructingConvexHullsAndTriangulationsOfPolygonalChains-LevcopoulosLingasMitchell} computes the convex hull of $S$ in time within $O(n(1+\alpha)) \subseteq O(n(1{+}\log{\kappa})) \subseteq O(n\log{n})$, where $\alpha$ is the minimal entropy $\min\{\mathcal{H}(n_1, \dots, n_{\kappa})\}$ over any partition of $S$ into $\kappa$ simple subchains of consecutive positions, of respective sizes $n_1,\ldots,n_\kappa$, and $\mathcal{H}(n_1, \dots, n_\kappa) = \sum_{i=1}^\kappa{\frac{n_i}{n}}\log{\frac{n}{n_i}}$\begin{SHORT}.\end{SHORT} \begin{LONG}
, which is worst-case optimal over instances of $n$ points that can be partitioned into $\kappa$ simple subchains of sizes $n_1, \dots, n_{\kappa}$. \end{LONG} \end{theorem} \end{INUTILE} \begin{PROOF}
\begin{proof}
Fix the subchain $c_i$ of size $n_i$. In the worst case, the algorithm considers the $n_i$ points of $c_i$ for the simplicity test and the merging process, in all the levels of the recursion tree from the first level to the level $\lceil \log{\frac{n}{n_i}} \rceil + 1$, because the sizes of the subchains in these levels are greater than $n_i$. In the next level, one of the nodes $\ell$ of the recursion tree fits completely inside $c_i$ and therefore it becomes a leaf. Hence, at least $\frac{n_i}{4}$ points from $c_i$ are discarded for the following iterations. The remaining points of $c_i$ are in the left or the right ends of subchains represented by nodes in the same level of $\ell$ in the recursion tree. In all of the following levels, the number of operations of the algorithm involving points from $c_i$ can be bounded by the size of the subchains in those levels. So, the sum of the number of the operations in these levels is within $O(n_i)$. As a result, the number of operations of the algorithm involving points from $c_i$ is within $O(n_i\log{\frac{n}{n_i}} + n_i)$. In total, the time complexity of the algorithm is within $O(n + \sum_{i=1}^\kappa n_i\log{\frac{n}{n_i}}) = O(n(1+\mathcal{H}(n_1, \dots, n_\kappa))) \subseteq O(n(1{+}\log{\kappa})) \subseteq O(n\log{n})$.
We prove the optimality of this complexity in the worst-case over instances of $n$ points that can be partitioned into $\kappa$ simple subchains of sizes $n_1, \dots, n_{\kappa}$ by giving a tight lower bound. Barbay and Navarro~\cite{2013-TCS-OnCompressingPermutationsAndAdaptiveSorting-BarbayNavarro} showed a lower bound of $\Omega(n(1+{\cal H}(r_1,\ldots,r_\rho)))$ in the comparison model for {\sc{Sorting}} a sequence of $n$ numbers, in the worst case over instances covered by $\rho$ runs (increasing or decreasing) of sizes $r_1, \dots, r_\rho$, respectively, summing to $n$. The {\sc{Sorting}} problem can be reduced in linear time to the problem of computing the {\sc{Convex Hulls}} of a chain of $n$ planar points that can be partitioned into $\rho$ simple subchains of sizes $r_1, \dots, r_\rho$, respectively. For each real number $r$, this is done by producing a point with $(x,y)$-coordinates $(r,r^2)$. The $\rho$ runs (alternating increasing and decreasing) are transformed into $\rho$ simple subchains of the same sizes. The sorted sequence of the numbers can be obtained from the {\sc{Convex Hull}} of the points in linear time. \qed
\end{proof} \end{PROOF}
Similarly to the computation of the \textsc{Maxima Set}, we define in the following section an algorithm for \textsc{Merging Upper Hulls}. This algorithm is a building block towards the synergistic algorithm that computes the \textsc{Convex Hull} of a set of planar points, and is more complex than that for \textsc{Merging Maxima}.
\subsection{Union of Upper Hulls} \label{sec:UpperHullUnion}
We describe the \texttt{Quick Union Hull} algorithm, which computes the \textsc{Upper Hull} of the union of $\rho$ upper hull sequences in the plane, assuming that the upper hull sequences are given in sorted order by their $x$-coordinates. Given an upper hull sequence $\mathcal{U}_i$, let $\mathcal{U}_i[a]$ and $\mathcal{U}_i[b..c]$ denote the $a$-th point and the block of $c-b+1$ consecutive points corresponding to the positions from $b$ to $c$ in $\mathcal{U}_i$, respectively. Given two points $p$ and $q$, let $m(p,q)$ denote the slope of the straight line that passes through $p$ and $q$.
\subsubsection{Description of the algorithm Quick Union Hull.}
The \texttt{Quick Union Hull} algorithm is inspired by an algorithm described by Chan et al.~\cite{1997-DCG-PrimalDividingAndDualPruningOutputSensitiveConstructionOfFoudDimensionalPolytopesAndThreeDimensionalVoronoiDiagrams-ChanSnoeyinkYap}. It chooses an edge of slope $\mu$ from the upper hull sequences, and computes the point $p$ that has a supporting line of slope $\mu$. The algorithm then splits the points in the upper hull sequences by $p_x$. It computes the two tangents of $p$ with all the upper hull sequences: the one to the left of $p$ and the one to the right of $p$, and discards all the points below these tangents. The algorithm also computes a block of consecutive points in the upper hull sequence that contains $p$, whose points are part of the output ($p$ is in this block), and discards the points in this block. (This last step is key to the optimality of the algorithm and is significantly more complex than its counterpart in the \textsc{Merging Maxima} solution.) The algorithm then recurses on the non-discarded points to the left of $p$ and on the non-discarded points to the right of $p$. All these steps take advantage of the fact that the points in the upper hull sequences are sorted in increasing order of their $x$-coordinates and that the slopes of the edges of the upper hull sequences are monotonically decreasing from left to right. (See Algorithm~\ref{alg:quh} for a more formal description.)
\begin{algorithm}[t]
\caption{\texttt{Quick Union Hull}}
\label{alg:quh}
\begin{algorithmic}[1]
\REQUIRE{A set $\mathcal{U}_1, \dots, {\cal U}_\rho$ of $\rho$ upper hull sequences}
\ENSURE{The \textsc{Upper Hull} of the union of the set $\mathcal{U}_1, \dots, {\cal U}_\rho$}
\STATE Compute the median $\mu$ of the slopes of the middle edges of the
upper hull sequences;
\STATE Find the point $p$ that has a supporting line of slope $\mu$ through
doubling searches for the value $\mu$ in the slopes of the edges of all upper hull sequences,
starting at both ends in parallel, note $j\in[1..\rho]$
the index of the upper hull sequence containing $p$;
\STATE Perform doubling searches for the value $p_x$ in the
$x$-coordinates of the points of all upper hull sequences except ${\cal U}_j$, starting at both
ends in parallel;
\STATE Find the two tangents of $p$ with all upper hull sequences: the one to the left of $p$
and the one to the right of $p$, through doubling searches testing for each point $q$ the slope
of the two edges that have $q$ as an endpoint and the slope of the line $pq$, and discard
the points below these tangents.
\STATE Discard a block in ${\cal U}_j$ containing $p$ that forms part of the output, by
computing the tangent between ${\cal U}_j$ and the upper hull sequences left of $p$ of
minimum slope and the tangent between ${\cal U}_j$ and the upper hull sequences right of $p$
of maximum slope.
\STATE Repeat until there is no more than one upper hull sequence of size $1$:
pair those left of $p$ and pair those right of $p$ and apply the
Step $4$ to those pairs;
\STATE Discard all points that lie below the lines that join $p$ with the leftmost
point and the rightmost point of the upper hull sequences;
\STATE Recurse on the non-discarded points left and right of $p$.
\end{algorithmic} \end{algorithm}
In the following we describe Step $5$ of Algorithm~\ref{alg:quh} in more detail. We describe only how to compute a block of consecutive points to the right of $p$ in ${\cal U}_j$ that forms part of the output, as the left counterpart is symmetric. Let $\tau$ be the tangent of maximum slope between ${\cal U}_j$ and the upper hull sequences to the right of $p$. Let $q$ be the point in ${\cal U}_j$ that lies on $\tau$. Let $\lambda$ be the tangent of maximum slope among those computed at Step $4$. \begin{LONG}
All the points to the right of $p$ are below $\lambda$ and \end{LONG} $\lambda$ is a separating line between the portion of ${\cal U}_j$ that contains $q$ and the points to the right of $p$. Given two upper hull sequences ${\cal U}_i$ and ${\cal U}_k$ separated by a vertical line, Barbay and Chen~\cite{2008-CCCG-ConvexHullOfTheUnionOfConvexObjectsInThePlane-BarbayChen} described an algorithm that computes the common tangent between ${\cal U}_i$ and ${\cal U}_k$ in time within $O(\log a + \log b)$, where ${\cal U}_i[a]$ and ${\cal U}_k[b]$ are the points that lie on the tangent. At each step this algorithm considers the points ${\cal U}_i[c]$ and ${\cal U}_k[d]$ and can certify, in at least one of the two upper hull sequences, whether the tangent is to the right or to the left of the point considered. A minor variant manages the case where the separating line is not vertical. Algorithm~\ref{alg:quh} executes several instances of this algorithm in parallel between ${\cal U}_j$ and all upper hull sequences to the right of $p$, always considering the same point in ${\cal U}_j$ (similarly to Demaine et al.'s algorithm~\cite{2000-SODA-AdaptiveSetIntersectionsUnionsAndDifferences-DemaineLopezOrtizMunro} to compute the intersection of sorted sets). Once all parallel decisions about the point ${\cal U}_j[c]$ are made, the instances can be divided into two sets: (i) those whose tangents are to the left of ${\cal U}_j[c]$ and (ii) those whose tangents are to the right of ${\cal U}_j[c]$. Algorithm~\ref{alg:quh} stops the parallel computation of tangents for the instances in the set (ii). Step $5$ continues until there is just one instance running, and computes the tangent $\tau$ in this instance.
\subsubsection{Analysis of the Quick Union Hull Algorithm.} \label{sec:analysisQUH}
Similarly to the case of \textsc{Merging Maxima}, every algorithm for \textsc{Merging Upper Hulls} needs to certify that some blocks of the upper hull sequences cannot participate in the \textsc{Upper Hull} of the union, and that some blocks of the upper hull sequences are in the \textsc{Upper Hull} of the union. \begin{INUTILE}
that computes the \textsc{Upper Hull} of the union of $\rho$ upper hull sequences needs to certify that some blocks of those cannot participate in the \textsc{Upper Hull} of the union, and that some blocks of those are in the \textsc{Upper Hull} of the union. \end{INUTILE} In the following we formalize the notion of a \emph{certificate} for the \textsc{Merging Upper Hulls} problem.
\begin{definition}
Given the points $\mathcal{U}_i[a]$ and $\mathcal{U}_j[b]$, let $\ell$ be the straight line that passes through $\mathcal{U}_i[a]$ and $\mathcal{U}_j[b]$ and let $m_\ell$ be the slope of $\ell$. $\langle \mathcal{U}_i[a], \mathcal{U}_j[b] \supset \mathcal{U}_k[c..d..e] \rangle$ is an \emph{Elementary Eliminator Argument} if all the points of the block $\mathcal{U}_k[c..e]$ are between the vertical lines through $\mathcal{U}_i[a]$ and $\mathcal{U}_j[b]$, $m(\mathcal{U}_k[d-1], \mathcal{U}_k[d]) \ge m_\ell \ge m(\mathcal{U}_k[d], \mathcal{U}_k[d+1])$, and the point $\mathcal{U}_k[d]$ lies below $\ell$. \end{definition}
If $\langle \mathcal{U}_i[a], \mathcal{U}_j[b] \supset \mathcal{U}_k[c..d..e] \rangle$ is an elementary eliminator argument, then the points in the block $\mathcal{U}_k[c..e]$ cannot participate in the \textsc{Upper Hull} of the union. \begin{LONG}
\begin{lemma}
An elementary eliminator argument $\langle \mathcal{U}_i[a], \mathcal{U}_j[b] \supset \mathcal{U}_k[c..d..e] \rangle$ can be checked in constant time.
\end{lemma} \end{LONG} Several blocks that are ``eliminated'' by the same pair of points can be combined into a single argument, a notion captured by the \emph{block eliminator argument}.
\begin{definition}
$\langle \mathcal{U}_i[a], \mathcal{U}_j[b] \supset \mathcal{U}_{k_1}[c_1..d_1..e_1], \dots, \mathcal{U}_{k_t}[c_t..d_t..e_t] \rangle$ is a \emph{Block Eliminator Argument} if $\langle \mathcal{U}_i[a], \mathcal{U}_j[b] \supset \mathcal{U}_{k_1}[c_1..d_1..e_1] \rangle, \dots, \langle \mathcal{U}_i[a], \mathcal{U}_j[b] \supset \mathcal{U}_{k_t}[c_t..d_t..e_t] \rangle$ are elementary eliminator arguments. \end{definition}
\begin{LONG} A block eliminator argument is checked by checking each of the elementary eliminator arguments that form it.
\begin{corollary}
A block eliminator argument $\langle \mathcal{U}_i[a], \mathcal{U}_j[b] \supset \mathcal{U}_{k_1}[c_1..d_1..e_1], \dots,\mathcal{U}_{k_t}[c_t..d_t..e_t] \rangle$ can be checked in time within $O(t)$.
\end{corollary} \end{LONG}
As for \textsc{Merging Maxima}, any correct algorithm for \textsc{Merging Upper Hulls} must certify that some points are part of the output.
\begin{definition}
$\langle \mathcal{U}_i[a] \dashv \mathcal{U}_{j_1}[b_1], \dots, \mathcal{U}_{j_t}[b_t] \rangle$ is an \emph{Elementary Convex Argument} if there exists a straight line $\ell$ that passes through $\mathcal{U}_i[a]$ of slope $m_\ell$ such that $m(\mathcal{U}_{j_1}[b_1-1], \mathcal{U}_{j_1}[b_1]) \ge m_\ell \ge m(\mathcal{U}_{j_1}[b_1], \mathcal{U}_{j_1}[b_1+1]), \dots, m(\mathcal{U}_{j_t}[b_t-1], \mathcal{U}_{j_t}[b_t]) \ge m_\ell \ge m(\mathcal{U}_{j_t}[b_t], \mathcal{U}_{j_t}[b_t+1])$; $m(\mathcal{U}_i[a-1], \mathcal{U}_i[a]) \ge m_\ell \ge m(\mathcal{U}_i[a], \mathcal{U}_i[a+1])$; and the points $\mathcal{U}_{j_1}[b_1], \dots, \mathcal{U}_{j_t}[b_t]$ lie below $\ell$. \end{definition}
If $\langle \mathcal{U}_i[a] \dashv \mathcal{U}_{j_1}[b_1], \dots, \mathcal{U}_{j_t}[b_t] \rangle$ is an elementary convex argument, then the point $\mathcal{U}_i[a]$ is in the \textsc{Upper Hull} of the union of the upper hulls $\mathcal{U}_i, \mathcal{U}_{j_1}, \dots, \mathcal{U}_{j_t}$. \begin{LONG}
\begin{lemma}
An \emph{elementary convex argument} $\langle \mathcal{U}_i[a] \dashv \mathcal{U}_{j_1}[b_1], \dots, \mathcal{U}_{j_t}[b_t] \rangle$ can be checked in time within $O(t)$.
\end{lemma} \end{LONG} There are blocks that can be ``easily'' certified to form part of the output.
\begin{definition}\label{def:blockConvex}
Given the points $\mathcal{U}_i[a]$ and $\mathcal{U}_i[b]$, let $\ell$ be the straight line that passes through $\mathcal{U}_i[a]$ and $\mathcal{U}_i[b]$ and let $m_\ell$ be the slope of $\ell$. $\langle \mathcal{U}_i[a..b] \dashv \mathcal{U}_{j_1}[c_1..d_1..e_1],\begin{SHORT}\\\end{SHORT} \dots, \mathcal{U}_{j_t}[c_t..d_t..e_t] \rangle$ is a \emph{Block Convex Argument} if $\langle \mathcal{U}_i[a] \dashv \mathcal{U}_{j_1}[c_1], \dots, \mathcal{U}_{j_t}[c_t] \rangle$ and $\langle \mathcal{U}_i[b] \dashv \mathcal{U}_{j_1}[e_1], \dots, \mathcal{U}_{j_t}[e_t] \rangle$ are elementary convex arguments; $m(\mathcal{U}_{j_1}[d_1-1], \mathcal{U}_{j_1}[d_1]) \ge m_\ell \ge m(\mathcal{U}_{j_1}[d_1], \mathcal{U}_{j_1}[d_1+1]),\dots, m(\mathcal{U}_{j_t}[d_t-1], \mathcal{U}_{j_t}[d_t]) \ge m_\ell \ge m(\mathcal{U}_{j_t}[d_t], \mathcal{U}_{j_t}[d_t+1])$, and the points $\mathcal{U}_{j_1}[d_1], \dots, \mathcal{U}_{j_t}[d_t]$ lie below $\ell$. \end{definition}
If $\langle \mathcal{U}_i[a..b] \dashv \mathcal{U}_{j_1}[c_1..d_1..e_1], \dots, \mathcal{U}_{j_t}[c_t..d_t..e_t] \rangle$ is a block convex argument then the points in the block $\mathcal{U}_i[a..b]$ are in the \textsc{Upper Hull} of the union of the upper hulls $\mathcal{U}_i, \mathcal{U}_{j_1}, \dots, \mathcal{U}_{j_t}$.
\begin{LONG}
\begin{lemma}
A \emph{block convex argument} $\langle \mathcal{U}_i[a..b] \dashv \mathcal{U}_{j_1}[c_1..d_1..e_1], \dots, \mathcal{U}_{j_t}[c_t..d_t..e_t] \rangle$ can be checked in time within $O(t)$.
\end{lemma}
Similarly to \textsc{Merging Maxima}, the difficulty of finding and describing block eliminator and block convex arguments depends on the points they refer to in the upper hull sequences, a notion captured by ``argument points'': \end{LONG}
\begin{definition}
Given an argument $\langle \mathcal{U}_i[a], \mathcal{U}_j[b] \supset \mathcal{U}_{k_1}[c_1..d_1..e_1], \dots, \mathcal{U}_{k_t}[c_t..d_t..e_t] \rangle$ or $\langle \mathcal{U}_i[a..b] \dashv \mathcal{U}_{j_1}[c_1..d_1..e_1], \dots, \mathcal{U}_{j_t}[c_t..d_t..e_t] \rangle$, the \emph{Argument Points} are the points $\mathcal{U}_i[a]$ and $\mathcal{U}_j[b]$ (respectively, $\mathcal{U}_i[a]$ and $\mathcal{U}_i[b]$). \end{definition}
Those atomic arguments can be checked in time proportional to the number of blocks in them, and they combine into a general definition of a certificate that any correct algorithm for \textsc{Merging Upper Hulls} in the algebraic decision tree computational model can be modified to output.
\begin{definition}
Given a set of upper hull sequences and their \textsc{Upper Hull} $\mathcal{U}$, expressed as several blocks of the upper hull sequences, a \emph{certificate} of $\mathcal{U}$ is a set of block eliminator and block convex arguments such that the \textsc{Upper Hull} of any instance satisfying those arguments is given by the description of $\mathcal{U}$. The length of a certificate is the number of distinct argument points in it. \end{definition}
Similarly to \textsc{Merging Maxima}, the key to the analysis is to separate the doubling search steps from the other steps of the algorithm.
\begin{theorem}\label{theo:quh}
Given $\rho$ upper hull sequences, the time complexity of the \texttt{Quick Union Hull} algorithm is within $O(\sum_{j=1}^{\beta}\log{s_{j}} + \sum^{\delta}_{i=1}\log{\binom{\rho}{m_i}})$ when it computes the \textsc{Upper Hull} of the union of these upper hull sequences, where $\beta$ is the number of blocks in the certificate ${\cal C}$ computed by the algorithm; $s_1, \dots, s_\beta$ are the sizes of these blocks; $\delta$ is the length of ${\cal C}$; and $m_1, \dots, m_\delta$ is a sequence where $m_i$ is the number of upper hull sequences whose blocks form the $i$-th argument of ${\cal C}$. \end{theorem}
The optimality of this algorithm is a consequence of the fact that it checks each argument of any certificate using a constant number of argument points.
\begin{lemma}\label{lem:opt-hull}
The algorithm \texttt{Quick Union Hull} computes a certificate
whose length is within a constant factor of the length of a certificate of minimal length. \end{lemma}
In the following section we describe a synergistic result that combines the results of Sections~\ref{sec:inputOrderAdaptiveConvexHull} and~\ref{sec:UpperHullUnion}.
\subsection{Synergistic Computation of Upper Hulls} \label{sec:synergisticUpperHull}
The \texttt{(Simple,Structure) Synergistic Hull} algorithm proceeds in two phases. It first decomposes the input into simple subchains of consecutive positions using a (new) linear-time, doubling-search-inspired partitioning algorithm that searches for simple chains from left to right (see \begin{LONG}Algorithm~\ref{alg:dsp}\end{LONG}\begin{SHORT}Appendix~\ref{app:simple}\end{SHORT} for a detailed description of the algorithm). It computes their upper hull sequences, and then merges those using the \texttt{Quick Union Hull} algorithm described previously.
\begin{LONG}
\begin{algorithm}
\caption{\texttt{Doubling Search Partition}}
\label{alg:dsp}
\begin{algorithmic}[1]
\REQUIRE{A sequence of $n$ planar points $p_1, \dots, p_n$} \ENSURE{A sequence of simple polygonal chains}
\STATE Initialize $i$ to $1$;
\FOR{$t = 1, 2, \dots$ } \IF{$i+2^t > n$ \OR the chain $p_i, \dots, p_{i+2^t}$ is \NOT simple} \STATE {Add the chain $p_i, \dots, p_{i+2^{t-1}}$ to the output} \STATE {Update $i \leftarrow i+2^{t-1} + 1$} \STATE {Reset $t \leftarrow 1$}
\ENDIF
\ENDFOR
\end{algorithmic}
\end{algorithm}
The \texttt{Doubling Search Partition} algorithm partitions the polygonal chain into simple subchains whose sizes have asymptotically minimum entropy among all the partitions into simple subchains. The following lemma formalizes this fact.
\begin{lemma}
Given a sequence $S$ of $n$ planar points, the \texttt{Doubling Search Partition} algorithm computes in linear time a partition of $S$ into $k$ simple polygonal chains of consecutive points, of sizes $n_1, \dots, n_k$, such that $n(1+\mathcal{H}(n_1, \dots, n_{k})) \in O(n(1+\alpha))$, where $\alpha$ is the minimal entropy $\min\{\mathcal{H}(n_1, \dots, n_{\kappa})\}$ over any partition of $S$ into $\kappa$ simple subchains of consecutive positions, of respective sizes $n_1,\ldots,n_\kappa$, and $\mathcal{H}(n_1, \dots, n_\kappa) = \sum_{i=1}^\kappa{\frac{n_i}{n}}\log{\frac{n}{n_i}}$. \end{lemma}
The proof of this lemma is similar to the proof of Theorem~\ref{theo:simple} where the number of operations for each simple subchain of a partition into simple subchains is bounded separately. The following theorem summarizes the synergistic result in this section.
\end{LONG}
\begin{theorem}\label{theo:syn-hull}
Let $\mathcal{S}$ be a sequence of points in the plane such that ${\cal S}$ can be partitioned into $\kappa$ simple subchains. Let $h$ be the number of points in the \textsc{Upper Hull} of ${\cal S}$. There exists an algorithm whose time complexity is within $O(n + \sum^{\delta}_{i=1}\log{\binom{\kappa}{m_i}}) \subseteq O(n\log(\min(\kappa, h)))$ when it computes the \textsc{Upper Hull} of $\mathcal{S}$, where $\delta$ is the length of the certificate ${\cal C}$ computed by the union algorithm; and $m_1, \dots, m_\delta$ is a sequence where $m_i$ is the number of upper hull sequences of the simple subchains whose blocks form the $i$-th argument of ${\cal C}$. This number of comparisons is optimal in the worst case over instances $\mathcal{S}$ formed by $\kappa$ simple subchains whose \textsc{Upper Hulls} have certificates ${\cal C}$ of length $\delta$ such that $m_1, \dots, m_\delta$ is a sequence where $m_i$ is the number of upper hulls of simple subchains whose blocks form the $i$-th argument of ${\cal C}$. \end{theorem}
\begin{VLONG}
Even though the algorithms are more complex, except for some details, the proofs of Theorems~\ref{theo:quh} and~\ref{theo:syn-hull} and Lemma~\ref{lem:opt-hull} of Section~\ref{sec:convex} are very similar to those described in the previous section.
We describe the intuition for the lower bound below: as for the computation of \textsc{Maxima Sets}, it is a simple adversary argument, based on the definition of a family of ``hard'' instances for each possible value of the parameters of the analysis, building over each other, but the combination of elementary instances requires a little bit of extra care.
First, we verify the lower bound for ``easy'' instances, of finite difficulty:
general instances formed by a single ($\kappa=1$) simple sequence obviously require $\Omega(n)$ comparisons (no correct algorithm can afford to ignore a single point of the input), while
general instances dominated by a single edge ($h=1$) also require $\Omega(n)$ comparisons.
Each of these lower bounds yields a distribution of instances, either decomposed into $\kappa=1$ simple chains or of output size $h=1$, such that any deterministic algorithm performs $\Omega(n)$ comparisons on average on a uniform distribution of those instances.
Such distributions of ``elementary'' instances can be duplicated to produce various distributions of elementary instances, and combined to define a distribution of harder instances.
\begin{lemma}
Given the positive integers $n,\kappa, \beta, s_1, \ldots, s_\beta, \delta, m_1, \ldots, m_\delta$,
there is a family of instances which can each be partitioned into $\kappa$ simple subchains such that,
\begin{itemize}
\item $\beta$ and $s_1, \ldots, s_\beta$ are the number and sizes of the blocks in the certificate ${\cal C}$ computed by the union algorithm, respectively;
\item $\delta$ is the length of ${\cal C}$;
\item $m_1, \ldots, m_\delta$ is a sequence where $m_i$ is the number of upper hulls of the simple subchains whose blocks form the $i$-th argument of ${\cal C}$; and
\item on average on a uniform distribution of these instances, any algorithm computing the \textsc{Upper Hull} of $S$ in the comparison model performs within $\Omega(n + \sum^{\delta}_{i=1}\log{\binom{\kappa}{m_i}})$ comparisons.
\end{itemize} \end{lemma}
Finally, any such distribution with a computational lower bound on average yields a computational lower bound for the worst case instance complexity of any randomized algorithm, on average on its randomness; and as a particular case a lower bound on the worst case complexity of any deterministic algorithm:
\begin{corollary}
Given the positive integers $\kappa, \beta, s_1, \ldots, s_\beta, \delta, m_1, \ldots, m_\delta$, and an algorithm $A$ computing the \textsc{Upper Hull} of a sequence of $n$ planar points in the algebraic decision tree computational model (whether deterministic or randomized), there is an instance $I$ such that $A$ performs a number of comparisons within $\Omega(n + \sum^{\delta}_{i=1}\log{\binom{\kappa}{m_i}})$ when it computes the \textsc{Upper Hull} of $I$. \end{corollary} \begin{proof}
\begin{LONG}
A direct application of Yao's minimax principle \cite{1958-PJM-OnGeneralMinimaxTheorems-Sion,1977-FOCS-ProbabilisticComputationsTowardAUnifiedMeasureOfComplexity-Yao,1944-BOOK-TheoryOfGamesAndEconomicBehavior-VonNeumannMorgenstern}. \qed
\end{LONG}
\begin{SHORT}
A direct application of Yao's minimax principle \cite{1958-PJM-OnGeneralMinimaxTheorems-Sion}. \qed
\end{SHORT} \end{proof} \end{VLONG}
\begin{LONG}
This concludes the description of our synergistic results. In the next section, we discuss the issues left open for improvement. \end{LONG}
\begin{LONG}
\section{Discussion}
\label{sec:discussion}
Considering the computation of the \textsc{Maxima Set} and of the \textsc{Convex Hull}, we have built upon previous results taking advantage either of some notions of input order or of some notions of input structure, to describe solutions which take advantage of both in a synergistic way. There are many ways in which those results can be improved further: we list only a selection here.
\begin{INUTILE}
First, Afshani et al.~\cite{2009-FOCS-InstanceOptimalGeometricAlgorithms-AfshaniBarbayChan} refined Kirkpatrick and Seidel's input structure adaptive results~\cite{1985-SOCG-OutputSizeSensitiveAlgorithmsForFindingMaximalVectors-KirkpatrickSeidel} for both the computation of the \textsc{Maxima Set} and of the \textsc{Convex Hull}: even those their solution is not of practical use (because of high constant factors), it would be interesting to obtain a synergistic solution which outperforms theirs. \end{INUTILE}
In the same line of thought, Ahn and Okamoto~\cite{2011-IEICE-AdaptiveAlgorithmsForPlanarConvexHullProblems-AhnOkamoto} described some other notion of input order than the one we considered here, which can potentially yield another synergistic solution in combination with a given notion of input structure. This is true for any of the many notions of input order which could be adapted from \textsc{Sorting}~\cite{1992-ACJ-AnOverviewOfAdaptiveSorting-MoffatPetersson}.
Whereas being adaptive to as many measures of difficulty as possible at once is a worthy goal in theory, it usually comes at a price of an increase in the constant factor of the running time of the algorithm: it will become important to measure, for the various practical applications of each problem, which measures of difficulty take low value in practice. It will be necessary to do some more theoretical work to identify what to look for in the practical applications, but then it will be important to measure the practical difficulties of the instances.
\textbf{Acknowledgments:}
The authors would like to thank Javiel Rojas for helping with the bibliography on the computation of the \textsc{Maxima Set} of a set of points. \end{LONG}
\end{document} | arXiv |
Article | Open | Published: 21 June 2018
Predictive modeling of battery degradation and greenhouse gas emissions from U.S. state-level electric vehicle operation
Fan Yang, Yuanyuan Xie, Yelin Deng & Chris Yuan (ORCID: orcid.org/0000-0002-7744-7252)
Nature Communications, volume 9, Article number: 2429 (2018)
Subjects: Energy and behaviour; Projection and prediction
Electric vehicles (EVs) are widely promoted as clean alternatives to conventional vehicles for reducing greenhouse gas (GHG) emissions from ground transportation. However, the battery undergoes a sophisticated degradation process during EV operations and its effects on EV energy consumption and GHG emissions are unknown. Here we show on a typical 24 kWh lithium-manganese-oxide–graphite battery pack that the degradation of EV battery can be mathematically modeled to predict battery life and to study its effects on energy consumption and GHG emissions from EV operations. We found that under US state-level average driving conditions, the battery life is ranging between 5.2 years in Florida and 13.3 years in Alaska under 30% battery degradation limit. The battery degradation will cause a 11.5–16.2% increase in energy consumption and GHG emissions per km driven at 30% capacity loss. This study provides a robust analytical approach and results for supporting policy making in prioritizing EV deployment in the U.S.
The fossil fuel combustion in the transportation sector generates 25.8% of total greenhouse gas (GHG) emissions in the U.S.1 To mitigate the impacts of ground transportation on climate change, the US Environmental Protection Agency along with the US National Highway Traffic Safety Administration has set a regulatory standard to reduce the average GHG emissions of US fleet passenger cars from the 139.8 g km−1 base level in 2016 to 88.8 g km−1 in 2025 (ref. 2).
Electric vehicles (EVs) are widely promoted as clean alternatives to conventional vehicles for reducing GHG emissions from ground transportation. The US federal and many state governments are providing a variety of financial and operating incentives, including tax credits, fast lane access, emission test exemptions, etc., to promote EV adoption3, 4. It is expected that EVs will share 24% of the US light-vehicle fleet in 2030 (ref. 5).
Current EVs are predominantly powered by lithium ion batteries which undergo a complex degradation process during actual EV operation, dictating the energy storage and generating indirect GHG emissions from the consumed electricity. The electricity consumption and associated GHG emissions from EV operations are determined by EV operating conditions and battery charging/discharging processes. In recent years, some research has been conducted on investigating such operating factors as travel demand6, 7, electricity mix8,9,10, operating pattern10, 11, and ambient temperature12, 13 on electricity consumption and GHG emissions from EV operation, while no study has been conducted considering the battery degradation under EV actual driving conditions in the analyses of the electricity consumption and GHG emissions. In current studies on energy and GHG analysis, the EV batteries are simply assumed to have the same lifetime as the vehicles6,7,8,9,10,11,12,13, or consider battery replacement at certain cut-off mileage14, 15. But in actual EV operation, battery degradation is gradually happening along time under specific driving conditions, and the battery degradation affects the EV electricity consumption and GHG emissions in three ways: decreasing driving range due to reduced capacity, decreasing charging/discharging efficiency due to increasing resistance, requiring battery replacement when the capacity is dropped to the battery degradation limit16.
In general, EV battery degradation undergoes two processes: one is the cycling capacity loss due to the internal solid-electrolyte interphase (SEI) layer growth, structure degradation of the electrodes and cyclable lithium loss during the battery charging/discharging process, as mainly dictated by the number of battery charging/discharging cycles; the other is the calendar capacity loss due to battery self-discharge and side reactions during energy storage period, as mainly determined by the state of charge, aging time, and ambient temperature, particularly the high temperatures to which the battery is exposed16,17,18. Due to the largely different operating conditions across the U.S., the EV battery degradation, electricity consumptions, and GHG emissions in each state are largely different.
A predictive analysis of the battery degradation and its effects on energy consumption and GHG emissions from US state-level EV operation is currently unavailable. Here we report a comprehensive and robust analytical approach for quantifying the battery degradation and its effects on energy consumptions and GHG emissions from a mid-size all-battery EV under the average driving conditions in each state of US. From this study, we found that the EV battery degradation is largely different from year to year in each US state. For the annual battery degradation, the calendar capacity loss contributes more to the total capacity loss than the cycling capacity loss. The battery degradation can largely increase the energy consumption and GHG emissions of EV per km driven. These findings from this study can be useful in supporting strategy planning and policy making on sustainable EV deployment across the U.S. in future.
Electric vehicle battery degradation under actual operation
The lithium ion battery analyzed in this study is the lithium-manganese oxide (LMO)–graphite battery which is commonly used in EVs, such as Nissan Leaf and Chevrolet Volt. Based on current practice, the average battery cell voltage in this study is set at 3.7 V and each cell operates between 3.4 and 4.1 V. The battery pack consists of 192 battery cells and has an initial 24.15 kWh capacity with 76.7% accessible19. A forced convective air cooling condition is simulated (h = 25 W m−2 K−1, fitted from the experimental data of the forced convective air-cooling system20, 21) for the battery pack cooling in EV operation. To represent the fresh cell status on a new EV, the initial State of Charge of the battery LMO cathode and graphite anode are set at 0.99 and 0.01, respectively.
Here we developed a comprehensive battery degradation model for the LMO–graphite battery, integrating both the cycling and calendar capacity loss under average driving conditions for a battery EV in each US State for analyses of electricity consumption and GHG emissions (see Methods for details). We developed the cycling capacity loss model based on our previous multi-physics electrochemical model integrating the porous electrode theory, transport phenomena, SEI layer formation, and chemical/electrochemical kinetics22, 23. We calculated the calendar capacity loss using a modified Arrhenius-form empirical equation which was established from experimental data correlation and in our analysis is modified to correlate from the hourly timescale16, 17. The developed models are validated with actual data reported in literature24.
In this study, the battery cycling capacity loss and calendar capacity loss are first calculated separately for the EV under the average driving conditions in each US state, using a monthly–hourly timescale of ambient temperature and separated travel demands for local and highway driving conditions, respectively. The calculated cycling capacity loss and calendar capacity loss are then combined to obtain the annual capacity loss in each state. In the calculation, the driving factors for battery degradation include the annual charging/discharging cycle number which is dependent on the annual travel demand and the driving range of EVs, variations of discharging rates relative to the power outputs required from the battery pack under different driving speeds of EV, as well as the varying temperatures to which the battery is exposed all year round which affects the battery internal kinetics and battery efficiency significantly.
The monthly–hourly travel demands of vehicles in the U.S. are calculated based on the monthly traffic volume data of all registered vehicles in each state, the statistical hourly travel frequency of all surveyed vehicles in a day, and the driving pattern on the percentage of highway vs. local driving in each state (Supplementary Data 3–4). As current EVs are not able to cover the same travel demand as conventional vehicles, in this analysis the travel demand of EVs in each state is proportioned based on the ratio of the vehicle-travel-miles within the driving range of the EVs to the total vehicle-travel-miles. The initial ratio is determined at 71.6% in Alaska to 76.8% in Hawaii, falling between 70.7% and 75.1% in the second year and thereafter corresponding to the battery capacity fading (Supplementary Data 2). The value of the ratio is obtained based on the statistical data compiled by US Federal Highway Administration (FHA)25.
The driving ranges of a mid-sized EV with a 24 kWh LMO–graphite battery, as reported for the 2013 Nissan Leaf in actual driving in the U.S., range between 64 and 193 km under different driving patterns and temperatures (Fig. 1b), which significantly affect the battery cycling capacity loss and the associated GHG emissions during EV driving, given the largely different GHG emission factors (CO2,eq kWh−1) across the U.S. Figure 1 shows the state-level annual travel demand (Fig. 1a), the temperature-dependent driving range of the EV (Fig. 1b), and the monthly–hourly temperature (Fig. 1c) of each US state.
Average state-level operating conditions and initial driving ranges for electric vehicles in the US. a Annual travel demand of each state ranging from 9399 km in Alaska to 29,871 km in Mississippi25, drawn with Plotly46. b Initial driving ranges of Nissan Leaf under different ambient temperatures and driving patterns30, 31; Rlocal and Rhighway are the driving range of electric vehicle under local and highway driving conditions, respectively, the fitted driving range equations are shown in Eqs. (3) and (4) in Methods section. c Monthly–hourly average temperature in each US state during 1981–2010 period, ranging between −15 and 35 °C39
To calculate the cycling capacity loss, the EVs are assumed to cover all those travel demands within the EVs' actual driving range in each state, based on the statistically compiled vehicle-travel-miles of functional systems in each state25. In this analysis, the battery charging/discharging cycle numbers are calculated separately for local and highway driving first, and are then combined based on the driving pattern of the EV in each state of U.S. The cycle numbers are calculated using the proportioned local and highway travel demands of EVs divided by their corresponding driving ranges under the specific ambient temperature in each state.
The EV driving ranges under the monthly–hourly ambient temperatures are modeled and calculated based on the measured data by Argonne National Lab for highway and local driving of 2013 Nissan Leaf with a 24 kWh LMO–graphite battery pack (Fig. 1b)26. The EV driving range at 22 °C is 178 and 149 km for local and highway driving, respectively, which can drop by 57% and 40% when ambient temperature goes down to −18 °C, and can drop by 27% and 10% when the temperature goes up to 35 °C26. Based on the actual practice, the battery charging current density is set at 0.25 C in this analysis. The real-time discharging rates of the battery pack are determined from the required power outputs of Nissan Leaf, as measured by Argonne National Lab at various driving speeds26 and correlated to the typical highway fuel economy test (HWFET) for highway driving and the urban dynamometer driving schedule (UDDS) for local city driving27 under different ambient temperatures.
As calculated, the annual cycling capacity losses are between 0.4% in Hawaii and 1.2% in Mississippi in first year, and slightly going up to a range between 0.7% and 1.9% thereafter (Supplementary Data 15). The annual calendar capacity losses for the battery are calculated based on the monthly–hourly temperature in each state and the aging time during the EV battery life, ranging between 4.4% in Alaska and 9.6% in Hawaii in first year, and falling down to a range between 1.0% and 2.2% thereafter (Supplementary Data 16). The cycling capacity loss is dictated by the annual travel demand, with Alaska the smallest at 9399 km and Mississippi the largest at 29,871 km, while the calendar capacity loss is mainly governed by the ambient temperature, with Alaska the lowest average at −2.7 °C and Hawaii the highest average at 24 °C all year round.
The annual battery capacity losses, combining both cycling and calendar capacity loss each year under the actual driving conditions for each state during the first 5 years, are presented in Fig. 2. The battery capacity losses of EVs in each state are different from year to year. The total capacity loss in first year is larger than those in the following years mainly because of the exponential nature of calendar capacity loss resulting from the formation of SEI layer and the reduction of cyclable lithium ion concentrations in the first year of battery operation16,17,18. As calculated, the 1st year total capacity loss of the battery is between 4.9% in Alaska and 10.1% in Hawaii. From the 2nd year, the cycling capacity loss takes an increasing share in the total capacity loss because the calendar capacity loss is decreasing while the cycling capacity loss is relatively stable. As the EV battery degradation limit is currently agreed upon 30%28, 29, the EV battery life is calculated ranging between 5.2 years in Florida and 13.3 years in Alaska under current EV driving conditions in each state (Supplementary Figure 3). One battery replacement will be needed for the EV operation in most states, except Alaska and Montana in which a single battery pack can power the EV during the designed 10-year service life.
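As an illustration of how the reported battery lives follow from the annual loss figures, the following minimal Python sketch (not part of the original analysis; the loss profile below is a hypothetical placeholder) accumulates annual capacity losses until the 30% replacement limit is reached, interpolating within the final year:

```python
# Minimal sketch: battery life as the time for cumulative capacity loss to reach
# the 30% replacement limit. The annual loss profile used here is hypothetical;
# the study derives state-specific losses from the degradation model.
def battery_life_years(annual_losses_pct, limit_pct=30.0):
    total = 0.0
    for year, loss in enumerate(annual_losses_pct, start=1):
        if total + loss >= limit_pct:
            # interpolate within the year in which the limit is crossed
            return (year - 1) + (limit_pct - total) / loss
        total += loss
    return float(len(annual_losses_pct))  # limit not reached within the horizon

# e.g., a warm, high-mileage state: a large, calendar-dominated first-year loss,
# then a roughly stable annual loss
print(battery_life_years([10.0] + [4.0] * 19))  # 6.0 years under these assumed losses
```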
Electric vehicle battery degradation and capacity loss in each state under actual operating conditions. a Top five states, bottom five states and the US average of battery cycling number needed for highway driving in first year to meet the annual travel demand. b Top five states, bottom five states and the US average of battery cycling number needed for local driving in first year to meet the annual travel demand. c Top five and bottom five states for battery annual cycling capacity loss throughout battery life under actual driving conditions, ranging between 0.4% in Hawaii and 1.6% in Mississippi. d Top five and bottom five states for battery annual calendar capacity loss during the battery life under each state's ambient temperature and aging time, ranging between 9.6% in Hawaii and 1% in Alaska. e Annual and total battery capacity loss during 5-years operation in each US state under average driving conditions, ranging between 15% in Alaska and 28.7% in Florida. Error bars show the ranges of the battery capacity loss under varying ambient temperatures, travel demands and driving patterns in each state. AL: Alabama, AK: Alaska, AZ: Arizona, AR: Arkansas, CA: California, CO: Colorado, CT: Connecticut, DE: Delaware, FL: Florida, GA: Georgia, HI: Hawaii, ID: Idaho, IL: Illinois, IN: Indiana, IA: Iowa, KS: Kansas, KY: Kentucky, LA: Louisiana, ME: Maine, MD: Maryland, MA: Massachusetts, MI: Michigan, MN: Minnesota, MS: Mississippi, MO: Missouri, MT: Montana, NE: Nebraska, NV: Nevada, NH: New Hampshire, NJ: New Jersey, NM: New Mexico, NY: New York, NC: North Carolina, ND: North Dakota, OH: Ohio, OK: Oklahoma, OR: Oregon, PA: Pennsylvania, RI: Rhode Island, SC: South Carolina, SD: South Dakota, TN: Tennessee, TX: Texas, UT: Utah, VT: Vermont, VA: Virginia, WA: Washington, WV: West Virginia, WI: Wisconsin, WY: Wyoming
To validate the developed models and results, the calculated capacity loss values are benchmarked with the measured data on Nissan Leaf for both the calendar capacity loss and total capacity loss, respectively. As shown in Supplementary Table 3, our calculated results for the battery calendar loss after 5 years match the published battery calendar loss very well in Minneapolis, Houston and Phoenix, as reported by National Renewable Energy Laboratory30, only with 0.9–1.4% difference. The total capacity loss data as calculated in our study is validated with the actually collected "Plug in America Survey Data" on Nissan Leaf operating under three average high temperatures24 (Supplementary Figure 2). The actually reported Nissan Leaf capacity loss data and our calculated total capacity loss values match reasonably well, with the maximum deviations only between 2.9–6.2%, which could be attributed to the differences of battery performance between the averaged and actual operating conditions of the EV, including travel demand, ambient temperature, driving pattern, and travel frequency.
Energy consumption and GHG emissions
The battery capacity loss determines battery life and correspondingly affects the energy consumption of battery pack during electric vehicle driving, which dictates the amount of GHG emissions with state-level variations. In this study, the amount of EV energy consumption is calculated as the amount of electricity drawn from the wall charger for powering the EV to meet the annual travel demand under the average driving conditions in each US state based on the driving range model established on the basis of experimental data from Argonne National Lab26 and Fleetcarma31 (Fig. 1b). The amount of electricity consumption is calculated with Eq. (7), which considers: actual amount of energy stored in the battery after the annual capacity loss, energy loss on the battery resistance during charging and discharging process, and energy loss from the charger & EV Supply Equipment (EVSE). In this analysis, the charger and EVSE efficiency is set at 85.3% based on Argonne testing data19. The battery charging and discharging efficiency, resulting from the battery degradation due to the increasing battery resistance32, is calculated using the battery resistance models published in ref. 33. The calculated initial charging–discharging efficiency of the EV battery is 98% which decreases at different rates annually in different states. The charging–discharging efficiency drops to 80% in Hawaii in 5th year and 79% in Maine in 9th year (Fig. 3a, Supplementary Data 8).
Energy consumption and greenhouse gas emissions from a mid-size electric vehicle battery. a Top five and bottom five states on the decreasing rate of battery charging–discharging efficiency, ranging from 98 to 77% during the battery life. b Top five and bottom five states on the increasing rate of unit energy consumption, ranging from 100 to 127% upon 30% capacity loss. c Top five and bottom five states on unit energy consumption of local driving, ranging between 140 Wh km−1 in Hawaii and 207 Wh km−1 in Alaska. d Top five and bottom five states on energy consumption per km highway driving, ranging between 154 Wh km−1 in Hawaii and 205 Wh km−1 in Alaska. e Unit GHG emissions per km driven and annual total GHG emissions from EV operations in each state, with the unit GHG emissions ranging between 0.6 g CO2,eq km−1 in Vermont and 167 g CO2,eq km−1 in Wyoming, and the annual total GHG emissions ranging between 8.5 kg in Vermont and 2570.9 kg in Indiana. AL: Alabama, AK: Alaska, AZ: Arizona, AR: Arkansas, CA: California, CO: Colorado, CT: Connecticut, DE: Delaware, FL: Florida, GA: Georgia, HI: Hawaii, ID: Idaho, IL: Illinois, IN: Indiana, IA: Iowa, KS: Kansas, KY: Kentucky, LA: Louisiana, ME: Maine, MD: Maryland, MA: Massachusetts, MI: Michigan, MN: Minnesota, MS: Mississippi, MO: Missouri, MT: Montana, NE: Nebraska, NV: Nevada, NH: New Hampshire, NJ: New Jersey, NM: New Mexico, NY: New York, NC: North Carolina, ND: North Dakota, OH: Ohio, OK: Oklahoma, OR: Oregon, PA: Pennsylvania, RI: Rhode Island, SC: South Carolina, SD: South Dakota, TN: Tennessee, TX: Texas, UT: Utah, VT: Vermont, VA: Virginia, WA: Washington, WV: West Virginia, WI: Wisconsin, WY: Wyoming
The battery degradation affects the energy consumption and GHG emissions from EV operations significantly. The unit energy consumption is different from state to state because of the different driving conditions, and is increasing from year to year in each state due to the battery degradation (Fig. 3b). In this analysis, the unit energy consumptions of the EV during highway and local driving are separately calculated (Fig. 3c, d), and then combined based on the driving pattern of the vehicle in each state. As calculated, the initial energy consumption of the EV operation ranges between 120.3 Wh km−1 in Hawaii and 176.5 Wh km−1 in Alaska, corresponding to 80.7 and 87.2 g km−1 CO2,eq emissions, based on the GHG emission factors determined by the electricity fuel mix and the imports of electricity in each state using the model from ref. 34. At 30% capacity loss, the energy consumption increases to 150.2 Wh km−1 in Hawaii and 214.8 Wh km−1 in Alaska, corresponding to 100.8 and 106.2 g km−1 CO2,eq emissions. In general, the energy consumption and GHG emissions from EV operations in the U.S. increase by 11.5–16.2% at the recommended 30% battery degradation limit (Supplementary Data 18 and 19). If the EV continues to operate after 30% capacity loss, the energy consumption and GHG emissions will increase substantially, for instance, by 28% in Mississippi after 10 years of driving.
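For illustration only, the conversion from per-km electricity consumption to per-km and annual GHG emissions can be sketched as follows in Python; the consumption, emission factor, and travel demand values below are hypothetical placeholders, not the state-specific values used in this study:

```python
# Minimal sketch: unit and annual GHG emissions from EV operation, given the
# wall-to-wheel electricity consumption and a state's grid emission factor.
def unit_ghg_g_per_km(wh_per_km, ef_g_per_kwh):
    return wh_per_km / 1000.0 * ef_g_per_kwh  # g CO2-eq per km

wh_per_km = 150.0      # hypothetical consumption after some degradation (Wh per km)
ef_g_per_kwh = 600.0   # hypothetical grid emission factor (g CO2-eq per kWh)
annual_km = 18000.0    # hypothetical annual EV travel demand (km)

g_per_km = unit_ghg_g_per_km(wh_per_km, ef_g_per_kwh)
kg_per_year = g_per_km * annual_km / 1000.0
print(g_per_km, kg_per_year)  # 90.0 g CO2-eq per km, 1620.0 kg CO2-eq per year
```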
To support strategy planning and policy making on sustainable deployment of EVs, the averaged unit GHG emissions over a single battery life (within 30% capacity loss) and the annual total GHG emissions from the EV driven in each US state are provided in Fig. 3e. On average, the unit GHG emissions from the EV range from 0.6 g km−1 in Vermont to 167.1 g km−1 in Wyoming, while the annual total GHG emissions are between 8.5 kg in Vermont and 2570.9 kg in Indiana.
In this paper, we report a comprehensive analytical approach for determining battery degradation and its effects on energy consumption and GHG emissions from a mid-size battery EV under the average driving conditions in each state of U.S., using a novel battery degradation model validated with measured data on a 24 kWh LMO–graphite battery pack, to support strategy planning and policy making for sustainable EV deployment in the U.S. It is found that the battery life in each state is quite different under current EV driving conditions, ranging from 5.2 years in Florida to 13.3 years in Alaska. The annual battery degradation of EVs is mainly dependent on the annual travel demand and the ambient high temperature the battery is exposed to. In general, those states with a high annual travel demand above 18,000 km and a high ambient temperature above 28 °C in summer have more severe capacity losses. The temperature-induced calendar loss is dominating the battery degradation, particularly in first year.
The battery degradation causes gradual increasing of battery internal resistance and decreasing of battery charging/discharging efficiency, which results in increasing of unit energy consumption and GHG emissions during EV operations. The energy consumption and GHG emissions can be increased by 11.5–16.2% at the recommended 30% degradation limit for battery replacement, and up to 28% after 10 years driving in the U.S. As EVs are widely promoted as clean alternatives to replace conventional vehicles to reduce GHG emissions from ground transportation sector, the increasing of energy consumption and GHG emissions from battery degradation needs to be considered in the strategy planning and policy making on EV incentives and promotions. Those states with large GHG emission reductions should be provided with enhanced incentives for promoting more EV deployment, while those states with small or no GHG emission reductions should be provided with less or no incentives for EV deployment. Besides, the battery degradation will also lead to required battery replacement, which will add 88.9 GJ equivalent of energy and 5760 kg CO2,eq GHG emissions (Supplementary Figures 4, 5) based on a cradle-to-gate analysis of a 24 kWh LMO–graphite battery pack35,36,37.
A sensitivity analysis is performed on the following four factors: travel demand, electricity fuel mix, battery degradation limit for replacement, and battery capacity accessible ratio (Supplementary Figure 6). The sensitivity analysis reveals that unit GHG emissions per battery are insensitive to annual travel demand and battery capacity accessible ratio. The increasing of annual travel demand from 80 to 120% and battery capacity accessible ratio from 60 to 80% cause only 1% fluctuations in all the states. The increase of battery accessible ratio from 60 to 80% can change the GHG emissions from 97 to 103% of the baseline scenario. On the other hand, reducing the electricity GHG emission factors by adopting clean electricity generation technologies could decrease the unit GHG emissions proportionally.
These results provide fundamental insights how the battery degradation affects the energy consumption and GHG emissions from electric vehicles in different states, adding to the regularities that have previously been identified that the energy use and environmental performance of electric vehicles have significantly regional variability. The modeling approach and results in this paper could be applied by the EV battery designers to evaluate and improve the EV battery performance under different operation conditions by optimizing such battery parameters as total battery capacity, accessible ratio of capacity, rate and depth of charge and discharge, etc. Furthermore, the EV battery manufacturers can apply vehicle-specific strategies and technologies to extend the battery life and improve the vehicle performance. For instance, the battery life can be largely extended if a temperature control system can be applied during the non-operating period of EV, particularly where the ambient temperature is above 28 °C, in such states as Hawaii and Florida. Moreover, the study can enhance the technical services of EV battery manufacturers in such aspects as optimizing the scheduling of battery replacement, inventory planning and control of the battery supply, by providing the accurate degradation data from actual EV operations. From policy perspective, this study provides an accurate modeling approach and results on the battery life, energy consumption and GHG emissions from EVs in each state of U.S., which can be directly used in the US national statistics of energy consumption and GHG emissions from transportation sector. These data and results could be used to support policy making in the electric vehicle incentives to reduce the energy consumption and GHG emissions from the transportation sector more efficiently with specific electric vehicle technologies and the varying state-level operation conditions.
It must be noted that this study is limited to an EV with a 24 kWh LMO–graphite battery pack. Different sizes and chemistries of the battery pack may affect the final results of the study, which could be investigated in future using a modified version of this modeling approach. Also, this study is conducted based on the average state-level data of U.S. Although the uncertainty analysis and the sensitivity analysis investigated the viability of this study to some degree, the possible impacts due to extreme conditions could be significant and needs to be investigated in more details in future. Besides, some assumptions made in this study may also affect the final results. In this study, the driving pattern of the EV on the highway and local operations are modeled based on the US EPA's HWFET and UDDS driving data. As the driving speeds dictate the discharging profile of battery pack during EV operations, a fast-changing driving pattern could induce more degradations in the battery pack and cause more energy consumption and GHG emissions from the EV on a unit driving distance. The GHG emission factor of marginal electricity mix can also vary at different charging time which needs to be taken into accounts in future studies. The advancement of battery and vehicle technologies may affect the results of the study as well. The increasing of energy density and power density of the battery pack will reduce the energy consumption and GHG emissions from the EV on a unit driving distance. A more-efficient electrified powertrain system will also reduce the unit energy consumption and GHG emissions from the EV. However, fast-charging technologies could induce extra degradation in the battery pack which will increase the unit energy consumption and GHG emissions from the EV if being used on a regular basis.
State-level travel demand of EV in the US
The annual travel demand is the mileage traveled per vehicle in a year. To obtain the EV annual travel demand, we proportioned the traditional vehicle annual travel distance and driving patterns in each state to simulate EV driving in the U.S. These data can be referred to the Highway Statistics Series published by FHA25 and have been listed in Supplementary Data 1. In this study, our research target is mid-size EVs, thus we introduce an EV travel demand ratio (\(n_r\)) to cover those travels within the driving range of EV battery pack: \(n_r = m_r/M_r\), where \(m_r\) stands for the sum of mileages for all trips within the EV driving range (per charge), which is calculated annually by summarizing the covered one-way trips within the actual EV driving range, with battery degradation and ambient temperature effects considered; \(M_r\) is the sum of the mileages of all trips traveled by conventional vehicle. It should be noted here that EV travel demand ratio may also be affected by the fast charging capacity of EV batteries, for instance, the Tesla's super charger technologies, which could extend the actual driving distance of EV during its service life. The travel demand and driving pattern data in the US are the latest National household travel survey (NHTS) data25, 38. The calculated detailed \(n_r\) of each state is listed in Supplementary Data 2. Based on the state-level temperature variation and driving condition change3, the monthly average travel demand of EV,\(D_{m,r}\), is calculated by
$$D_{m,r} = \frac{{V_{m,r}}}{{n_v}} \times n_r$$
where \(V_{m,r}\) is the monthly average travel volume (km), \(n_v\) is the number of registered vehicles, and \(n_r\) is the EV travel demand ratio, subscript r represents driving pattern (local vs. highway), subscript m represents month.
Travel frequency of EVs in the US
The daily travel frequency of vehicles is based on data reported by the NHTS38. We employ the statistical travel time and duration data in the US and divide them by the total vehicle traveling time to obtain the hourly travel frequency, as illustrated in Supplementary Figure 1. Using this 24 h travel frequency (\(f_h\)) of US vehicles, the monthly hourly travel demand (\(D_{m,h,r}\), where subscript h stands for hour) of EVs is obtained by
$$D_{m,h,r} = \left[ {\begin{array}{*{20}{c}} {\frac{{n_rV_{1,r}}}{{n_{\mathrm v}}}} \\ \vdots \\ {\frac{{n_rV_{m,r}}}{{n_{\mathrm v}}}} \\ \vdots \\ {\frac{{n_rV_{12,r}}}{{n_{\mathrm v}}}} \end{array}} \right]\left[ {\begin{array}{*{20}{c}} {f_1} & \cdots & {f_h} & \cdots & {f_{24}} \end{array}} \right]$$
The obtained travel demand of EV is provided in Supplementary Data 3 and 4.
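As an illustrative sketch only (the arrays below are placeholders, not the state-level values from the FHA/NHTS data or the Supplementary Data), the monthly hourly travel demand can be assembled as the outer product of the monthly per-vehicle EV demand and the 24 h travel-frequency distribution:

```python
import numpy as np

# Placeholder inputs, for illustration only
n_r = 0.85                         # EV travel demand ratio (dimensionless)
n_v = 1.0e6                        # number of registered vehicles
V_m = np.full(12, 1.2e9)           # monthly travel volume (km), one driving pattern
f_h = np.full(24, 1.0 / 24.0)      # hourly travel frequency, sums to 1

# D[m, h] = (n_r * V_m[m] / n_v) * f_h[h], in km per vehicle
D_mh = np.outer(n_r * V_m / n_v, f_h)
print(D_mh.shape, D_mh.sum())      # (12, 24) and the annual per-vehicle EV demand
```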
EV driving range and energy consumption in the US
The driving range of EVs in the US is largely dependent on the EV driving conditions. In this study, the actual testing data for the Nissan Leaf from Argonne National Lab26 (as shown in Fig. 1b) are fitted to calculate the EV driving range under local and highway driving at various temperatures; the fits match well with the actual driving range data of 2013 and 2014 Nissan Leaf models collected by FleetCarma31 under best and average conditions.
$$\begin{array}{*{20}{c}} {R_{\rm{local}}} & = & { - 1.1826 \times 10^{ - 4} \times T^4 + 3.75428 \times 10^{ - 5} \times T^3} \\ {} & {} & { + 0.0870367 \times T^2 + 2.83858 \times T + 111.542} \end{array}$$
$$\begin{array}{*{20}{c}} {R_{\rm{highway}}} & = & { - 1.68942 \times 10^{ - 5} \times T^4 - 4.50513 \times 10^{ - 4} \times T^3} \\ {} & {} & { - 0.0330376 \times T^2 + 1.95879 \times T + 116.135} \end{array}$$
where \(R_{\rm{local}}\) and \(R_{\rm{highway}}\) are the driving ranges of the Nissan Leaf under local and highway driving conditions, respectively, and T is the temperature (°C).
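For readers who want to reproduce the temperature dependence, the two fitted polynomials can be evaluated directly. The short sketch below simply codes the coefficients given above; the spot-check temperatures are arbitrary and the range values are in the units of Fig. 1b of the source data:

```python
def r_local(T):
    """Fitted driving range under local driving; T in degrees C."""
    return (-1.1826e-4 * T**4 + 3.75428e-5 * T**3
            + 0.0870367 * T**2 + 2.83858 * T + 111.542)

def r_highway(T):
    """Fitted driving range under highway driving; T in degrees C."""
    return (-1.68942e-5 * T**4 - 4.50513e-4 * T**3
            - 0.0330376 * T**2 + 1.95879 * T + 116.135)

for T in (-10, 0, 20, 35):
    print(T, round(r_local(T), 1), round(r_highway(T), 1))
```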
The state-level monthly hourly EV charge–discharge cycles (listed in Supplementary Data 5 and 6) then are calculated using the National Oceanic and Atmospheric Administration (NOAA) data on the US monthly hourly local temperature distribution39:
$$C_{m,h,r} = \frac{{D_{m,h,r}}}{{R_r(T)}},\,T = \left[ {\begin{array}{*{20}{c}} {T_{1,1}} & \cdots & {T_{1,12}} \\ \vdots & {} & \vdots \\ {T_{24,1}} & \cdots & {T_{24,12}} \end{array}} \right]$$
where \(R_r(T)\) is the temperature-dependent EV driving range, which accounts for the different load conditions required by EV sub-systems (e.g., HVAC, radio, etc.) and vehicle internal losses (e.g., changes in battery and transmission efficiency caused by temperature), with \(R_r(T) = R_{\rm{local}}(T)\ {\rm{or }}\ R_{\rm{highway}}(T)\) as given in Fig. 1b, and \(T\) is the monthly hourly temperature (°C). The low driving range of the EV at low temperatures is mainly due to heater use, which is improving over time.
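Continuing the sketch from the previous snippets (all values remain placeholders), the monthly hourly cycle counts are obtained by dividing the demand matrix by the temperature-dependent range evaluated on an hourly-by-monthly temperature matrix:

```python
T_hm = np.full((24, 12), 15.0)    # placeholder 24 x 12 temperature matrix (deg C)
R_hm = r_local(T_hm)              # temperature-dependent range, local driving
C_mh = D_mh / R_hm.T              # cycles needed in month m, hour h
print(C_mh.sum())                 # annual charge-discharge cycles for this driving pattern
```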
The annual EV charge–discharge cycle numbers (\(C\)) and energy consumption (\(E_t\)) are calculated by
$$C = \mathop {\sum}\nolimits_{m = 1}^{12} {\mathop {\sum}\nolimits_{h = 1}^{24} {\mathop {\sum}\nolimits_r {C_{m,h,r}} } } ,r = {\rm{local}},\,{\rm{highway}}$$
$$E_t = \frac{{C \times E_c}}{{\zeta \times \beta }}$$
where \(E_c\) is the energy consumption per charge19 (kWh), \(\zeta\) is the charger and EVSE efficiency19, and \(\beta\) is the battery charging-discharging efficiency due to the increase of battery resistance32, 33 as calculated by
$$\beta = \left( \frac{3}{2} - \frac{1}{2}\sqrt{1 + \frac{4R_{\mathrm{in},c}P}{\varphi_{\rm{ocv}}^2}} \right) \times \left( \frac{1}{2} + \frac{1}{2}\sqrt{1 - \frac{4R_{\mathrm{in},d}P}{\varphi_{\rm{ocv}}^2}} \right)$$
where \(R_{\mathrm{in},c}\) and \(R_{\mathrm{in},d}\) are the charging and discharging internal resistances, P is the battery power, and \(\varphi _{\rm{ocv}}\) is the open circuit voltage. All these calculated data are provided in the Supplementary Information (Supplementary Data 7–9). Supplementary Data 10 lists the annual energy consumption of EVs in each state during the battery life period.
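The charging–discharging efficiency and the annual energy consumption can then be computed as sketched below. All electrical parameters here are invented placeholders, not the measured values from refs. 19, 32, 33, and the function name is ours:

```python
import math

def charge_discharge_efficiency(R_in_c, R_in_d, P, V_ocv):
    """Round-trip efficiency beta from charging/discharging internal resistance."""
    charge = 1.5 - 0.5 * math.sqrt(1.0 + 4.0 * R_in_c * P / V_ocv**2)
    discharge = 0.5 + 0.5 * math.sqrt(1.0 - 4.0 * R_in_d * P / V_ocv**2)
    return charge * discharge

beta = charge_discharge_efficiency(R_in_c=0.08, R_in_d=0.10, P=3.0e4, V_ocv=360.0)

E_c = 21.0        # energy consumption per charge (kWh), placeholder
zeta = 0.90       # charger and EVSE efficiency, placeholder
C = 250.0         # annual charge-discharge cycles, placeholder
E_t = C * E_c / (zeta * beta)     # annual energy consumption (kWh)
print(round(beta, 3), round(E_t, 1))
```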
EV operating temperature in the US
In this study, the state-level ambient temperature is the average monthly hourly temperature data from NOAA's report, collected from 1981 to 2010 in the U.S.39. The detailed temperature data are summarized and listed in Supplementary Data 11–14.
EV battery life model
The lithium ion batteries on board of EVs undergo both cycling capacity loss40 and calendar capacity loss17. In order to precisely calculate the battery life in each state of the US, a comprehensive battery capacity loss model is developed as shown below:
(1) Cycling capacity loss: Cycling capacity loss takes place during the EV charge–discharge cycles. In a LMO–graphite battery, the cycling capacity loss is mainly induced by SEI film growth, electrolyte decomposition and active material loss. Based on the battery structure (two working electrodes and a separator layer with electrolyte), a pseudo-two-dimensional battery capacity fading model was developed and published in our previous work, in which the charge transport process in the battery is formulated by the classical Butler–Volmer equation23, 41
$$i_F = i_j^0FS_j\left\{ {\exp (\frac{{z\alpha F}}{{RT}}\eta _j) - \exp ( - \frac{{z\alpha F}}{{RT}}\eta _j)} \right\},\,j = {\rm{neg,pos}}$$
where \(i_j^0\) is the exchange current density, \(F\) is Faraday's constant, \(S_j\) is the specific active interfacial area, \(z\) is the transferred electron number, \(\alpha\) is the charge transfer coefficient, \(R\) is the ideal gas constant, \(T\) is the ambient temperature, \(\eta _j\) is the overpotential42, neg,pos represent negative carbon electrode and positive LMO electrode respectively.
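As a minimal illustration of the Butler–Volmer expression above (the parameter values are invented placeholders, not those of the modeled LMO–graphite cell):

```python
import math

F = 96485.0    # Faraday constant (C/mol)
R = 8.314      # ideal gas constant (J/(mol K))

def faradaic_current(i0, S, eta, T=298.15, z=1.0, alpha=0.5):
    """Butler-Volmer faradaic current for exchange current density i0 and interfacial area S."""
    x = z * alpha * F * eta / (R * T)
    return i0 * F * S * (math.exp(x) - math.exp(-x))

# Overpotential sweep with placeholder i0 and S
for eta in (0.005, 0.01, 0.02):
    print(eta, faradaic_current(i0=2e-6, S=1.0e5, eta=eta))
```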
Considering the impacts of side reactions, the total current transferred can be expressed by
$$i_j^{\rm{tot}} = i_F + i_j^s,\,j = {\rm{neg}},\,{\rm {pos}}$$
where \(i_j^s\) is the side reaction current, \(i_j^s = - i_s^0\exp (z\alpha F\eta _s/RT)\). Meanwhile, the resistance rise induced by excessive SEI growth in the negative carbon electrode can be formulated as23
$$R_f = R_{f,{\rm{ini}}} + R_f(t),\,R_f(t) = L(t){\mathrm{/}}\kappa _p,\,\frac{{\partial L(t)}}{{\partial t}} = - \frac{{i_{\rm{neg}}^sM_p}}{{S_{\rm{neg}}\rho _pF}}$$
where \(R_f\) is the total SEI film resistance, \(t\) is the time, \(R_{f,{\rm{ini}}}\) is the resistance of the initially formed SEI layer, \(R_f(t)\) is the film resistance produced during cycling, \(L(t)\) is the SEI film thickness, \(\kappa _p\) is the film conductivity, and \(M_p\) and \(\rho _p\) are the SEI molecular weight and density, respectively. In the positive electrode, side reactions can also result in severe active material loss, where the change in the volume fraction of the solid phase (LMO) and the variation of the specific area can be formulated by42
$$\frac{{\partial \phi _{\rm{pos}}}}{{\partial t}} = - r_{\mathrm e}S_{\rm{pos}}V_0,\,S_{\rm{pos}} = \frac{{3\phi _{\rm{pos}}}}{{R_{p,\rm{pos}}}}$$
where \(\phi _{\rm{pos}}\) is the volume fraction of active LMO, \(S_{\rm{pos}}\) is the specific area in the positive electrode, \(V_0\) is the molar volume of LMO, \(R_{p,\rm{pos}}\) is the radius of the spherical electrode particles, and \(r_{\mathrm e}\) is the kinetic rate of battery electrolyte decomposition, \(r_{\mathrm e} = k_{\mathrm e}c_{\rm{H}_{2}O}^2c_{\rm{Li}^ + }\)43.
In addition to these side reactions, the LMO/carbon battery also includes complicated transport processes. According to ref. 44, the charge transport, mass transfer, and energy transport processes in the battery can be formulated by
$$\begin{array}{*{20}{c}} {\rm{Charge}} & : & {\nabla \bullet \left( { - \sigma _j^{\rm{eff}}\nabla \phi _{1,j}} \right) = i_j^{\rm{tot}}{\mathrm{,}}} \\ {} & {} & {\nabla \bullet \left( { - \kappa _j^{\rm{eff}}\nabla \phi _{2,j}} \right) + \frac{{2RT\left( {1 - t_ + ^0} \right)}}{F}\nabla \left( { - \kappa _j^{\rm{eff}}\nabla (\ln c_j)} \right) = i_j^{\rm{tot}}} \end{array}$$
$$\begin{array}{*{20}{c}} {\rm{Mass}} & : & {\frac{{\partial c_j^{}}}{{\partial t}} = D_j^s\,\frac{1}{{r^2}}\,\frac{\partial }{{\partial r}}\left( {r^2\frac{{\partial c_j^{}}}{{\partial r}}} \right){\mathrm{,}}} \\ {} & {} & {\varepsilon _j\,\frac{{\partial c_j}}{{\partial t}} = \frac{\partial }{{\partial x}}\left( {D_j^{\rm{eff}}\frac{{\partial c_j}}{{\partial x}}} \right) + \frac{{\left( {1 - t_ + ^0} \right)i_j^{\rm{tot}}}}{F}} \end{array}$$
$${\rm{Energy}}:\rho c_p\,\frac{{\partial T}}{{\partial t}} + \nabla \bullet \left( { - \lambda \nabla T} \right) = Q_i,\,Q_i = Q_{\rm{rxn}} + Q_{\rm{rev}} + Q_{\rm{ohm}}$$
where \(\sigma _j^{\rm{eff}}\) and\(\kappa _j^{\rm{eff}}\) are the effective conductivities in solid phase and liquid phase respectively, \(\phi _{1,j}\), \(\phi _{2,j}\) are the electrode and electrolyte potentials respectively, \(t_ + ^0\)is the transference number of lithium-ion, \(c_j^{}\) is the lithium-ion concentration, \(D_j^s\) is its diffusion coefficient in solid materials, \(\varepsilon _j\) is the electrode porosity, \(D_j^{\rm{eff}}\) is the effective diffusion coefficient23, \(c_p\) is the specific heat capacity, \(\lambda\) is the heat conductivity, \(Q_i\) is the heat source term43, which is composed of total reaction heat generation \(Q_{\rm{rxn}}\), total reversible heat production \(Q_{\rm{rev}}\), and total Ohmic heat production \(Q_{\rm{ohm}}\). The supplementary formula and expressions can be referred to our previous study on LMO–graphite battery23.
The developed mathematical models are solved using the finite element package COMSOL Multiphysics and MATLAB software. Two model geometries are applied: a one-dimensional lithium ion battery model and a two-dimensional electrode solid-phase model. The two sub-models are coupled such that the concentration of lithium ions obtained in the 2D solid-phase model is projected onto the 1D battery model, while the mass flux from the 1D battery model is extracted to the 2D solid-phase model boundaries. The applied boundary conditions and the associated model parameters are summarized in Supplementary Tables 1 and 2.
The cycling capacity loss (\({\rm{CL}}_{a,{\rm{cyc}}}\)) then can be calculated by
$${\rm{CL}}_{a,{\rm{cyc}}} = \frac{{\mathop {\sum}\nolimits_{m = 1}^C {I(t_m - t_{m + 1})} }}{{I \times t_1}}$$
where \(C\) is the number of charge–discharge cycles needed by the EV battery in one year to meet the travel demand, \(I\) is the average charging current density, and \(t_m\) is the time needed to fully charge the EV battery in the \(m{\rm{th}}\) cycle.
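In code, the cycling-loss fraction above telescopes to the relative drop in full-charge time over the year; the sketch below uses a synthetic, slowly shrinking charge-time sequence purely to show the bookkeeping, not real degradation data:

```python
def cycling_capacity_loss(charge_times):
    """Capacity-loss fraction from the per-cycle full-charge times (t_1 first)."""
    t1 = charge_times[0]
    return sum(charge_times[m] - charge_times[m + 1]
               for m in range(len(charge_times) - 1)) / t1

# Synthetic example: charge time shrinks by 0.02% per cycle over 300 cycles
t = [3.0 * (1.0 - 2e-4) ** m for m in range(301)]
print(round(cycling_capacity_loss(t), 4))   # about 0.058, i.e. ~5.8% loss
```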
(2) Calendar capacity loss: Calendar capacity loss takes place during battery energy storage and is mainly caused by battery self-discharge and side reactions. According to ref. 17, the battery calendar capacity loss follows Arrhenius-form kinetics, and an empirical expression based on the experimental data is formulated as
$${\rm{Cl}}_{a,{\rm{cal}}} = 14,876 \times {\rm{exp}}\left( {\frac{{ - E_a}}{{RT}}} \right)\psi _d\left( {t_h} \right)^{0.5}$$
where \({\rm{Cl}}_{a,{\rm{cal}}}\) is the percentage of calendar capacity loss, \(E_a\) is the activation energy, \(E_a = 24.5\ {\rm{kJ}}\), \(R\) is the gas constant, \(\psi _d(x)\) is the time adjustment function, and \(t_h\) is the storage time in hours. Supplementary Note 1, Supplementary Tables 3–4, and Supplementary Figure 2 provide a detailed validation of the above battery capacity loss models.
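A sketch of the Arrhenius-type calendar-loss term follows. The prefactor and activation energy are those quoted above; the activation energy is assumed here to be on a molar basis so that it pairs with the molar gas constant, and the time-adjustment function \(\psi_d\) is replaced by a crude hours-to-days placeholder (its actual form is given in ref. 17), so the printed numbers only illustrate the functional shape, not realistic losses:

```python
import math

R_GAS = 8.314      # J/(mol K)
E_A = 24.5e3       # J/mol, assumed molar basis

def calendar_loss_percent(T_kelvin, hours, psi_d=lambda t_h: t_h / 24.0):
    """Percentage calendar capacity loss after `hours` of storage at temperature T (K)."""
    return 14876.0 * math.exp(-E_A / (R_GAS * T_kelvin)) * psi_d(hours) ** 0.5

# One year of storage at two ambient temperatures (illustrative only)
for T_c in (25.0, 35.0):
    print(T_c, round(calendar_loss_percent(T_c + 273.15, 365 * 24), 1))
```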
EV GHG emission
The EV GHG emissions from vehicle operation are calculated based on the energy consumption of EV operation as described in Eq. (7) above and the electricity GHG emission factor in each state of the US, as expressed below:
$$\left[ {\rm{GHG}}_{t,1} \ \cdots \ {\rm{GHG}}_{t,s} \ \cdots \ {\rm{GHG}}_{t,50} \right] = \left[ E_{t,1} \ \cdots \ E_{t,s} \ \cdots \ E_{t,50} \right] \circ \left[ U_{G,1} \ \cdots \ U_{G,s} \ \cdots \ U_{G,50} \right]$$
where GHGt,s is the GHG emissions from vehicle transport energy consumption, \(E_{t,s}\) is the transport energy consumption (kWh), subscript s stands for state, \(U_{G,s}\) is the unit electricity GHG emission factor (CO2,eq g km−1), which is determined from the electricity fuel mix data from the eGRID2012 report published in 201534 and the imports of electricity from other states45, as provided in Supplementary Data 17.
A sensitivity analysis is conducted to evaluate the viability and robustness of the results relative to changes in the important factors, including EV travel demand, electricity GHG emission factor, battery degradation limit for replacement, and capacity accessible ratio. The baseline scenario uses all the current data and results as reported in the paper. The evaluated factors are changed within a reasonable range of their baseline values, and the corresponding change of the unit GHG emissions (CO2,eq g km−1) with the change of each factor is quantified and benchmarked against the baseline scenario, as shown in Supplementary Figure 6a–d and Supplementary Note 2.
All data generated or analyzed during this study are included in this published article as Supplementary Data.
Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
United States Environmental Protection Agency. Inventory of US Greenhouse Gas Emissions and Sinks : 1990–2013. Report No. EPA 430-R-15-004 (EPA, 2015).
United States Environmental Protection Agency. EPA and NHTSA Set Standards to Reduce Greenhouse Gases and Improve Fuel Economy for Model Years 2017–2025 Cars and Light Trucks.Report No. EPA-420-F-12-051 (EPA, 2012).
American Recovery and Reinvestment Act (ARRA) of 2009. (Pub.L. 111-5) (U.S. Congress, 2009)
Hartman, K. State efforts promote hybrid and electric vehicles. National Conference of State Legislatures http://www.ncsl.org/research/energy/state-electric-vehicle-incentives-state-chart.aspx (2015).
Becker, T. A., Sidhu, I. & Tenderich, B. Electric Vehicles in the United States: A New Model with Forecasts to 2030. Technical Brief 2009.1. (Centre for Entrepreneurship & Technology, 2009).
Helmers, E., Dietz, J. & Hartard, S. Electric car life cycle assessment based on real-world mileage and the electric conversion scenario. Int. J. Life Cycle Assess. 10, 1007 (2015).
Offer, G. J. Automated vehicles and electrification of transport. Energy Environ. Sci. 8, 26–30 (2015).
Onat, N. C., Kucukvar, M. & Tatari, O. Conventional, hybrid, plug-in hybrid or electric vehicles? State-based comparative carbon and energy footprint analysis in the United States. Appl. Energy 150, 36–49 (2015).
Faria, R. et al. Impact of the electricity mix and use profile in the life-cycle assessment of electric vehicles. Renew. Sustain. Energy Rev. 24, 271–287 (2013).
Rangaraju, S., De Vroey, L., Messagie, M., Mertens, J. & Van Mierlo, J. Impacts of electricity mix, charging profile, and driving behavior on the emissions performance of battery electric vehicles: a Belgian case study. Appl. Energy 148, 496–505 (2015).
Tamayao, M.-A. M., Michalek, J. J., Hendrickson, C. & Azevedo, Is. M. Regional variability and uncertainty of electric vehicle life cycle CO2 emissions across the United States. Environ. Sci. Technol. 49, 8844–8855 (2015).
Kambly, K. & Bradley, T. H. Geographical and temporal differences in electric vehicle range due to cabin conditioning energy consumption. J. Power Sources 275, 468–475 (2015).
Yuksel, T. & Michalek, J. J. Effects of regional temperature on electric vehicle efficiency, range, and emissions in the United States. Environ. Sci. Technol. 49, 3974–3980 (2015).
Notter, D. et al. Contribution of Li-ion batteries to the environmental impact of electric vehicles. Environ. Sci. Technol. 44, 6550–6556 (2010).
Hawkins, T. R., Singh, B., Majeau-Bettez, G. & Strømman, A. H. Comparative environmental life cycle assessment of conventional and electric vehicles. J. Ind. Ecol. 17, 53–64 (2013).
Barré, A. et al. A review on lithium-ion battery ageing mechanisms and estimations for automotive applications. J. Power Sources 241, 680–689 (2013).
Wang, J. et al. Degradation of lithium ion batteries employing graphite negatives and nickel–cobalt–manganese oxide + spinel manganese oxide positives: Part 1, aging mechanisms and life estimation. J. Power Sources 269, 937–948 (2014).
Vetter, J. et al. Ageing mechanisms in lithium-ion batteries. J. Power Sources 147, 269–281 (2005).
Lohse-Busch, H., Duoba, M., Rask, E., Meyer, M. & APRF & Co. Advanced Powertrain Research Facility AVTA Nissan Leaf Testing and Analysis (U.S. Department of Energy, 2012).
Kim, G.-H. et al. Thermal management of batteries in advanced vehicles using phase-change materials. The World Electr. Veh. J. 2.2, 0134–0147 (2008).
Pesaran, A. A., Vlahinos, A. & Burch, D. Thermal Performance of EV and HEV Battery Modules and Packs (National Renewable Energy Laboratory, 1997).
Lin, X. et al. A comprehensive capacity fade model and analysis for li-ion batteries. J. Electrochem. Soc. 160, A1701–A1710 (2013).
Xie, Y., Li, J. & Yuan, C. Multiphysics modeling of lithium ion battery capacity fading process with solid-electrolyte interphase growth by elementary reaction kinetics. J. Power Sources 248, 172–179 (2014).
Saxton, T. Plug in America's LEAF battery survey. Plug In America http://www.pluginamerica.org/surveys/batteries/leaf/Leaf-Battery-Survey.pdf (2012).
Federal Highway Administration. Highway Statistics 2013 (Federal Highway Administration, Washington, DC, 2014).
Stutenberg, K. Advanced Technology Vehicle Lab Benchmarking - Level 1 (Argonne National Laboratory, 2014).
United States Environmental Protection Agency. Dynamometer drive schedules. EPA https://www.epa.gov/vehicle-and-fuel-emissions-testing/dynamometer-drive-schedules (2017).
Nissan Motor Company Ltd. Lithium-ion battery limited warranty. Nissan Leaf 2015. Nissan http://www.nissanusa.com/electric-cars/leaf/charging-range/battery/ (2015).
Chevrolet. GM Volt: battery. Chevrolet http://www.chevrolet.com/volt-electric-car.html (2015).
Keyser, M., Smith, K. & Pesaran, A. Battery Thermal Modeling and Testing (National Renewable Energy Laboratory, 2011).
Allen, M. Electric Range for the Nissan Leaf & Chevrolet Volt in Cold Weather (FleetCarma, 2013).
Popp, H., Attia, J., Delcorso, F. & Trifonova, A. Lifetime analysis of four different lithium ion batteries for (plug-in) electric vehicle. In: Transport Research Arena (TRA) 5th Conference: Transport Solutions from Research to Deployment (The National Academies of Sciences, Engineering & Medicine, 2014).
Krieger, E. M. & Arnold, C. B. Effects of undercharge and internal loss on the rate dependence of battery charge storage efficiency. J. Power Sources 210, 286–291 (2012).
United States Environmental Protection Agency. eGRID2012. EPA, http://www.epa.gov/energy/egrid (2015).
Yuan, C., Deng, Y., Li, T. & Yang, F. Manufacturing energy analysis of lithium ion battery pack for electric vehicles. CIRP Ann. 66, 53–56 (2017).
Ellingsen, L. A.-W. W. et al. Life cycle assessment of a lithium-ion battery vehicle pack. J. Ind. Ecol. 18, 113–124 (2013).
Kim, H. C. et al. Cradle-to-gate emissions from a commercial electric vehicle Li-ion battery: a comparative analysis. Environ. Sci. Technol. 50, 7715–7722 (2016).
Federal Highway Administration. National Household Travel Survey (NHTS) 2009 (U.S. Deptartment of Transportation, Washington, DC, 2011).
National Oceanic and Atmospheric Administration. NOAA's 1981–2010 U.S. Climate Normals. Report No. 0003-0007 (NOAA, 2012).
Ramadesigan, V. et al. Modeling and simulation of lithium-ion batteries from a systems engineering perspective. J. Electrochem. Soc. 159, R31–R45 (2012).
Xie, Y., Li, J. & Yuan, C. Mathematical modeling of the electrochemical impedance spectroscopy in lithium ion battery cycling. Electrochim. Acta 127, 266–275 (2014).
Cai, L. et al. Life modeling of a lithium ion cell with a spinel-based cathode. J. Power Sources 221, 191–200 (2013).
Dai, Y., Cai, L. & White, R. E. Capacity fade model for spinel LiMn2O4 electrode. J. Electrochem. Soc. 160, A182–A190 (2013).
Cai, L. & White, R. E. Mathematical modeling of a lithium ion battery with thermal effects in COMSOL Inc. Multiphysics (MP) software. J. Power Sources 196, 5985–5989 (2011).
U.S. Energy Information Administration. State Electricity Profiles Data for 2013 (U.S. Deptartment of Energy, Washington, DC, 2016).
Plot.ly (Plotly, 2012).
Financial support from National Science Foundation (CBET-1744031), Argonne National Laboratory and Case Western Reserve University are acknowledged.
These authors contributed equally: Fan Yang, Yuanyuan Xie.
Department of Mechanical and Aerospace Engineering, Case Western Reserve University, Cleveland, OH, 44106, USA
& Chris Yuan
Chemical Science and Engineering, Argonne National Laboratory, Argonne, 60439, IL, USA
Yuanyuan Xie
Department of Mechanical Engineering, University of Wisconsin, Milwaukee, WI, 53211, USA
Yelin Deng
C.Y., F.Y., and Y.X. designed the research and conceived the paper; F.Y. and Y.D. developed the EV energy consumption and greenhouse gas emission model; Y.X. and F.Y. developed and validated the EV battery degradation model; F.Y. and Y.X. performed the analysis; F.Y. and C.Y. drew the figures; and C.Y. and F.Y. wrote the paper.
Correspondence to Chris Yuan.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.
https://doi.org/10.1038/s41467-018-04826-0
\begin{document}
\title {Spin bath narrowing with adaptive parameter estimation} \author{Paola Cappellaro} \email{[email protected]} \affiliation{Nuclear Science and Engineering Department and Research Laboratory of Electronics, Massachusetts Institute of Technology, Cambridge, MA 02139, USA} \begin{abstract} We present a measurement scheme capable of achieving the quantum limit of parameter estimation using an adaptive strategy that minimizes the parameter's variance at each step. The adaptive rule we propose makes the scheme robust against errors, in particular imperfect readouts, a critical requirement to extend adaptive schemes from quantum optics to solid-state sensors. Thanks to recent advances in single-shot readout capabilities for electronic spins in the solid state (such as Nitrogen Vacancy centers in diamond), this scheme can also be applied to estimate the polarization of a spin bath coupled to the sensor spin. In turn, the measurement process decreases the entropy of the spin bath, resulting in longer coherence times of the sensor spin. \end{abstract} \maketitle
A common strategy for estimating an unknown parameter associated with a field is to prepare a probe and let it interact with the parameter-dependent field. From the probe dynamics it is possible to derive an estimator of the parameter. The process is repeated many times to reduce the estimation uncertainty. A more efficient procedure takes advantage of the partial knowledge acquired in each successive measurement to change the probe-field interaction in order to optimize the uncertainty reduction at each step. This adaptive Bayesian estimation strategy has been proposed to improve the sensitivity of parameter estimation in quantum metrology~\cite{Wiseman97}. It has been shown that adaptive estimation can achieve the Heisenberg or quantum metrology limit (QML) without the need for entangled states~\cite{Higgins07,Berry09,Said11,Nusran12,Waldherr12}. Here we introduce a novel adaptive scheme that attains the QML, as manifested by various statistical metrics of the estimated parameters. In addition, the proposed scheme can be made robust against errors so that the QML is achieved e.g. even for imperfect readouts, a critical requirement to extend adaptive schemes from quantum optics to solid-state sensors. We further present an application of the adaptive scheme to the measurement of a quantum parameter: given single-shot readout capabilities for electronic spins in the solid-state~\cite{Robledo11,Morello10,Elzerman04}, the scheme could be used to create a narrowed state of a surrounding spin bath, thus increasing the sensor coherence. In this context, the QML scaling translates into a shorter time for the narrowing process, an important feature when dealing with a finite bath relaxation time.
Consider a two-level system $\{\vert{0}\rangle,\vert{1}\rangle\}$ interacting with an external field characterized by the parameter $b$, ${\mathcal{H}}=b\sigma_z$. A typical situation is a sensor spin-$\half$ interacting with a magnetic field. The parameter can be estimated by a Ramsey experiment (Fig. \ref{fig:Ramsey}), where the probability of the system to be in the $\ket{m}$ state ($m=\{0,1\}$) at the end of the experiment is given by \begin{equation}
\mathcal{P}_\theta(m|b)=\half[1-(-1)^me^{-\tau/T_2}\cos(b\tau+\theta) ] \label{eq:RamseyProb} \end{equation} where $\theta$ is the phase difference between the excitation and readout pulses and we introduced a decay with a constant $T_2$ during the interrogation time $\tau$. If we have a prior knowledge of the parameter --described by an \textit{a priori} probability distribution (p.d.f.) $P^{(0)}(b)$-- the measurement updates our knowledge, as reflected by the \textit{a posteriori} probability:
$P(b|m)\propto P^{(0)}(b)\mathcal{P}_\theta(m|b).$
More generally, after each measurement we can update the probability for the \textit{phase} $\phi=b\tau$, so that after $n$ such measurements with outcomes $\vec{m}_n$, we have a p.d.f. \begin{equation}
P^{(n)}(\phi|\vec{m}_n)\propto P^{(n-1)}(\phi|\vec{m}_{n-1})\mathcal{P}_\theta(m_n|\phi) \label{eq:posterior} \end{equation} Thanks to the periodicity of the probability $P(\phi)$, we can expand it in Fourier series~\cite{Said11}, $P^{(n)}(\phi)=\sum_k p^{(n)}_ke^{ik\phi}$, so that we can rewrite Eq. (\ref{eq:posterior}) as \[ \begin{array}{ll} p_k^{(n)}\propto \half{p_k^{(n-1)}}&+\frac14{e^{-\tau/T_2}}\times\\&\left[e^{i(m_n\pi+\theta)}{p_{k-1}^{(n-1)}}+e^{-i(m_n\pi+\theta)}{p_{k+1}^{(n-1)}}\right] \end{array} \] The proportionality factor is set by imposing that $p_0^{(n)}=\frac1{2\pi}$ as required for a normalized p.d.f. We can further generalize this expression when the system is let evolve for an \textit{integer} multiple $t_n$ of the time $\tau$, thus obtaining a general update rule for the p.d.f.: \begin{equation} \begin{array}{ll} p_k^{(n)}\propto& \half{p_k^{(n-1)}}+\frac14{e^{-t_n\tau/T_2}}\times\\& \left[e^{i(m_n\pi+\theta_n)}{p_{k-t_n}^{(n-1)}}\right.\left.+e^{-i(m_n\pi+\theta_n)}{p_{k+t_n}^{(n-1)}}\right] \end{array} \label{eq:update} \end{equation} An adaptive strategy will then seek to choose at each step the optimal $t_n$ and $\theta_n$ that lead to the most efficient series of $N$ measurements for a desired final uncertainty.
In order to design an adaptive strategy, we need to define a metric for the uncertainty (and accuracy) of the estimate. The Fourier transform of the p.d.f. can be used to calculate the moments of the distribution as well as other metrics and estimators. From the formula for the moments, $\ave{\phi^\alpha}=\int_{-\pi}^\pi P(\phi)\phi^\alpha d\phi=\sum_k p_k \int_{-\pi}^\pi e^{ik\phi}\phi^\alpha d\phi$, we can calculate the variance, \[\ave{\phi^2}-\ave{\phi}^2=\frac{2 \pi ^3}{3}p_0+4\pi\sum_{k\neq0}\frac{(\text{-}1)^k}{k^2}p_k-\ave{\phi}^2, \] where the average is $\ave{\phi}=-2i\pi\sum_{k\neq0}\frac{(\text{-}1)^k}kp_k.$
The variance is often not the best estimate of the uncertainty for a periodic variable~\cite{Berry09}. A better metric is the Holevo variance~\cite{Holevo84}, \begin{equation}
V_H=(2\pi|\ave{e^{i\phi}}|)^{\text{-}2}-1=(2\pi|p_{\text{-}1}|)^{\text{-}2}-1, \label{eq:holevo} \end{equation} where we used the fact that $\ave{e^{i\phi}}=p_{\text{-}1}$. We further notice that while the absolute value of $p_{\text{-}1}$ gives the phase estimate uncertainty, its argument provides an unbiased estimate of $\phi$. More generally, estimates are given by $\phi_{est}=\arg(\ave{e^{it\phi}})/t=\arg(p_{-t})/t$, giving a new meaning to the Fourier coefficients of the p.d.f.
The goal of the estimation procedure is then to make $|p_{\text{-}1}|$ as large as possible.
Assume for simplicity $\phi=b\tau=0$ and neglect any relaxation. Then the probability of the outcome $m_s=0$ is $\mathcal{P}_\theta(0|0)=\half(1-\cos\theta)$. We assume that we do not have any a priori knowledge on the phase, so that $P^{(0)}(\phi)=1/2\pi$. We fix the number of measurements, $N$, each having an interrogation time $T_n=t_n\tau=2^{N-n}\tau$~\cite{Giedke06,Boixo08,Said11}.
A potential strategy would be to maximize $|p_{-1}^{(n)}|$ at each step $n$. However, under the assumptions made, $p_{-1}^{(n)}=0$ until the last step, $n=N$, where it is \[p^{(N)}_{-1}=\frac{e^{-i(m_N\pi+\theta_N)}}{4\pi} \left(2\pi p^{(N-1)}_{-2}e^{2i\theta_N}+1\right)\] Writing $p^{(N-1)}_{-2}=qe^{i\chi}$, we have
\[4\pi|p^{(N)}_{-1}|=\sqrt{1+4\pi^2q^2+4\pi q\cos(\chi+2\theta_N)}\]
This is maximized for $\theta_N=-\chi/2=\half\arg(p^{(N-1)}_{-2})$ and by maximizing $q=|p^{(N-1)}_{-2}|$.
A similar argument holds for the maximization of $|p^{(N-1)}_{-2}|$: one has to set $\theta_{N-1}=\half\arg(p^{(N-2)}_{-4})$ and maximize $|p^{(N-2)}_{-4}|$. By recursion we have that at each step we want to maximize
\[|p^{(n)}_{-t_n}|=\left|\frac{e^{-i(m_n\pi+\theta_n)}}{4\pi} \left(2\pi p^{(n-1)}_{-t_{n-1}}e^{2i\theta_n}+1\right)\right|\] We have thus found a good adaptive rule, which fixes $t_n=2^{N-n}$ and $\theta_n=\half\arg\left(p^{(n-1)}_{-t_{n-1}}\right)$.
With this rule we obtain the standard quantum limit (SQL) for the phase sensitivity, as we now show. Using the optimal phase, the Fourier coefficients $p_{-t_n}^{(n)}$ are at each step \[p_{-t_n}^{(n)}=\half\left(\frac1{2\pi}+p_{-t_{n-1}}^{(n-1)}\right)=\frac1{2\pi}(1-2^{-n})\] Then, for a total number of measurements $N$, the Holevo variance is $V_H=(1-2^{-(N+1)})^{-2}-1\approx 2^{-N}$. The total interrogation time is $T=\tau(2^{N+1}-1)$ yielding \begin{equation} V_H(T)=\frac{4 T \tau }{(T-\tau )^2}\approx\frac{4\tau}T \label{eq:Holevo1pass} \end{equation}
\begin{figure}
\caption{ P.d.f (left) and its Fourier transform (right) after an 8-step adaptive measurement, with 1 (red, dashed) and 2 measurements per step (black). In the inset, Ramsey sequence.}
\label{fig:Ramsey}
\end{figure} We can improve the sensitivity scaling and reach the QML by a simple modification of this adaptive scheme. Instead of performing just one measurement of duration $t_n$ at each $n^{th}$ step, we perform two, updating the p.d.f. according to the outcomes. For $\phi=0$ the update rule at each step is now \[ p_k^{(n)}=\frac1{\mathcal{N}}\left[6p_k^{(n-1)}+4p_{k-t_n}^{(n-1)}+4p_{k+t_n}^{(n-1)}+p_{k-2t_n}^{(n-1)}+p_{k+2t_n}^{(n-1)}\right] \] with the normalization factor \[\mathcal{N}=2\pi\left[6p_0^{(n-1)}+p_{-2t_n}^{(n-1)}+p_{2t_n}^{(n-1)}\right].\] Restricting the formula above to the terms $p^{(n)}_{-t_n}$ gives \begin{equation}p^{(n)}_{-t_n}=\frac{\frac1{2\pi}+p^{(n-1)}_{-t_{n-1}}}{\pi\left(\frac3{2\pi}+p^{(n-1)}_{-t_{n-1}}\right)}\label{eq:twosteps} \end{equation} By recursion this yields
\[|p^{(n)}_{-t_n}|=\frac{1}{2 \pi }\left(1-\frac{3}{2^{2n+1}+1}\right),\] from which we obtain a Holevo variance that follows the QML, $V_H\approx3\cdot2^{-2N}$, or in terms of the total interrogation time \begin{equation} V_H=\frac{48 T \tau ^2 (T+4 \tau )}{(T-2 \tau )^2 (T+6 \tau )^2}\approx\frac{48\tau^2}{T^2}. \label{eq:Holevo2pass} \end{equation}
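As an illustrative numerical check (not part of the derivation above), the two recursions for $p^{(n)}_{-t_n}$ can be iterated directly. The short Python sketch below assumes $\phi=0$, an ideal readout and a flat prior, and only verifies that the resulting Holevo variance follows the $2^{-N}$ (one measurement per step) and $2^{-2N}$ (two measurements per step) scalings; the function and variable names are ours, not part of the original analysis.
\begin{verbatim}
import numpy as np

def holevo_variance(N, meas_per_step=1):
    # Iterate the recursion for p_{-t_n} (phi = 0, ideal readout).
    p = 0.0                      # flat prior: p_{-t_0} = 0
    for _ in range(N):
        if meas_per_step == 1:
            p = 0.5 * (1/(2*np.pi) + p)
        else:
            p = (1/(2*np.pi) + p) / (np.pi*(3/(2*np.pi) + p))
    return 1.0/(2*np.pi*p)**2 - 1.0

for N in (4, 6, 8):
    v1 = holevo_variance(N, 1)
    v2 = holevo_variance(N, 2)
    # v1*2^N and v2*2^(2N) approach constants,
    # showing the SQL vs QML scaling.
    print(N, v1 * 2**N, v2 * 2**(2*N))
\end{verbatim}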
The classical and quantum scaling of the adaptive scheme with one or two measurements per step is confirmed by the p.d.f. obtained in the two cases (Fig. \ref{fig:Ramsey}). For one measurement, the final p.d.f Fourier coefficient are $|p_k|=\frac1{2\pi}(1-2^{-(N+1)}|k|)$ and the probability is well approximated by a sinc function, \[P^{(N)}(\phi)=\frac{2^{N+1}}{2\pi}\text{sinc}(2^{N+1}\phi)^2,\] which gives a variance $\sigma\approx2^{-N/2}$. For two measurements per step, instead, the p.d.f. is well approximated by a Gaussian (see Appendix) with a width $\sigma =\frac{\sqrt{3}}2 \cdot2^{-N}$.
We now consider possible sources of non-ideal behavior. The first generalization is to phases $\phi\neq0$. In this case, while the SQL is still achieved with the one-measurement scheme, two measurements per step do not always reach the QML. Indeed, at each step there is a probability $\mathcal{P}(1-\mathcal{P})$ that the two measurements will give different results; if this happens at the $n^{\text{th}}$ step, we obtain $p^{(n)}_{-t_n}=0$, thus failing to properly update the p.d.f. While the probability of failure is low, a solution could be to perform three measurements and update the p.d.f. only based on the majority vote.
We can further consider the cases where the signal decays due to relaxation or there is an imperfect readout. Then the probability (\ref{eq:RamseyProb}) becomes
\[\mathcal{P}_\theta(m|b)=\half[1-c(-1)^me^{-\tau/T_2}\cos(b\tau+\theta) ],\] with $c$ the readout fidelity. Considering the effects of only this (constant) term, the update rule Eq. \ref{eq:twosteps} becomes \[p^{(n)}_{-t_n}=\frac{c\left(\frac1{2\pi}+p^{(n-1)}_{-t_{n-1}}\right)} {\pi\left(\frac1{\pi}\left(1+\frac{ c^2}2\right)+c^2p^{(n-1)}_{-t_{n-1}}\right)}\] We can calculate a recursion relationship in the limit of good measurement, $\epsilon=(1-c)\approx0$, to obtain \[
|p_{-1}|\approx\frac{1}{2 \pi }\left(1-\frac{3}{2}(1+N\epsilon)2^{-2N}\right), \] which yields an Holevo variance $V_H\approx3(1+\epsilon N)2^{-2N}$ that does not follow anymore the QML scaling, except for $\epsilon N\sim1$. A similar, more complex result is expected if relaxation effects are taken into account (see Appendix). A strategy to overcome this limitation is to repeat the measurement at each step more than two times (Fig. \ref{fig:Holevo}). Specifically, setting the number of measurements M=$n+1$ (if allowed by relaxation constraints) restores the QML scaling. \begin{figure}
\caption{Holevo variance vs. total time $T=M\tau(2^{N+1}-1)$, with $\tau=1$. (\large$\bullet$\normalsize) with 1 measurement per step (M=1) $V_H$ follows the SQL. (\large$\circ$\normalsize) 2 measurements per step (M=2) achieve the QML. (\large$*$\normalsize) with c=0.95, the QML scaling is lost, but can be preserved for longer with M=4 (\footnotesize$\triangle$\normalsize) and restored (\footnotesize$\square$\normalsize) by setting M=$n+1$, even for lower c, e.g. c=0.85 ($\star$)}
\label{fig:Holevo}
\end{figure}
The proposed adaptive method promises to achieve Heisenberg-limited estimation of a classical phase without the need for fragile entangled states, and thus it could improve the sensitivity e.g. of recently proposed magnetic sensors~\cite{Taylor08}. It can also be used to measure a quantum variable, such as a phase resulting from the coupling of the sensor to a larger quantum system or bath. In turn, the measurement can be used to lower the entropy of the bath (usually a thermal equilibrium mixture), yielding an increase in the coherence time of the sensor~\cite{Giedke06,Imamoglu03,Klauser06}. The QML scaling of this adaptive method translates into a faster narrowing of the bath dispersion, which would improve similar schemes in solid-state systems~\cite{Bluhm10,Togan11}, where the bath itself might present fluctuations.
Specifically, we consider the coupling of a sensor spin to a spin bath. This situation is encountered in many physical systems, such as quantum dots~\cite{Hanson05,Bluhm11} or phosphorus donors in silicon~\cite{Morello10,Abe10}. Here we analyze as an example the system comprising
a Nitrogen-Vacancy (NV) center electronic spin coupled to the bath of nuclear $^{13}$C spins in the diamond lattice~\cite{Jelezko06,Childress06}. Recent advances in the measurement capabilities~\cite{Robledo11} offer single-shot read-out of the NV state, thus enabling adaptive schemes.
In a large magnetic field along the NV axis, the hyperfine interaction between the electronic spin and the nuclear spins is truncated to its secular part, ${\mathcal{H}}=S_z\sum_kA_kI_{z,k}=S_zA_z$ (where $S$ denotes the electronic spin, $I_k$ the nuclear spins). During a Ramsey sequence on resonance with the $m_s=0,1$ energy levels of the electronic spins, the coupled system evolves as \begin{equation} \ket{\psi(t)}=\left[\sin\left(A_zt\right)\vert{1}\rangle+\cos\left(A_zt\right)\vert{0}\rangle\right]\vert{\psi}\rangle_C, \label{eq:stateevolution} \end{equation} where $\vert{\psi}\rangle_C$ is the initial state of the nuclear spin bath. The measurement scheme (Ramsey followed by NV read-out) is a quantum non-demolition measurement\cite{Braginsky80,Mlynek97,Waldherr11} for the nuclear spins, since their observable does not evolve -- as long as the secular approximation holds. The adaptive process is then equivalent to determining the state-dependent (quantized) phase $\phi=\ave{A_zt}$. The uncertainty on the nuclear bath state, $\rho_C=\!\sum_\alpha p_\alpha \ket{\psi_\alpha}\!\!\bra{\psi_\alpha}_C$, is reflected in the p.d.f of the phase (with an injective relation if the operator $A_z$ has non-degenerate eigenvalues). Thus updating the phase p.d.f. will update the density operator describing the state of the nuclear bath. After each readout of outcome $m$, the system is in the state \begin{equation} \begin{array}{ll} \rho^{(n)}&\propto \ket{m}\!\bra{m}\!\rho^{(n-1)}\!\ket{m}\!\bra{m}\\
&= \ket{m}\!\!\bra{m}\sum_\alpha \mathcal{P}_\theta(m|\phi_\alpha)p_\alpha^{(n-1)}\ket{\psi_\alpha}\!\!\bra{\psi_\alpha}_C,\end{array} \label{eq:stateupdate} \end{equation}
with $\mathcal{P}_\theta(m|\phi_\alpha)\!=\!\left|\bra{m,\psi_\alpha}\!\left[\sin(A_zt)\vert{1}\rangle\!+\!\cos(A_zt)\vert{0}\rangle\right]\!\ket{\psi_\alpha}\right|^2\!$. Note that in this expression the probability update rule is equivalent to Eq. \ref{eq:posterior} and thus the adaptive procedure ensures that the final state has lower entropy than the initial one.
A difference between measuring a classical field and a quantum operator is that in the latter case the resulting phase is quantized, thus it has a discrete p.d.f.. An extreme case is when all the couplings to the $N_C$ nuclear spins are equal, $A_k=a$, $\forall k$. Then the eigenvalues are $na/2$, with $|n|\leq N_C$ integer, each with a degeneracy $d(n)=\binom{N_C}{N_C/2+n}$. While the adaptive scheme needs to be modified (e.g. by considering a discrete Fourier transform), we note that since all the eigenvalues are an integer multiple of the smallest, non-zero one ($a$ for $N_C$ even, $a/2$ for $N_C$ odd), we only need $M$ steps, with $2^M\geq \frac {N_C}2$ [with minimum interrogation time $\tau=2\pi/(a2^M)$], to achieve a perfect measurement of the degenerate phase $\phi$~\cite{Giedke06}. \begin{figure}\label{fig:NVsims}
\end{figure} In the more common scenario where $A_k$ varies with the nuclear spin position (and $N_C$ is large enough) the eigenvalues give rise to an almost continuous phase~\cite{Klauser06}, thus it is possible to directly use the adaptive scheme derived above.
As an example of the method, we consider one NV center surrounded by a bath of nuclear spins ($^{13}$C with 1.1\% natural abundance). At low temperature and for NV with low strain, it is possible to perform single-shot readout of the electronic spin state with high fidelity in tens of $\mu$s~\cite{Robledo11}. Optical illumination usually enhances the electronic-induced nuclear relaxation~\cite{Jiang09}, due to the non-secular part of the hyperfine interaction.
This effect is however quenched in a high magnetic field ($B\geq 1$T) and the relaxation time is much longer than the measurement time ($T_1\geq 3$ms~\cite{Neumann10b}), sign of a good QND measurement.
We simulated the Ramsey sequence and adaptive measurement with a bath of $\sim 2600$ spins around the spin sensor in a large magnetic field. We considered the full anisotropic hyperfine interaction between the NV and the $^{13}$C spins and we took into account intra-bath couplings with a disjoint cluster approximation~\cite{SOM,Maze08b}. Even for the longest evolution time of the Ramsey sequence required by the adaptive scheme, the fidelity $F$ of the signal with the ideal Ramsey oscillation (in the absence of couplings) is maintained. After an 8-step adaptive measurement, the nuclear spin bath is in a narrowed state. We note that in general the adaptive scheme does not polarize the spin bath (indeed a final low polarization state is more probable). However, the bath purity is increased, which is enough to ensure longer coherence times for the sensor spins, since it corresponds to a reduced variance of the phase and hence of the sensor spin dephasing. In Fig.~\ref{fig:NVsims} we compare the NV center spectrum for an evolution under a maximally mixed nuclear spin bath and under the narrowed spin bath. The figure shows a remarkable improvement of the NV coherence time.
In conclusion, we described an adaptive measurement scheme that has the potential to achieve the quantum metrology limit for classical parameter estimation. We analyzed how imperfections in the measurement scheme affect the sensitivity and proposed strategies to overcome these limitations. This result could for example improve the sensitivity of spin-based magnetometers, without resorting to entangled states. In addition, we applied the scheme to the measurement of a quantum parameter, such as one arising from the coupling of the sensor to a large spin bath. We showed that the adaptive scheme can be used to prepare the spin bath in a narrowed state: as the number of possible configurations for the spin bath is reduced, the coherence time of the sensor is increased. The scheme could then be a promising strategy to increase the coherence time of qubits, without the need for dynamical decoupling schemes that have large overheads and interfere with some magnetometry and quantum information tasks.
\textbf{Acknowledgments} -- This research was supported in part by the U.S. Army Research Office through a MURI grant No. W911NF-11-1-0400.
\appendix
\onecolumngrid \section{Probability distribution for the adaptive scheme with two measurements per step} In the main text we considered the adaptive scheme where at each step two measurements (with the same reading time) are carried out and presented an approximate formula for the p.d.f that is obtained after $N$ steps. Here we present details of the derivation for the case where $\phi=0$. \\ In the Fourier space, the p.d.f. coefficients can be calculated from the recursive relation to be
\[p_k=\left\{\begin{array}{ll}
\frac1{2\pi}\frac{\left(2^{N+2}-1-|k|\right) \left(2^{N+2}-|k|\right) \left(2^{N+2}+1-|k|\right)
-4 \left(2^{N+1}-1-|k|\right) \left(2^{N+1}-|k|\right) \left(2^{N+1}+1-|k|\right)} { \left(2^{N+2}+2^{3 N+5}\right)} &k\leq2^{N+1}-2\\
\frac1{2\pi}\frac{\left(2^{N+2}-1-|k|\right) \left(2^{N+2}-|k|\right) \left(2^{N+2}+1-|k|\right)}{ \left(2^{N+2}+2^{3 N+5}\right)} &k>2^{N+1}-2 \end{array}\right. \] For large $N$ we can simplify the expressions as: \[p_k=\left\{\begin{array}{ll}
\frac1{2\pi}\frac{ \left(2^{N+2}-|k|\right)^3-4 \left(2^{N+1}-|k|\right)^3}
{ \left(2^{N+2}+2^{3 N+5}\right)}\approx \frac1\pi\left[\left(1-2^{-(N+2)}|k|\right)^3-\half\left(1-2^{-(N+1)}|k|\right)^3\right] &k\leq2^{N+1}-2\\
\frac1{2\pi}\frac{ \left(2^{N+2}-|k|\right)^3}{ \left(2^{N+2}+2^{3 N+5}\right)} \approx \frac1\pi\left(1-2^{-(N+2)}|k|\right)^3 &k>2^{N+1}-2 \end{array}\right. \]
In turn, these expressions are well approximated by a Gaussian (although the original function has longer tails), \[p_k\approx \frac{e^{-3 k^2 2^{-2 N-3}}}{2 \pi }\] with Fourier transform \[P(\phi)\approx\frac{e^{-\frac{2}{3}\left(2^N \phi \right)^2}}{2^{-N}\sqrt{{3 \pi }/{2}} }\]
\section{Influence of noise on the adaptive scheme} \begin{wrapfigure}{r}{0.5\textwidth} \begin{center} \includegraphics[width=0.5\textwidth]{AdaptT2}
\end{center} \caption{(color online) Holevo variance vs. total time in the presence of signal decay. Black lines with circles: $M=2$ measurements per step. Red lines with squares: M=$n+1$ measurements at the $n^{th}$ step. The relaxation constant $T_2$ was taken to be $T_2/\tau=5\times10^{-4}$ (dotted line), $10^{-3}$ (solid lines), $2.5\times10^{-3}$ (dash-dotted lines) and $5\times10^{-3}$ (dashed lines). The total time $T$ was measured in units of the dimensionless time $\tau$.} \end{wrapfigure}
In the main text we discussed the effects on the adaptive scheme of imperfect readout of the sensor state. A similar effect is expected as well if the signal decays due to decoherence during the Ramsey interrogation time, as the difference in the probability of getting a different result ($m=0,1$) given a different phase is reduced by a factor $e^{-\tau/T_2}$:
\[\mathcal{P}_\theta(1|b)-\mathcal{P}_\theta(0|b)=\half e^{-\tau/T_2}\cos(b\tau+\theta).\] The time-dependence of such an imperfection, though, complicates the recursive relationship (Eq.~3 of the main text), thus we analyze the effect of decoherence numerically. We find that the decay effectively sets a maximum number of measurements (see figure), since the interrogation time cannot exceed $T_2$. A small improvement is achieved by increasing the number of measurements per step, but the QML is not recovered at longer times.
Another source of imperfection would derive from variations of the phase during the total estimation time. The rate of this variations sets an upper limit to the number of steps in the adaptive scheme.
\twocolumngrid
\end{document} | arXiv |
Holt McDougal Larson Traditional Series (2012)
Houghton Mifflin Harcourt | High School
Holt McDougal Larson Traditional Series - High School
The instructional materials reviewed for the Larson Traditional Series do not meet expectations for alignment to the CCSSM for high school. The instructional materials attend to the full intent of the high school standards and spend a majority of time on the widely applicable prerequisites from the CCSSM. However, the instructional materials partially attend to engaging students in mathematics at a level of sophistication appropriate to high school and explicitly identifying standards from Grades 6-8 and building on them to the High School Standards. Since the materials do not meet the expectations for focus and coherence, evidence for rigor and the mathematical practices in Gateway 2 was not collected.
The instructional materials reviewed for the Larson Traditional Series do not meet the expectation for focusing on the non-plus standards of the CCSSM and exhibiting coherence within and across courses that is consistent with a logical structure of mathematics. The instructional materials attend to the full intent of the high school standards and spend a majority of time on the widely applicable prerequisites from the CCSSM. The instructional materials partially attend to engaging students in mathematics at a level of sophistication appropriate to high school and explicitly identifying standards from Grades 6-8 and building on them to the High School Standards. The materials do not attend to the full intent of the modeling process when applied to the modeling standards, allowing students to fully learn each non-plus standard, and making connections within courses and across the series.
The instructional materials reviewed for the Larson Traditional series meet expectations for attending to the full intent of the mathematical content contained in the high school standards for all students.
The following are examples of standards that are fully addressed:
A-APR.3: In Algebra I, Chapter 8, Lessons 4 and 5 and Chapter 9, Lesson 2, Extension Activity, students identify the zeros of polynomials using the zero-product property and use the x-intercepts to write the quadratic function in factored form to graph the function. In Algebra II, Chapter 2, Lessons 5, 6, and 8, students work with quadratics, find the zeros through factoring and from a graph, and extend their knowledge to finding rational zeros through the use of The Rational Zero Theorem, The Remainder Theorem, and The Factor Theorem. Each of these lessons works to scaffold student learning so that students are able to analyze graphs of polynomial functions using those zeros.
A-REI.4a: In Algebra I, Chapter 9, Lesson 5 and Algebra II, Chapter 1, Lesson 7, students solve quadratic equations by completing the square. In Algebra I, Derive the Quadratic Formula, page 619, students derive the quadratic formula.
F-BF.3: In Algebra I, Chapter 4, Lesson 1, Graphing Calculator Activity, students investigate linear equations and draw conclusions as to how various slopes and y-intercepts affect a linear function. Students explain how they found the answers and describe a process for finding an equation of a line that has a particular slope and passes through a specific point on page 231, Problems 9 and 10. In Algebra I, Chapter 5, Lesson 5, Extension Activity "Graph Absolute Value Functions," students apply transformations to compare various graphs with the graph of the parent function, $$f(x)=|x\vert$$. In Algebra I, Chapter 9, Lessons 1 and 5, students apply transformations to compare various graphs with the graph of the parent function $$f(x)=x^2$$. In Algebra II, Chapter 3, Lesson 5, students apply transformations to compare various graphs with the graph of the parent function $$f(x)=\sqrt{x}$$ and $$f(x)=\sqrt[3]{x}$$. In Algebra II, Chapter 4, Lessons 1 and 2, students apply transformations to compare various graphs with the graph of the parent function $$f(x)=b^x$$. In each of these lessons, students are provided with a number of opportunities to engage in using graphing calculators to explore the mathematics.
G-SRT.8: In the Geometry lessons Trigonometric Ratios of Complementary Angles, Apply the Law of Sines, and Apply the Law of Cosines, and in Chapter 7, page 474, problem 41, students make a conjecture about relationships, make a table, compare values, look for patterns, and explore whether their conjecture is true for triangles that are not special right triangles.
S-ID.6a: In Algebra I, Chapter 4, Lessons 6 and 7, students use correlation coefficient values, data, and scatterplots to make a line of best fit using linear regression. In Algebra I, Chapter 4, page 284, Extension, students calculate and interpret residuals. In Algebra I, page 636, Graphing Calculator Activity, the students use the data sets to find linear, quadratic and exponential regressions. In Algebra II, Chapter 4, Lesson 7, students use exponential regression to find models. In Algebra I, page 282-283, in the Internet Activity and Extension, students model data and then plot and interpret the residuals to determine correlation. In Algebra I, Chapter 9, Lesson 8, students compare linear, exponential and quadratic models using ordered pairs and table of values and use that information to make predictions.
The following are standards where some aspect of the non-plus standard is not addressed in the instructional materials.
A-REI.5: In Algebra I, Chapter 6, Lesson 4, students solve linear systems by elimination and use linear systems to solve problems on topics such as investments, farm products, and music. On page 398, problem 42, students explain what the answers mean in the context of the problem; however the students are not given the opportunity to "prove that, given a system of two equations in two variables, replacing one equation by the sum of that equation and a multiple of the other produces a system with the same solutions."
G-CO.7: In Geometry, Chapter 4, Lesson 3, Challenge Problems 18 and 19, students determine whether a rigid motion can move one triangle onto the other and justify their answer. The materials do not use the definition of congruence in terms of rigid motions to show that two triangles are congruent if and only if corresponding pairs of sides and corresponding pairs of angles are congruent.
S-ID.4: In Algebra II, Chapter 6, Lessons 3-5, students use the mean and standard deviation of a data set to fit it to a normal distribution and to estimate population percentages through the problems and activities. Students do not "recognize there are data for which such a procedure is not appropriate," and students do not use spreadsheets to "estimate areas under the normal curve."
The following standard was not addressed in the student or teacher materials.
G-SRT.1a: In Geometry, Chapter 6, Lessons 5 and 6, students work with dilations; however, the activity does not address a parallel line or a line passing through the center as indicated by the standard.
The instructional materials reviewed for the Larson Traditional series do not meet expectations for attending to the full intent of the modeling process when applied to the modeling standards. Most aspects of the modeling process are present in isolation or combinations, but students do not have to revise their process and/or solution after interpreting their solution in the context of the problem. Opportunities for students to engage in the complete modeling process are absent for the modeling standards throughout the instructional materials of the series.
In the series, many of the real-world problems provide students with all of the needed information, including variables. Some questions ask students to determine the relationship between the variables while others ask the students to find a solution. The materials do not provide the opportunity to make assumptions about the real-world problem as part of the modeling process. Occasionally, students draw conclusions, make interpretations, or justify how they arrived at a solution. Examples of how students do not engage in the full modeling process include:
In Algebra I, Chapter 7, Lesson 4, problem 41, students create a model which is based on the data given about two trees. All of the information needed to write the equation is provided in the problem or on the labeled diagram.
In Algebra I, Chapter 9, Lesson 9, Challenge problem 12, students are provided with an approximate population, the growth rate, and the defined variables. The directions in the materials state that the data be represented graphically and interpreted. Students do not analyze, validate, or report the results.
In Geometry, Chapter 7, Lesson 5, the materials provide real-world scenarios but do not allow for the complete modeling process. The majority of these problems ask the students to find a height or distance, and all the variables are provided for the students. Page 466, problem 36 uses the context of an eye chart, but the eye chart scenario isn't necessary to complete the problem. On page 466, problem 37, students are given the information about requirements for a wheelchair ramp.
In Geometry, Chapter 11, Lesson 7, problem 30, students find the volume of a small cone-shaped cup and a large cylindrical cup. Students also determine which container is a better buy based on their volume and the price. A diagram with the dimensions labeled is provided for the students, rather than allowing the students to create their own visual representation to support the modeling process. The volumes are computed based on given formulas without identifying the variables or drawing a model.
In Algebra II, Chapter 1, Lesson 5, problem 41, students create an equation to find the radius of a circular lot, solve for the radius, and generalize and justify the answers algebraically. Since each portion of the problem contains scaffolded questions for defining variables and determining which plan to follow or direction to take, students do not independently engage in the full modeling process.
In Algebra II, Chapter 4, Lesson 7, students write and apply exponential and power functions. On page 287, problem 34, students examine the relationship between the boiling point of water and atmospheric pressure by creating a scatter plot from given data, determining an equation from the given variables, and making a prediction. Students do not make assumptions about the given problem or determine the variables to be used, and results are not interpreted. On page 287, problem 35, students draw scatter plots, analyze the scatter plot to determine a function that would fit best, and create an equation based on the given variables. Variables are given, assumptions are not tested, and the interpretation is limited to the type of function that models the graph.
The instructional materials reviewed for the Larson Traditional series meet expectations, when used as designed, for spending the majority of time on the CCSSM widely applicable as prerequisites (WAPs) for a range of college majors, postsecondary programs, and careers.
The materials allow students to spend the majority of their time on the WAPs except for those standards that were not adequately addressed as noted in indicator 1ai. There is time spent on plus standards and on standards from Grades 6-8, but that time does not detract from the students spending the majority of their time on the WAPs.
In Algebra I, students spend the majority of their time working with WAPs from Number and Quantity, Algebra, and Functions. Some lessons from Chapters 1 and 2 address content that is aligned to 6.EE, 7.EE.1-3, and 8.EE.7, but the majority of the chapters in Algebra I do not include distracting or additional topics.
In Geometry, students spend the majority of instructional time on WAPs from the Geometry category. In Chapter 4, the materials address congruence (G-CO.A, B and G-CO.10) with a few references to rigid transformations in Lessons 4, 5, 6, and 9. In Chapter 5, Lessons 2, 4, 5, and 6, students prove theorems that address G-CO.9,10. In Chapter 6, the materials address some standards from G-SRT.A, and in Chapter 7, Lessons 3, 5, and 6 address standards from G-SRT.B, C.
In Geometry, Chapter 6, Lessons 1 and 2, students use the Pythagorean Theorem, and a proof of the Pythagorean Theorem is shown (8.G.B). In Geometry, Chapter 11, students solve volume problems involving 3-D figures (6.G.4, 7.G.3,6, and 8.G.9), but the inclusion of these topics from Grades 6-8 does not detract from the students spending the majority of their time on the WAPs.
In Algebra II, students spend the majority of their time working with WAPs from Number and Quantity, Algebra, and Functions. A-SSE is addressed multiple times in lessons from Chapters 1, 2, 4, and 7, and F-IF is addressed multiple times in lessons from Chapters 1, 2, 3, 4, 5, 7, 9, and 10. N-RN is addressed throughout Chapter 3.
The instructional materials reviewed for the Larson Traditional series do not meet expectations, when used as designed, for allowing students to learn each non-plus standard fully. Standards which students do not get to learn thoroughly include:
A-APR.1: In Algebra I, Additional Lessons 10 and 11 and Algebra II, Chapter 5, Lesson 5, Extension, students determine if a set is closed under an operation for integers and rational numbers. In Algebra I, Chapter 8, Lessons 1-3 and Algebra II, Chapter 2, Lesson 3, students practice operations on polynomials. However, within these lessons, students do not determine that polynomials are closed under the operations of addition, subtraction, and multiplication.
A-REI.6: In Algebra I, Chapter 6, Lessons 1-3, students solve systems of linear equations by graphing, substitution, and elimination. In each of the opportunities, students find exact solutions, but students do not have an opportunity to estimate a solution.
A-REI.10: In Algebra I, Chapter 3, Lesson 2, students graph linear equations. Students plot a few points and notice that the points "appear to lie on a line," but students do not have the opportunity to show understanding that the graph of the equation is the set of all its solutions plotted in the coordinate plane.
F-IF.6: In Algebra I, Chapter 9, Lesson 9, Graphing Calculator Activity, page 645, students calculate the rate of change between two given points for a linear, quadratic, and exponential function, but there is no context to interpret this change. Students do not estimate the rate of change from a graph. In Algebra II, Chapter 5, Lesson 7, students find the average rate of change between several intervals for an exponential function and use that to determine whether the graph is increasing or decreasing, but students do not estimate from a graph.
F-LE.1: In Algebra I, Chapters 5 and 7 and Algebra II, Chapter 4, students address linear and exponential functions; however students do not compare linear and exponential functions throughout these chapters. Students do not have opportunities to distinguish between situations that can be modeled with linear functions from those that can be modeled with exponential functions.
F-LE.1a: In Algebra I, Chapter 4, Lessons 1 and 2, students write and use linear equations in slope-intercept form, but students do not prove that linear functions grow by equal differences over equal intervals. In Algebra I, Chapter 7, Lessons 4 and 5, students write and graph exponential functions, but students do not prove that exponential functions grow by equal factors over equal intervals.
G-CO.2: In Geometry, Chapter 4, Problems 14, 15, and 16, students decide whether the transformation to move triangle MNP onto triangle PQM is a translation, reflection, or rotation. On the Chapter 6 Test, Problems 10 and 11, students determine whether the given dilation is a reduction or an enlargement and find its scale factor. On the Chapter 9 Test, students compare transformations by completing problems that involve translations, reflections, and rotations. However, students do not describe transformations as functions that take points in the plane as inputs and give other points as outputs.
G-CO.4: In the Geometry materials, students do not get to develop their own definitions of translations as moving points of a figure along parallel lines, reflections as moving points along line segments that are perpendicular to the line of reflection, or rotations as moving points around circles by angles of given measures.
G-CO.8: In Geometry, Chapter 4, there is an Investigation after Lesson 6 that addresses constructing congruent triangles using the SSS and SAS criteria. Students are shown how ASA, SSS, and SAS can follow from the definition of congruence in terms of rigid motions, but students do not explain independently how the criteria follow.
G-SRT.1: In Geometry, Chapter 6, Investigation after Lesson 1, students explore the properties of dilations, but the students do not verify the properties of dilations given by a center and a scale factor.
G-SRT.2: In Geometry, Chapter 6, Lesson 2, students use dilations to show figures are similar in Problems 13-16; however, students do not explain using similarity transformations, the meaning of similarity for triangles as the equality of all corresponding pairs of angles, and the proportionality of all corresponding pairs of sides.
G-SRT.7: In Geometry, Chapter 7, Lesson 6, problem 41, students make a conjecture about the relationship between sine and cosine values by creating a table and recognizing patterns. In Additional Lessons 2 and 3, the term cofunction is introduced, and students find the trigonometric values for complementary angles. Students do not use the relationship between the sine and cosine of complementary angles to solve problems.
G-GPE.7: In Geometry, Chapter 1, page 22, Problem 53 and page 50, Problem 8, students use coordinates to find the perimeter and area of a right triangle. There are no other opportunities for students to use coordinates to find perimeters of polygons and areas of triangles and rectangles.
S-ID.2: In Algebra I, Chapter 10, Lesson 2, the Extension introduces variance and standard deviation; however, the problems and questions include the calculation of standard deviation with one data set. On page 671, Problem 3, students calculate the mean, median, and mode of two data sets, but the problem does not expect students to use the shape of the data to compare the two data sets.
S-ID.5: In Algebra I, Chapter 10, Lesson 3, students create two-way frequency tables and interpret the data. Students do not analyze and recognize associations and trends within the data.
S-IC.6: In Algebra II, Chapter 6, Lesson 4, Problem 29, students examine a report from an election between Kosta and Murdock, determine it is reasonable to assume that Kosta will win the election, and explain their answer. In Lesson 5, students use a report to determine if the study described is a randomized comparative experiment. Students do not evaluate reports based on data as indicated by the standard.
The instructional materials reviewed for the Larson Traditional series partially meet expectations for engaging students in mathematics at a level of sophistication appropriate to high school. The instructional materials, at times, use age-appropriate contexts. However, some key takeaways from Grades 6-8 are not applied, and the materials do not vary the types of real numbers being used.
The materials provide a variety of problems within real-world contexts that are appropriate for high school students such as amusement parks, skateboard ramps, DVD players, sports, money, baking, video games, nutrition, and various job skills. Examples include the following:
In Algebra I, Chapter 6, Lesson 4, Problem 39, students use data from a table to create a system of equations and determine how many apple pies and batches of applesauce can be made if every apple is used.
In Algebra I, Chapter 9, Lesson 2, Problem 43, students use the equation of a parabolic arch in an aircraft hangar to determine how wide the hangar is at its base.
In Geometry, Chapter 1, Lesson 1, Problem 46, students identify points, lines, and planes in the context of different numbers of streets intersecting in a town to determine how many traffic lights would be needed.
In Geometry, Chapter 7, Lesson 1, Problem 34, students imagine there is a field in their town in the shape of a right triangle, find the perimeter of the field, and plant dogwood seedlings in the field at specified distances.
In Algebra II, Chapter 9, Lesson 4, Problem 39, the materials present a bike race where the bike passes an observer at 30 MPH. Students find the angle that the observer turns their head to see the cyclist t seconds later.
In Algebra II, Chapter 6, Lesson 4, Problems 30, 31, and 32, the contexts are a poll where 23% of the students surveyed say that math is their favorite subject in school, the number of voters who voted for candidate A or candidate B, and the number of people surveyed who prefer cola Y versus cola X. In these problems, students use statistical models, calculate the margin of error, and determine intervals that contain exact percentages.
Throughout the series, the majority of the problems utilize integers or simple rational numbers and do not vary the types of numbers being used. Students are provided few opportunities to practice with operations on a variety of rational and irrational numbers. Examples include the following:
In Algebra I, Chapter 2, Lessons 2-5 address solving equations, but the values within the equations are mostly whole numbers or simple rational numbers, such as $$\frac{1}{2}$$ or $$\frac{1}{4}$$.
In Algebra I, Chapter 6 addresses solving systems of linear equations. The majority of coefficients are either whole numbers, simple fractions, or decimals to the hundredths place.
In Geometry, the majority of problems use whole numbers or decimals to the tenths place. In Chapter 4, Lesson 1, students use the Triangle Sum Property, and the majority of the angle values are whole numbers.
In Geometry, Chapter 6, Lesson 4, the triangle side lengths are predominantly whole numbers.
In Algebra II, Chapter 2, Lesson 8, students analyze graphs of polynomial functions that include whole number values when using a graphing calculator to find minimum and maximum values.
In Algebra II, Chapter 9, Lesson 6, Problems 43-46 include whole number values when using trigonometric ratios.
Some key takeaways from Grades 6-8 are not applied, and examples of this include the following:
In Algebra I, Chapter 2, Lessons 6 and 7 address ratios and proportional relationships, but the problems use cross products rather than applying the connections between ratios, proportional relationships, and linear functions from Grades 6-8.
In Algebra I, Chapter 3, students write and graph linear equations, but the examples and problems do not apply ratios or proportional relationships.
In Geometry, Chapter 6, Lesson 1, the materials address similarity through proportional relationships, ratios, and scale factors (7.G.1); however, the problems and examples do not apply key takeaways from either 7.G.1 or 8.G.A.
In Algebra II, Chapter 9 addresses trigonometric functions and does not make connections to ratios or apply key takeaways from 8.EE.B or 8.G.A.
The instructional materials reviewed for the Larson Traditional series do not meet expectations for being mathematically coherent and making meaningful connections in a single course and throughout the series. The instructional materials do not foster coherence through meaningful mathematical connections in a single course and throughout the series, where appropriate and where required by the Standards.
The Additional Lessons within each course address specific standards, but they are placed at the beginning of each course and not connected to any chapters or lessons within the course. There is also no clear indication of when or how the Additional Lessons are to be used within the series, which disrupts the coherence of the materials. Examples regarding the Additional Lessons are included in the evidence below.
Examples where the materials do not foster coherence by omitting appropriate and required connections within courses include:
The Algebra I, Additional Lessons 2 and 3 (S-ID.7), Interpreting Linear Models are not connected to Algebra I, Chapter 3, Graphing Linear Equations and Functions.
The Algebra I, Additional Lessons 10 and 11 (A-APR.1), Investigate Polynomials and Closure are not connected to Algebra I Chapter 8, Polynomials and Factoring.
The Algebra I, Additional Lessons 14 and 15 (A-SSE.3), Write Quadratic Equations are not connected to Algebra I, Chapter 9, Quadratic Equations and Functions.
The Algebra I, Additional Lessons 16 and 17 address S-ID.3, but they are not connected to Chapter 10, Data Analysis.
In Geometry, the materials address transformations in Chapter 9, but there are no connections to the congruence of triangles in Chapter 4 or reasoning and proof throughout the course.
The Geometry, Additional Lessons 2 and 3 (G-SRT.7), Trigonometric Ratios of Complementary Angles are not connected to Chapter 7, Right Triangles and Trigonometry.
The Algebra II, Additional Lessons 2 and 3, which address N-CN.2 are not connected to Chapter 1, Lesson 6, Perform Operations with Complex Numbers.
The Algebra II, Additional Lessons 4 and 5 (A-APR.4), Use Polynomial Identities are not connected to Algebra II, Chapter 2, Polynomials and Polynomial Functions.
In Algebra II, Chapter 3, Lesson 5, students graph square root and cube root functions as transformations of parent functions, but there is no reference made to quadratic functions previously graphed as transformations of a parent function from Algebra II, Chapter 1. Graphing is addressed differently in both chapters, and connections are not made between the two chapters to increase coherence within the course.
Examples where the materials do not foster coherence by omitting appropriate and required connections between courses include:
In Algebra I, Chapter 8, Lesson 6, students solve a quadratic equation by factoring. In Algebra II, Chapter 1, Lesson 3, students also solve quadratic equations by factoring, but there is no connection to prior learning from Algebra I. In Algebra II, Chapter 1, Lessons 6 and 8, students solve quadratic equations that involve complex numbers and use the discriminant to determine the number and type of solutions for a quadratic equation, respectively. However, the different forms of quadratic equations are not coherently connected across Algebra I and II or within the chapters of Algebra II.
Algebra I, Chapter 11 and Geometry, Chapter 12, both titled "Probability," are identical. There is no connection between the chapters to build coherence between the courses.
The Algebra I, Additional Lessons 6 and 7, Use Inverse Functions are not connected to Algebra II, Chapter 3, Rational Exponents and Rational Functions.
The Algebra I, Additional Lessons 12 and 13 (F-BF.2), Translate Between Recursive and Explicit Rules for Sequences are not connected to Algebra II Chapter 7, Sequences and Series.
In Algebra II, Chapter 1, students graph from standard form in Lesson 1 and from vertex form in Lesson 2. There is no connection to Algebra I, Chapter 9, Lesson 1 where students graph as transformations (with no b value) or Algebra I, Chapter 9, Lesson 2 where students graph from standard form.
The instructional materials reviewed for the Larson Traditional series partially meet expectations for explicitly identifying and building on knowledge from Grades 6-8 to the High School Standards. The Plan and Prepare sections are included at the beginning of each chapter in order to assess, practice, and build on standards from previous grades, but in these sections, standards from Grades 6-8 are not explicitly identified.
A 4-year scope and sequence is provided that identifies skills and concepts to be taught in a Pre-Algebra course, but these skills and concepts are also not identified as standards from Grades 6-8. Many of the skills and concepts from the Pre-Algebra course are designated as Reinforce and Maintain under the Algebra 1 column of the scope and sequence document. Examples of these skills include: evaluate expressions with integer exponents, solve problems with proportional relationships, order of operations, 1-step, 2-step, and multi-step equations, ordered pairs, origin, axes, and graphing in four quadrants. Multiplying and dividing decimals by whole numbers, decimals by decimals, fractions by whole numbers, and fractions by fractions are also included.
Examples, where standards from Grades 6-8 are not identified, include:
In Algebra I, Chapter 1, Lessons 1 and 3, students evaluate and write expressions that align to 5.OA.1 and 6.EE.1,2, 6. For example, in Lesson 1, students evaluate "15x when x = 4, w - 8 when w = 20, and 5 + m when m = 7". In Lesson 3, students write algebraic expressions given the following information: "8 more than a number x, the product of 6 and a number y, and the difference of 7 and a number n".
In Algebra I, Chapter 1, Lesson 2, students apply order of operations that align to 6.EE.1 and 7.EE.1,2. For example, students evaluate "13 - 8 + 3, $$8 - 2^2$$ and $$3\cdot6-4$$".
In Algebra I, Chapter 1, Lesson 4, students write equations and inequalities that align to 6.EE.9, and 7.EE.4. For example, "The sum of 42 and a number n is equal to 51; the difference of a number z and 11 is equal to 35; and the product of 4 and a number w is at most 51".
In Algebra I, Chapter 1, Lessons 7 and 8, students represent functions as rules, graphs, and tables that align to 8.F.1, 2.
In Algebra I, Chapter 2, Lesson 1, students find square roots and compare real numbers that align to 8.EE.2. A few of the problems include: "$$\sqrt{4}$$, $$-\sqrt{49}$$ and multiple choice, If x = 36, the value of which expression is a perfect square? A. $$\sqrt{x}+17$$ B. $$87-\sqrt{x}$$ C. $$5\cdot\sqrt{x}$$ D. $$5\cdot\sqrt{x}+2$$ ."
In Algebra I, Chapter 2, Lessons 2 and 3, students solve one-step and two-step equations that align to 6.EE.7 and 7.EE.4.
In Algebra I, Chapter 2, Lessons 4 and 5, students solve multi-step equations that align to 8.EE.7 and 7.RP.3.
In Geometry, Chapter 4, Investigating Geometry Activity before Lesson 1, students draw several triangles, tear off the corners of the triangles, and place the three angles from each triangle next to each other to form a straight angle (8.G.5).
In Geometry, Chapter 7, Lesson 1, students use the Pythagorean Theorem, apply it in real-world situations, and use it to find the distance between two points, which aligns to 8.G.B.
In the student materials, prerequisite content for each lesson is identified with "Before," and content from within the lesson is identified with "Now." Examples of the materials building on standards from Grades 6-8 through the "Before" and "Now" sections, even though the standards are not explicitly identified, include:
In Algebra I, Chapter 1, Lesson 1, Before: "You used whole numbers, fractions, decimals." Now: "You will evaluate algebraic expressions and use exponents."
In Algebra I, Chapter 1, Lesson 4, Before: "You translated verbal phrases into expressions" Now: "You will translate verbal sentences into equations or inequalities."
In Geometry, Chapter 4, Lesson 1, Before: "You classified angles and found their measures." Now: "You will classify triangles and find measures of their angles."
In Geometry, Chapter 7, Lesson 1, Before: "You learned about the relationships within triangles." Now: "You will find side lengths in right triangles."
The instructional materials reviewed for the Larson Traditional series explicitly identify the plus standards in the Correlation to Standards for Mathematical Content at the beginning of each course. In some instances, the plus standards are fully addressed and coherently support the mathematics which all students should study in order to be college and career ready, but for others, the materials do not fully address the plus standards.
The plus standards that are addressed include:
N-CN.8: In Algebra II, Chapter 2, Lesson 7, students are introduced to the Complex Conjugates Theorem and use it to write a polynomial function. Through error analysis problems and the assessment, students apply this standard in different situations.
N-VM.6: In Geometry, Chapter 9, Lesson 2, students add, subtract, and multiply matrices. Students also use matrices to represent and manipulate data in multiple contexts such as softball, computers, swimming, agriculture, and art.
N-VM.7: In Geometry, Chapter 9, Lesson 7, students use scalar multiplication to simplify a product. Students also use scale factors of 2, ½, 3, 6 to represent dilations.
N-VM.8: In Geometry, Chapter 9, Lesson 2, students add, subtract, and multiply matrices of appropriate dimensions.
N-VM.9: In Geometry, Chapter 9, Lesson 2, students determine when matrices cannot be multiplied based on the dimensions of the matrices. Students explore the Commutative Property of Multiplication, the Associative Property of Multiplication, and the Distributive Property for matrix multiplication. The three properties are each addressed in one problem.
A-APR.5: In Algebra II, Chapter 6, Lesson 1, students apply the Binomial Theorem and make connections to Pascal's Triangle.
A-APR.7: In Algebra II, Chapter 5, Lessons 4 and 5, students add, subtract, multiply, and divide rational expressions.
F-BF.5: In Algebra II, Chapter 4, students understand the inverse relationship between exponents and logarithms and solve a variety of problems involving logarithms and exponents.
F-TF.7: In Algebra II, Chapter 10, Lesson 4, students evaluate and interpret trigonometric functions using technology in "Using Alternative Methods" on pages 642-643. Students also write and use the trigonometric functions in the context of a buoy's displacement.
G-SRT.9: In Algebra II, Chapter 9, Lesson 5, students use the formula for the area of a triangle that includes the sine function to solve problems. On page 591, challenge problem 42, students derive the formula for the area of a triangle that includes the sine function.
G-SRT.11: In Algebra II, Chapter 9, Lessons 5 and 6, students apply the Law of Sines and Cosines to find unknown measures in both right and non-right triangles that are a part of various real-world situations.
G-C.4: In Geometry, Chapter 10, Lesson 4, students construct tangent lines from a point outside a given circle to the circle.
G-GPE.3: In Algebra II, Chapter 8, Lesson 4, students write the equation of an ellipse using the foci in challenge problem 47. In Chapter 8, Lesson 5, students write the equation of a hyperbola in standard form using the distance formula, the foci, and the difference in the distance from a point on the hyperbola to the foci in challenge problem 37.
S-CP.8, 9: In Algebra I, Chapter 11, Lesson 2, students use factorials and permutations to determine the number of ways letters can be arranged and how many different ways six friends can sit together in a row of six empty seats at a movie theater. In Lesson 3, students examine various scenarios to determine whether to use a permutation or a combination. In problem 26, a teacher is going to choose two students to represent a class, and students calculate the probabilities of you and your best friend being chosen and you being chosen first and your best friend being chosen second. In Lesson 5, students find probabilities of independent and dependent events and use conditional probability in a variety of ways. Students are provided two pieces of information and asked to find the missing probability if the events are independent. Students then complete a similar problem for dependent events.
The plus standards that are partially addressed include:
N-CN.3: In Algebra II, Chapter 1, Lesson 6, students are introduced to complex numbers and complex conjugates, and in Chapter 2, Lesson 7, students find the conjugate of a complex number. Students do not use conjugates to find moduli and quotients of complex numbers.
N-CN.4: In Algebra II, Chapter 1, Lesson 6, students represent complex numbers on the complex plane in rectangular form. Students do not represent complex numbers on the complex plane in polar form, and students do not explain why the rectangular and polar forms of a given complex number represent the same number.
N-CN.5: In Algebra II, Chapter 1, Lesson 6, problems 70-73, students verify that the given properties extend to complex numbers, the commutative property of multiplication, the distributive property, the associative property of multiplication, the commutative property of addition and the associative property of addition. Students do not connect to the modulus or degree arguments.
N-CN.6: In Algebra II, Chapter 1, Lesson 6, students read Absolute Value of a Complex Number in a Key Concept box and find the absolute value of various complex numbers and sums of two complex numbers. Students do not find the midpoint of the segment at any time.
N-CN.9: In Algebra II, Chapter 2, Lesson 7, students are introduced to the Fundamental Theorem of Algebra and use it to write the number of solutions, zeros, and equation for polynomial functions. Students do not make these connections for quadratic functions.
N-VM.12: In Geometry, Chapter 9, Lesson 3, students use 2x2 matrices for transformations in the plane, but students do not interpret the absolute value of the determinant in terms of area.
F-TF.3: In Algebra II, Chapter 9, Lesson 3, students use special triangles to determine values of sine, cosine, and tangent. On page 571, the materials demonstrate using the unit circle to evaluate trigonometric functions when $$\Theta=270^{\circ}$$. Students practice this skill in four problems, but students do not use the unit circle to express the values of sine, cosine, and tangent for $$\pi-x$$, $$\pi+x$$, and $$2\pi-x$$, in terms of their values for $$x$$.
F-TF.4: In Algebra II, Chapter 10, the period of each trigonometric function is explained using the graph of the functions, but there are no examples or problems that use the unit circle to explain the periodicity or symmetry of trigonometric functions.
F-TF.6: In Algebra II, Chapter 9, Lesson 4, students evaluate inverse trigonometric functions, but students do not restrict the domain of a trigonometric function so that its inverse can be constructed. Students are shown the inverse functions graphically, and the materials state that "domain restrictions allow the inverse sine, inverse cosine, and inverse tangent functions to be defined."
F-TF.9: In Algebra II, Chapter 10, Lesson 6, students are given the sum and difference formulas and then evaluate, rewrite in equivalent forms, and solve trigonometric equations. There is not a proof of the formulas or an opportunity for students to prove the formulas.
G-SRT.10: In Algebra II, Chapter 9, Lesson 6, students derive the Law of Cosines on page 597 in challenge Problem 42, but the proof of the Law of Sines is not found.
G-GMD.2: In Geometry, Chapter 11, Lesson 6, students use Cavalieri's principle to give an informal argument for the volume of cylinders but not for other solids. The volume of a sphere is addressed in Lesson 8, but Cavalieri's principle is not used in that lesson.
S-MD.1: In Algebra II, Chapter 6, Lesson 2, random variables are defined for a quantity of interest and graphed using histograms, but other types of graphical displays for probability distributions are not used.
S-MD.3, 4: In Algebra II, Chapter 6, Lesson 2, students construct and interpret binomial distributions and classify the distributions as symmetric or skewed. Students do not find the expected values of the distributions.
S-MD.6, 7: These standards are present in multiple locations throughout the series (Algebra II, Additional Lessons 10 and 11, Algebra I, Extension, pages 743-744, and Geometry, Extension, pages 847-848). However, since the materials include the same lesson in Algebra I, Algebra II, and Geometry, students use probabilities to make fair decisions once. Furthermore, Algebra I page 711, Problem 20 is repeated in Geometry, page 815, Problem 20.
The following plus standards are not addressed in the materials: N-VM.1, N-VM.2, N-VM.3, N-VM.4, N-VM.5, N-VM.10, S-MD.2, S-MD.5, A-REI.8, and A-REI.9.
Gateway Two Details
Materials were not reviewed for Gateway Two because materials did not meet or partially meet expectations for Gateway One
Gateway Three Details
This material was not reviewed for Gateway Three because it did not meet expectations for Gateways One and Two
Summary Gateway 1 Criterion 1a - 1f
Holt McDougal Algebra (Larson Series) 9780547647067 Houghton Mifflin Harcourt 2012
Holt McDougal Geometry (Larson Series) 9780547647081 Houghton Mifflin Harcourt 2012
Holt McDougal Algebra II (Larson Series) 9780547647111 Houghton Mifflin Harcourt 2012
Houghton Mifflin Harcourt Publisher Response | CommonCrawl |
Large-Scale Group Decision Making: A Systematic Review and a Critical Analysis
Diego García-Zamora, Álvaro Labella, Weiping Ding, Rosa M. Rodríguez, Luis Martínez
2022, 9(6): 949-966. doi: 10.1109/JAS.2022.105617
Society in the digital transformation era demands new decision schemes such as e-democracy or schemes based on social media. Such novel decision schemes require the participation of many experts/decision makers/stakeholders in the decision processes. As a result, large-scale group decision making (LSGDM) has attracted the attention of many researchers in the last decade, and many studies have been conducted in order to face the challenges associated with the topic. Therefore, this paper aims at reviewing the most relevant studies about LSGDM, identifying the most profitable research trends and analyzing them from a critical point of view. To do so, the Web of Science database has been consulted by using different searches. From these results, a total of 241 contributions were found, and a selection process regarding language, type of contribution and actual relation with the studied topic was then carried out. The 87 contributions finally selected for this review have been analyzed from four points of view that have been highly remarked upon in the topic: the preference structure in which decision-makers' opinions are modeled, the group decision rules used to define the decision making process, the techniques applied to verify the quality of these models, and their applications to real-world problem solving. Afterwards, a critical analysis of the main limitations of the existing proposals is developed. Finally, taking into account these limitations, new research lines for LSGDM are proposed and the main challenges are stressed.
D. García-Zamora, Á. Labella, W. Ding, R. M. Rodríguez, and L. Martínez, "Large-scale group decision making: A systematic review and a critical analysis," IEEE/CAA J. Autom. Sinica, vol. 9, no. 6, pp. 949–966, Jun. 2022. doi: 10.1109/JAS.2022.105617.
Wearable Robots for Human Underwater Movement Ability Enhancement: A Survey
Haisheng Xia, Muhammad Alamgeer Khan, Zhijun Li, MengChu Zhou
Underwater robot technology has shown impressive results in applications such as underwater resource detection. For underwater applications that require extremely high flexibility, robots cannot yet replace skills that require human dexterity, and thus humans are often required to directly perform most underwater operations. Wearable robots (exoskeletons) have shown outstanding results in enhancing human movement on land. They are expected to have great potential to enhance human underwater movement. The purpose of this survey is to analyze the state-of-the-art of underwater exoskeletons for human enhancement, and the applications focused on movement assistance while excluding underwater robotic devices that help to keep the temperature and pressure in the range that people can withstand. This work discusses the challenges of existing exoskeletons for human underwater movement assistance, which mainly include human underwater motion intention perception, underwater exoskeleton modeling and human-cooperative control. Future research should focus on developing novel wearable robotic structures for underwater motion assistance, exploiting advanced sensors and fusion algorithms for human underwater motion intention perception, building up a dynamic model of underwater exoskeletons and exploring human-in-the-loop control for them.
H. S. Xia, M. A. Khan, Z. J. Li, and M. C. Zhou, "Wearable robots for human underwater movement ability enhancement: A survey," IEEE/CAA J. Autom. Sinica, vol. 9, no. 6, pp. 967–977, Jun. 2022. doi: 10.1109/JAS.2022.105620.
Variance-Constrained Filtering Fusion for Nonlinear Cyber-Physical Systems With the Denial-of-Service Attacks and Stochastic Communication Protocol
Hang Geng, Zidong Wang, Yun Chen, Xiaojian Yi, Yuhua Cheng
In this paper, a new filtering fusion problem is studied for nonlinear cyber-physical systems under error-variance constraints and denial-of-service attacks. To prevent data collision and reduce communication cost, the stochastic communication protocol is adopted in the sensor-to-filter channels to regulate the transmission order of sensors. Each sensor is allowed to enter the network according to the transmission priority decided by a set of independent and identically-distributed random variables. From the defenders' view, the occurrence of the denial-of-service attack is governed by the randomly Bernoulli-distributed sequence. At the local filtering stage, a set of variance-constrained local filters are designed where the upper bounds (on the filtering error covariances) are first acquired and later minimized by appropriately designing filter parameters. At the fusion stage, all local estimates and error covariances are combined to develop a variance-constrained fusion estimator under the federated fusion rule. Furthermore, the performance of the fusion estimator is examined by studying the boundedness of the fused error covariance. A simulation example is finally presented to demonstrate the effectiveness of the proposed fusion estimator.
H. Geng, Z. D. Wang, Y. Chen, X. J. Yi, and Y. H. Cheng, "Variance-constrained filtering fusion for nonlinear cyber-physical systems with the denial-of-service attacks and stochastic communication protocol," IEEE/CAA J. Autom. Sinica, vol. 9, no. 6, pp. 978–989, Jun. 2022. doi: 10.1109/JAS.2022.105623.
A Scalable Adaptive Approach to Multi-Vehicle Formation Control with Obstacle Avoidance
Xiaohua Ge, Qing-Long Han, Jun Wang, Xian-Ming Zhang
2022, 9(6): 990-1004. doi: 10.1109/JAS.2021.1004263
This paper deals with the problem of distributed formation tracking control and obstacle avoidance of multi-vehicle systems (MVSs) in complex obstacle-laden environments. The MVS under consideration consists of a leader vehicle with an unknown control input and a group of follower vehicles, connected via a directed interaction topology, subject to simultaneous unknown heterogeneous nonlinearities and external disturbances. The central aim is to achieve effective and collision-free formation tracking control for the nonlinear and uncertain MVS with obstacles encountered in formation maneuvering, while not demanding global information of the interaction topology. Toward this goal, a radial basis function neural network is used to model the unknown nonlinearity of vehicle dynamics in each vehicle and repulsive potentials are employed for obstacle avoidance. Furthermore, a scalable distributed adaptive formation tracking control protocol with a built-in obstacle avoidance mechanism is developed. It is proved that, with the proposed protocol, the resulting formation tracking errors are uniformly ultimately bounded and obstacle collision avoidance is guaranteed. Comprehensive simulation results are elaborated to substantiate the effectiveness and the promising collision avoidance performance of the proposed scalable adaptive formation control approach.
X. Ge, Q.-L. Han, J. Wang, and X.-M. Zhang, "A scalable adaptive approach to multi-vehicle formation control with obstacle avoidance," IEEE/CAA J. Autom. Sinica, vol. 9, no. 6, pp. 990–1004, Jun. 2022. doi: 10.1109/JAS.2021.1004263.
Fixed-Time Lyapunov Criteria and State-Feedback Controller Design for Stochastic Nonlinear Systems
Huifang Min, Shengyuan Xu, Baoyong Zhang, Qian Ma, Deming Yuan
This paper investigates the fixed-time stability theorem and state-feedback controller design for stochastic nonlinear systems. We propose an improved fixed-time Lyapunov theorem with a more rigorous and reasonable proof procedure. In particular, an important corollary is obtained, which can give a less conservative upper-bound estimate of the settling time. Based on the backstepping technique and the addition of a power integrator method, a state-feedback controller is skillfully designed for a class of stochastic nonlinear systems. It is proved that the proposed controller can render the closed-loop system fixed-time stable in probability with the help of the proposed fixed-time stability criteria. Finally, the effectiveness of the proposed controller is demonstrated by simulation examples and comparisons.
H. F. Min, S. Y. Xu, B. Y. Zhang, Q. Ma, and D. M. Yuan, "Fixed-time Lyapunov criteria and state-feedback controller design for stochastic nonlinear systems," IEEE/CAA J. Autom. Sinica, vol. 9, no. 6, pp. 1005–1014, Jun. 2022. doi: 10.1109/JAS.2022.105539.
A Telepresence-Guaranteed Control Scheme for Teleoperation Applications of Transferring Weight-Unknown Objects
Jinfei Hu, Zheng Chen, Xin Ma, Han Lai, Bin Yao
Currently, most teleoperation work is focusing on scenarios where slave robots interact with unknown environments. However, in some fields such as medical robots or rescue robots, the other typical teleoperation application is precise object transportation. Generally, the object's weight is unknown yet essential for both accurate control of the slave robot and intuitive perception of the human operator. However, due to high cost and limited installation space, it is unreliable to employ a force sensor to directly measure the weight. Therefore, in this paper, a control scheme free of force sensor is proposed for teleoperation robots to transfer a weight-unknown object accurately. In this scheme, the workspace mapping between master and slave robot is firstly established, based on which, the operator can generate command trajectory on-line by operating the master robot. Then, a slave controller is designed to follow the master command closely and estimate the object's weight rapidly, accurately and robust to unmodeled uncertainties. Finally, for the sake of telepresence, a master controller is designed to generate force feedback to reproduce the estimated weight of the object. In the end, comparative experiments show that the proposed scheme can achieve better control accuracy and telepresence, with accurate force feedback generated in only 500 ms.
J. F. Hu, Z. Chen, X. Ma, H. Lai, and B. Yao, "A telepresence-guaranteed control scheme for teleoperation applications of transferring weight-unknown objects," IEEE/CAA J. Autom. Sinica, vol. 9, no. 6, pp. 1015–1025, Jun. 2022. doi: 10.1109/JAS.2022.105626.
Fuzzy Set-Membership Filtering for Discrete-Time Nonlinear Systems
Jingyang Mao, Xiangyu Meng, Derui Ding
In this article, the problem of state estimation is addressed for discrete-time nonlinear systems subject to additive unknown-but-bounded noises by using fuzzy set-membership filtering. First, an improved T-S fuzzy model is introduced to achieve highly accurate approximation via an affine model under each fuzzy rule. Then, compared to traditional prediction-based ones, two types of fuzzy set-membership filters are proposed to effectively improve filtering performance, where the structure of both filters consists of two parts: prediction and filtering. Under the locally Lipschitz continuous condition of membership functions, unknown membership values in the estimation error system can be treated as multiplicative noises with respect to the estimation error. Real-time recursive algorithms are given to find the minimal ellipsoid containing the true state. Finally, the proposed optimization approaches are validated via numerical simulations of a one-dimensional and a three-dimensional discrete-time nonlinear system.
J. Mao, X. Meng, and D. Ding, "Fuzzy set-membership filtering for discrete-time nonlinear systems," IEEE/CAA J. Autom. Sinica, vol. 9, no. 6, pp. 1026–1036, Jun. 2022. doi: 10.1109/JAS.2022.105416.
Distributed Fault-Tolerant Consensus Tracking of Multi-Agent Systems Under Cyber-Attacks
Chun Liu, Bin Jiang, Xiaofan Wang, Huiliao Yang, Shaorong Xie
This paper investigates the distributed fault-tolerant consensus tracking problem of nonlinear multi-agent systems with general incipient and abrupt time-varying actuator faults under cyber-attacks. First, a decentralized unknown input observer is established to estimate relative states and actuator faults. Second, the estimated and output neighboring information is combined with distributed fault-tolerant consensus tracking controllers. Criteria of reaching leader-following exponential consensus tracking of multi-agent systems under both connectivity-maintained and connectivity-mixed attacks are derived with average dwelling time, attack frequency, and attack activation rate technique, respectively. Simulation example verifies the effectiveness of the fault-tolerant consensus tracking algorithm.
C. Liu, B. Jiang, X. F. Wang, H. L. Yang, and S. R. Xie, "Distributed fault-tolerant consensus tracking of multi-agent systems under cyber-attacks," IEEE/CAA J. Autom. Sinica, vol. 9, no. 6, pp. 1037–1048, Jun. 2022. doi: 10.1109/JAS.2022.105419.
Exponential Continuous Non-Parametric Neural Identifier With Predefined Convergence Velocity
Mariana Ballesteros, Rita Q. Fuentes-Aguilar, Isaac Chairez
This paper addresses the design of an exponential function-based learning law for artificial neural networks (ANNs) with continuous dynamics. The ANN structure is used to obtain a non-parametric model of systems with uncertainties, which are described by a set of nonlinear ordinary differential equations. Two novel adaptive algorithms with predefined exponential convergence rate adjust the weights of the ANN. The first algorithm includes an adaptive gain depending on the identification error, which accelerates the convergence of the weights and promotes a faster convergence between the states of the uncertain system and the trajectories of the neural identifier. The second approach uses a time-dependent sigmoidal gain that forces the convergence of the identification error to an invariant set characterized by an ellipsoid. The generalized volume of this ellipsoid depends on the upper bounds of uncertainties, perturbations and modeling errors. The application of the invariant ellipsoid method yields an algorithm that reduces the volume of the convergence region for the identification error. Both adaptive algorithms are derived from the application of a non-standard exponential dependent function and an associated controlled Lyapunov function. Numerical examples demonstrate the improvements achieved by the algorithms introduced in this study by comparing their convergence settings against classical schemes with non-exponential continuous learning methods. The proposed identifiers outperform the classical identifier, achieving a faster convergence to an invariant set of smaller dimensions.
M. Ballesteros, R. Q. Fuentes-Aguilar, and I. Chairez, "Exponential continuous non-parametric neural identifier with predefined convergence velocity," IEEE/CAA J. Autom. Sinica, vol. 9, no. 6, pp. 1049–1060, Jun. 2022. doi: 10.1109/JAS.2022.105650.
Exploring Image Generation for UAV Change Detection
Xuan Li, Haibin Duan, Yonglin Tian, Fei-Yue Wang
Change detection (CD) is becoming indispensable for unmanned aerial vehicles (UAVs), especially in the domain of water landing, search and rescue. However, even the most advanced models require large amounts of data for model training and testing. Therefore, sufficient labeled images with different imaging conditions are needed. Inspired by computer graphics, we present a cloning method to simulate inland-water scenes and collect an auto-labeled simulated dataset. The simulated dataset consists of six challenges to test the effects of dynamic background, weather, and noise on change detection models. Then, we propose an image translation framework that translates simulated images to synthetic images. This framework uses shared parameters (encoder and generator) and 22 × 22 receptive fields (discriminator) to generate realistic synthetic images as model training sets. The experimental results indicate that: 1) different imaging challenges affect the performance of change detection models; 2) compared with simulated images, synthetic images can effectively improve the accuracy of supervised models.
X. Li, H. B. Duan, Y. L. Tian, and F.-Y. Wang, "Exploring image generation for UAV change detection," IEEE/CAA J. Autom. Sinica, vol. 9, no. 6, pp. 1061–1072, Jun. 2022. doi: 10.1109/JAS.2022.105629.
Adaptive Control With Guaranteed Transient Behavior and Zero Steady-State Error for Systems With Time-Varying Parameters
Hefu Ye, Yongduan Song
It is nontrivial to achieve global zero-error regulation for uncertain nonlinear systems. The underlying problem becomes even more challenging if mismatched uncertainties and unknown time-varying control gain are involved, yet certain performance specifications are also pursued. In this work, we present an adaptive control method, which, without the persistent excitation (PE) condition, is able to ensure global zero-error regulation with guaranteed output performance for parametric strict-feedback systems involving fast time-varying parameters in the feedback path and input path. The development of our control scheme benefits from generalized $\boldsymbol{t}$-dependent and $\boldsymbol{x}$-dependent functions, a novel coordinate transformation and "congelation of variables" method. Both theoretical analysis and numerical simulation verify the effectiveness and benefits of the proposed method.
H. F. Ye and Y. D. Song, "Adaptive control with guaranteed transient behavior and zero steady-state error for systems with time-varying parameters," IEEE/CAA J. Autom. Sinica, vol. 9, no. 6, pp. 1073–1082, Jun. 2022. doi: 10.1109/JAS.2022.105608.
A Triangulation-Based Visual Localization for Field Robots
James Liang, Yuxing Wang, Yingjie Chen, Baijian Yang, Dongfang Liu
J. Liang, Y. X. Wang, Y. J. Chen, B. J. Yang, and D. F. Liu, "A triangulation-based visual localization for field robots," IEEE/CAA J. Autom. Sinica, vol. 9, no. 6, pp. 1083–1086, Jun. 2022. doi: 10.1109/JAS.2022.105632.
Loop Closure Detection With Reweighting NetVLAD and Local Motion and Structure Consensus
Kaining Zhang, Jiayi Ma, Junjun Jiang
K. N. Zhang, J. Y. Ma, and J. J. Jiang, "Loop closure detection with reweighting NetVLAD and local motion and structure consensus," IEEE/CAA J. Autom. Sinica, vol. 9, no. 6, pp. 1087–1090, Jun. 2022. doi: 10.1109/JAS.2022.105635.
Multiview Locally Linear Embedding for Spectral-Spatial Dimensionality Reduction of Hyperspectral Imagery
Haochen Ji, Zongyu Zuo
H. C. Ji and Z. Y. Zuo, "Multiview locally linear embedding for spectral-spatial dimensionality reduction of hyperspectral imagery," IEEE/CAA J. Autom. Sinica, vol. 9, no. 6, pp. 1091–1094, Jun. 2022. doi: 10.1109/JAS.2022.105638.
A Linear Algorithm for Quantized Event-Triggered Optimization Over Directed Networks
Yang Yuan, Liyu Shi, Wangli He
Y. Yuan, L. Y. Shi, and W. L. He, "A linear algorithm for quantized event-triggered optimization over directed networks," IEEE/CAA J. Autom. Sinica, vol. 9, no. 6, pp. 1095–1098, Jun. 2022. doi: 10.1109/JAS.2022.105614.
Attack-Resilient Control Against FDI Attacks in Cyber-Physical Systems
Bo Chen, Yawen Tan, Zhe Sun, Li Yu
B. Chen, Y. W. Tan, Z. Sun, and L. Yu, "Attack-resilient control against FDI attacks in cyber-physical systems," IEEE/CAA J. Autom. Sinica, vol. 9, no. 6, pp. 1099–1102, Jun. 2022. doi: 10.1109/JAS.2022.105641.
Encoding-Decoding-Based Recursive Filtering for Fractional-Order Systems
Bo Jiang, Hongli Dong, Yuxuan Shen, Shujuan Mu
B. Jiang, H. L. Dong, Y. X. Shen, and S. J. Mu, "Encoding-decoding-based recursive filtering for fractional-order systems," IEEE/CAA J. Autom. Sinica, vol. 9, no. 6, pp. 1103–1106, Jun. 2022. doi: 10.1109/JAS.2022.105644.
Model Controlled Prediction: A Reciprocal Alternative of Model Predictive Control
Shen Li, Yang Liu, Xiaobo Qu
S. Li, Y. Liu, and X. B. Qu, "Model controlled prediction: A reciprocal alternative of model predictive control," IEEE/CAA J. Autom. Sinica, vol. 9, no. 6, pp. 1107–1110, Jun. 2022. doi: 10.1109/JAS.2022.105611.
Part Decomposition and Refinement Network for Human Parsing
Lu Yang, Zhiwei Liu, Tianfei Zhou, Qing Song
L. Yang, Z. W. Liu, T. F. Zhou, and Q. Song, "Part decomposition and refinement network for human parsing," IEEE/CAA J. Autom. Sinica, vol. 9, no. 6, pp. 1111–1114, Jun. 2022. doi: 10.1109/JAS.2022.105647.
After a gymnastics meet, each gymnast shook hands once with every gymnast on every team (except herself). Afterwards, a coach came down and only shook hands with each gymnast from her own team. There were a total of 281 handshakes. What is the fewest number of handshakes the coach could have participated in?
The number of gymnasts is some integer $n$, so the number of gymnast-gymnast handshakes is ${n \choose 2}$. Also, the coach must participate in some integer number $k < n$ of handshakes. So, ${n \choose 2} + k = 281$. If we want to minimize $k$, we need the maximal $n$ such that ${n \choose 2} \le 281$, which implies $\frac{n(n-1)}{2} \le 281$, or $n^2 - n - 562 \le 0$. The maximal such $n$ is 24, since ${24 \choose 2} = 276 \le 281$ while ${25 \choose 2} = 300 > 281$. So, $k = 281 - {24 \choose 2} = 281 - 12 \cdot 23 = 281 - 276 = \boxed{5}$.
In triangle $ABC$, medians $\overline{AD}$ and $\overline{BE}$ are perpendicular. If $AC = 22$ and $BC = 31$, then find $AB$.
We have that $D$ and $E$ are the midpoints of $\overline{BC}$ and $\overline{AC}$, respectively, so
\[\overrightarrow{D} = \frac{\overrightarrow{B} + \overrightarrow{C}}{2} \quad \text{and} \quad \overrightarrow{E} = \frac{\overrightarrow{A} + \overrightarrow{C}}{2}.\][asy]
unitsize(0.2 cm);
pair A, B, C, D, E;
B = (0,0);
C = (31,0);
A = intersectionpoint(arc(B,17,0,180),arc(C,22,0,180));
D = (B + C)/2;
E = (A + C)/2;
draw(A--B--C--cycle);
draw(A--D);
draw(B--E);
label("$A$", A, N);
label("$B$", B, SW);
label("$C$", C, SE);
label("$D$", D, S);
label("$E$", E, NE);
[/asy]
Also, $\overrightarrow{AD} \cdot \overrightarrow{BE} = 0$, or
\[\left( \overrightarrow{A} - \frac{\overrightarrow{B} + \overrightarrow{C}}{2} \right) \cdot \left( \overrightarrow{B} - \frac{\overrightarrow{A} + \overrightarrow{C}}{2} \right) = 0.\]Multiplying each factor by 2 to get rid of fractions, we get
\[(2 \overrightarrow{A} - \overrightarrow{B} - \overrightarrow{C}) \cdot (2 \overrightarrow{B} - \overrightarrow{A} - \overrightarrow{C}) = 0.\]Expanding the dot product, we get
\[-2 \overrightarrow{A} \cdot \overrightarrow{A} - 2 \overrightarrow{B} \cdot \overrightarrow{B} + \overrightarrow{C} \cdot \overrightarrow{C} + 5 \overrightarrow{A} \cdot \overrightarrow{B} - \overrightarrow{A} \cdot \overrightarrow{C} - \overrightarrow{B} \cdot \overrightarrow{C} = 0.\]Setting the circumcenter of triangle $ABC$ to be the origin, and using what we know about these dot products, like $\overrightarrow{A} \cdot \overrightarrow{B} = R^2 - \frac{c^2}{2}$, we get
\[-2R^2 - 2R^2 + R^2 + 5 \left( R^2 - \frac{c^2}{2} \right) - \left( R^2 - \frac{b^2}{2} \right) - \left( R^2 - \frac{a^2}{2} \right) = 0.\]This simplifies to $a^2 + b^2 = 5c^2$.
We are given that $a = 31$ and $b = 22$, so $5c^2 = 31^2 + 22^2 = 1445$, and $c = \boxed{17}$. | Math Dataset |
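The dot product identity quoted in the solution follows directly from the choice of the circumcenter as the origin: since $\overrightarrow{A} \cdot \overrightarrow{A} = \overrightarrow{B} \cdot \overrightarrow{B} = R^2$ and $AB = c$,
\[c^2 = (\overrightarrow{A} - \overrightarrow{B}) \cdot (\overrightarrow{A} - \overrightarrow{B}) = \overrightarrow{A} \cdot \overrightarrow{A} - 2 \overrightarrow{A} \cdot \overrightarrow{B} + \overrightarrow{B} \cdot \overrightarrow{B} = 2R^2 - 2 \overrightarrow{A} \cdot \overrightarrow{B},\]which rearranges to $\overrightarrow{A} \cdot \overrightarrow{B} = R^2 - \frac{c^2}{2}$; the analogous identities for $\overrightarrow{A} \cdot \overrightarrow{C}$ and $\overrightarrow{B} \cdot \overrightarrow{C}$ involve $b$ and $a$, respectively.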
Detecting central fixation by means of artificial neural networks in a pediatric vision screener using retinal birefringence scanning
Boris I. Gramatikov1
Reliable detection of central fixation and eye alignment is essential in the diagnosis of amblyopia ("lazy eye"), which can lead to blindness. Our lab has developed and reported earlier a pediatric vision screener that performs scanning of the retina around the fovea and analyzes changes in the polarization state of light as the scan progresses. Depending on the direction of gaze and the instrument design, the screener produces several signal frequencies that can be utilized in the detection of central fixation. The objective of this study was to compare artificial neural networks with classical statistical methods, with respect to their ability to detect central fixation reliably.
A classical feedforward, pattern recognition, two-layer neural network architecture was used, consisting of one hidden layer and one output layer. The network has four inputs, representing normalized spectral powers at four signal frequencies generated during retinal birefringence scanning. The hidden layer contains four neurons. The output suggests presence or absence of central fixation. Backpropagation was used to train the network, using the gradient descent algorithm and the cross-entropy error as the performance function. The network was trained, validated and tested on a set of controlled calibration data obtained from 600 measurements from ten eyes in a previous study, and was additionally tested on a clinical set of 78 eyes, independently diagnosed by an ophthalmologist.
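The description above maps onto a very small network. The sketch below (not the author's code) shows one way such a classifier could look, with 4 inputs, 4 hidden neurons, and a single output trained by backpropagation with gradient descent on the cross-entropy error; the tanh hidden activation, sigmoid output, learning rate, epoch count and the synthetic data are illustrative assumptions, since the paper specifies only the layer sizes, the training algorithm and the performance function.

```python
# Minimal sketch of the 4-input, 4-hidden-neuron, single-output classifier.
# Assumed: tanh hidden units, sigmoid output, plain gradient descent on the
# cross-entropy error; synthetic data only for illustration.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def init_params(n_in=4, n_hidden=4):
    return {"W1": rng.normal(0.0, 0.5, (n_hidden, n_in)),
            "b1": np.zeros(n_hidden),
            "W2": rng.normal(0.0, 0.5, n_hidden),
            "b2": 0.0}

def forward(p, X):
    H = np.tanh(X @ p["W1"].T + p["b1"])    # hidden layer activations
    y = sigmoid(H @ p["W2"] + p["b2"])      # estimated P(central fixation)
    return H, y

def train(p, X, t, lr=0.5, epochs=5000):
    n = len(t)
    for _ in range(epochs):
        H, y = forward(p, X)
        d_out = (y - t) / n                             # dE/dz for cross-entropy + sigmoid
        d_hid = np.outer(d_out, p["W2"]) * (1 - H**2)   # backpropagated hidden error
        p["W2"] -= lr * (d_out @ H)
        p["b2"] -= lr * d_out.sum()
        p["W1"] -= lr * (d_hid.T @ X)
        p["b1"] -= lr * d_hid.sum(axis=0)
    return p

# Toy illustration: central fixation -> power concentrated at 2.5 fs / 6.5 fs,
# paracentral fixation -> power at 3.5 fs / 5.5 fs (synthetic data, not real).
X = np.vstack([rng.normal([0.8, 0.1, 0.1, 0.7], 0.05, (50, 4)),
               rng.normal([0.1, 0.8, 0.7, 0.1], 0.05, (50, 4))])
t = np.concatenate([np.ones(50), np.zeros(50)])
params = train(init_params(), X, t)
_, y_hat = forward(params, X)
print("training accuracy:", np.mean((y_hat > 0.5) == t))
```

In the instrument itself, the trained weights would simply be applied to the four normalized spectral powers measured from each eye, and the thresholded output would be read as presence or absence of central fixation.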
In the first part of this study, a neural network was designed around the calibration set. With a proper architecture and training, the network provided performance that was comparable to classical statistical methods, allowing perfect separation between the central and paracentral fixation data, with both the sensitivity and the specificity of the instrument being 100%. In the second part of the study, the neural network was applied to the clinical data. It allowed reliable separation between normal subjects and affected subjects, its accuracy again matching that of the statistical methods.
With a proper choice of a neural network architecture and a good, uncontaminated training data set, the artificial neural network can be an efficient classification tool for detecting central fixation based on retinal birefringence scanning.
Amblyopia ("lazy eye") is poor development of vision from prolonged suppression in an otherwise normal eye, and is a major public health problem, with impairment estimated to afflict up to 3.6% of children—and more in medically underserved populations [1]. Reliable detection of eye alignment with central fixation (CF) is essential in the diagnosis of amblyopia. Further, there is a need for a commercially available and widely accepted automated screening instrument that can reliably detect strabismus and defocus in young subjects [2]. Our laboratory has been developing novel technologies for detecting accurate eye alignment directly, by exploiting the birefringence (a property that changes the polarization state of light) of the uniquely arranged nerve fibers (Henle fibers) surrounding the fovea. We employed retinal birefringence scanning (RBS), a technique that uses the changes in the polarization of light returning from the eye, to detect the projection into space of the array of Henle fibers surrounding the fovea [3–5]. In RBS, polarized near-infrared light is directed onto the retina in a circular scan, with a fixation point in the center, and the polarization-related changes in light retro-reflected from the ocular fundus are analyzed by means of differential polarization detection. Due to the radially symmetric arrangement of the birefringent Henle fibers, a characteristic frequency appears in the obtained periodic signal when the scan is centered on the fovea, indicating central fixation. By analyzing frequencies in the RBS signal from both eyes simultaneously, the goodness of eye alignment can be measured, and thus strabismus (misaligned eyes) can be detected. RBS technology is the only known technology that can detect central fixation remotely using true anatomical information (position of the fovea). An early version of the "pediatric vision screener" (PVS) was designed in our lab and then tested at the Boston Children's Hospital, [6–10]. This prototype device has been developed into a commercial instrument that detects eye alignment (REBIScan, Boston, MA, USA).
Meanwhile, development of the RBS technology has continued in our lab, resulting in a series of central fixation detecting devices with no moving parts [11, 12], devices for continuous monitoring of fixation [13], a device for biometric purposes [14], and ultimately an improved PVS that combines "wave-plate-enhanced RBS" [15], or "polarization-modulated RBS" [16, 17], for detecting strabismus, with added technology for assessing proper focus of both eyes simultaneously. Polarization-modulated RBS is an optimized upgrade of RBS, based upon our theoretical and experimental research and computer modeling, using a spinning half wave plate (HWP) and a fixed wave plate (WP) to yield high and uniform signals across the entire population. In addition, using a technique named "phase-shift-subtraction" (PhSS), the new PVS eliminated the need for initial background measurement [15–17].
Depending on the direction of gaze and the design of the instrument, the screener produces several signal frequencies that can be utilized in the detection of central fixation. Using a computer model involving all polarization-changing components of the system, including the Henle fibers and the cornea, we found that by spinning the HWP 9/16-ths as fast as the circular scan, strong signals are generated that are odd multiples of half of the scanning frequency [17]. With central fixation, two frequency components predominate in the RBS signal: 2.5 or 6.5 times the scanning frequency fs, depending on the corneal birefringence. With paracentral fixation, these frequencies practically disappear, being replaced by 3.5 fs and 5.5 fs. Therefore, the relative strengths of these four frequency components in the RBS signal distinguish between central and paracentral fixation. In addition, there is a strong, spin-generated 4.5 fs frequency in our RBS signal that is practically independent of corneal birefringence and of the position of the scanning circle with respect to the center of the fovea [16]. This "spin-generated frequency" is thus well suited for normalization of the signal, in order to limit the subject-to-subject variability.
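To make the frequency analysis concrete, the short MATLAB sketch below illustrates how spectral powers at these five frequencies could be extracted from a digitized RBS signal. It is only an illustration, not the instrument's actual code: the variable names s (one second of signal), Fs (sampling rate in Hz) and fscan (scanning frequency in Hz) are assumptions, and the instrument may compute the powers differently.

% Illustrative only: powers at 2.5, 3.5, 4.5, 5.5 and 6.5 times the scanning
% frequency, from a 1-s signal s sampled at Fs Hz (s, Fs, fscan are assumed).
N = numel(s);
S = fft(s);
P = abs(S(1:floor(N/2))).^2 / N;            % one-sided power spectrum
f = (0:floor(N/2)-1) * Fs / N;              % frequency axis in Hz
harmonics = [2.5 3.5 4.5 5.5 6.5] * fscan;  % frequencies of interest
pw = zeros(size(harmonics));
for i = 1:numel(harmonics)
    [~, idx] = min(abs(f - harmonics(i)));  % nearest FFT bin
    pw(i) = P(idx);
end
% pw now holds P2.5, P3.5, P4.5, P5.5 and P6.5 for this measurement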
The PVS instrument design has been described in detail elsewhere, and encouraging results have been reported [16–18]. We validated the performance of this research instrument on an initial group of young test subjects—18 patients with known vision abnormalities (8 male and 10 female), ages 4–25 (only one above 18), and 19 control subjects with proven lack of vision issues. Four statistical methods were used to derive decision-making rules that would best separate patients with abnormalities from controls. Method 1 (termed "Simple threshold") employed gradual changing of an adaptive threshold θ for the normalized combined power at CF frequencies, (P2.5 + P6.5)/P4.5, in order to minimize the classification errors. Methods 2, 3 and 4 employed linear discriminant analysis, basically using a linear combination of respectively 2, 3 or 4 features (in our case normalized signal powers at different frequencies) to separate the two classes (CF vs para-CF). Ultimately, classification is based on a linear classifier involving the coefficients of a 2-, 3- or 4-way discriminant function. Sensitivity and specificity were calculated for each method [18]. The discriminant function methods provided excellent specificity of 100%, but relatively low sensitivities of 90% or below. This meant that although all detected abnormalities would be true, at least 10% of the children with strabismus would be missed. For this reason we chose the "Simple threshold" (Method 1), which on the calibration data gave a sensitivity of 99.17% and a specificity of 96.25%.
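For readers who prefer code to prose, the following hedged MATLAB sketch restates Method 1. The vectors P25, P65 and P45 (one element per measurement), the labels isCF_true and the candidate threshold grid are illustrative names and choices, not the screener's actual variables; only the decision rule (P2.5 + P6.5)/P4.5 ≥ θ comes from the text.

% "Simple threshold" (Method 1) as a sketch: sweep theta to minimize errors.
score  = (P25 + P65) ./ P45;          % normalized combined power at CF frequencies
thetas = linspace(min(score), max(score), 500);
err = zeros(size(thetas));
for k = 1:numel(thetas)
    pred = score >= thetas(k);        % 1 = central fixation, 0 = paracentral
    err(k) = sum(pred ~= isCF_true);  % isCF_true: known 0/1 labels (assumed)
end
[~, best] = min(err);
theta = thetas(best);                 % adaptive threshold used for classification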
The objective of the present study was, based on the characteristic signal frequencies mentioned above, to develop and test an artificial neural network (ANN) for the detection of central fixation, and to compare it with the classical statistical methods, reported earlier.
Artificial neural networks have quite often been used for diagnostic purposes in the past two–three decades. Applications include the diagnosis of myocardial infarction [19], waveform analysis of biomedical signals, medical imaging analysis and outcome prediction [20], automatic detection of diabetic retinopathy [21], nephritis and heart disease [22], biochemical data and heart sounds for valve diagnostics [23], and many more. There are reports in the literature relating to the use of artificial neural networks for eye tracking [24–28]. They are mostly used as part of a human–computer interface, or as an aid for the handicapped, and work typically with a camera which tracks the pupil using either infrared or visible light images of the pupil. Through proper training, neural networks can provide precise individual calibration. Such networks often employ on the order of 20–200 dimensions (hidden neurons), and can require significant computing time. They are accurate to approximately 0.75°. In our application, the signals are available as spectral powers at just a few frequencies, generated upon retinal birefringence scanning around the fovea. They allow the detection of central fixation with much higher precision (0.1°) without the need for full-range eye tracking or calibration, while allowing some head mobility. Applications of artificial neural networks for this purpose are unknown to the author.
The optics, electronics, and signal analysis of the PVS have been reported in more detail previously [16–18]. This present work focuses on the use of artificial neural networks as an alternative to classical statistical methods. The goal of this study was, if possible, to improve the classification algorithms, as well as to validate the performance of the research instrument on the same group of young test subjects that was used in the previous study, with the addition of two more subjects. All subjects' data were analysed with both the methods from the previous paper, and the neural networks method reported here. For more detail on the human subject data, please see the "Data" section below. The neural network performance is compared with the four statistical methods that were applied to the same dataset earlier, and the ability to separate patients with abnormalities from controls was investigated.
Artificial neural networks have been widely used, and the related theory has matured in the last three decades [29–35]. Feedforward neural networks (FNNs) are static, i.e. networks with no feedback elements and no delays. They are widely used to solve complex problems in pattern classification, system modeling and identification, and non-linear signal processing, and in analyzing non-linear multivariate data. One of the characteristics of the FNN is its learning (or training) ability [36]. It has a learning process in both hidden and output layers. By training, FNNs can give correct answers not only for learned examples, but also for models similar to the learned examples, showing their strong associative ability and rational ability, which are suitable for solving large, nonlinear, and complex classification and function approximation problems. The classical method for training FNNs is the backpropagation (BP) algorithm [31], which is based on the gradient descent optimization technique.
Many tools have been developed for creating and testing ANN networks. Among them, probably the most significant one is the Neural Networks Toolbox for MATLAB from MathWorks, Inc. [37] The author employed this toolbox for creating, training, and testing the network with both calibration and clinical data. Another useful tool widely used in the field is the Netlab simulation software, designed to provide the central tools necessary for the simulation of theoretically well-founded neural network algorithms for use in teaching, research, and applications development. It consists of a library of MATLAB functions and scripts based on the approach and techniques described in the book Neural Networks for Pattern Recognition by Dr. Christopher Bishop [38], Department of Computer Science and Applied Mathematics at Aston University, Birmingham, UK.
Neural network architecture
For the present application, several ANN architectures were tested. To avoid overfitting (explained later under Generalization), a relatively simple architecture was selected (Fig. 1), consisting of one hidden layer and one output layer. This two-layer network has an input p containing four inputs (p1–p4), representing the normalized RBS spectral powers, respectively P2.5/P4.5, P3.5/P4.5, P5.5/P4.5, and P6.5/P4.5, that are generated during retinal birefringence scanning. The hidden layer contains four neurons. Each neuron is connected to each of the inputs through the input weight matrix IW:
Neural network architecture. A two-layer architecture has been employed, consisting of one hidden layer and one output layer. The network has an input p containing four inputs (p1–p4), representing the four normalized RBS spectral powers, respectively P2.5/P4.5, P3.5/P4.5, P5.5/P4.5, and P6.5/P4.5. The hidden layer contains four neurons. The output of the net signals presence or absence of central fixation
$${\mathbf{IW}} = \left[ {\begin{array}{*{20}c} {iw_{1,1} } & \cdots & {iw_{1,4} } \\ \vdots & \ddots & \vdots \\ {iw_{4,1} } & \cdots & {iw_{4,4} } \\ \end{array} } \right]$$
The i-th neuron has a summer that gathers its weighted inputs iwi,j and a bias b1,i, to form its scalar output ni as:
$$n_{i} = iw_{i,1} p_{1} + iw_{i,2} p_{2} + iw_{i,3} p_{3} + iw_{i,4} p_{4} + b_{1,i}$$
equivalent to a dot-product (inner product):
$$\mathbf{n} = \mathbf{IW}*\mathbf{p} + \mathbf{b}_{1}$$
where b1 is a four-element vector representing the four biases, one for each neuron.
Each ni then is processed by a sigmoid transfer function f1 to deliver a neuron output ai. The 4-element output vector of the four neurons (and the hidden layer as a whole) can be represented in matrix form as:
$$\mathbf{a} = f^{1}(\mathbf{IW}*\mathbf{p} + \mathbf{b}_{1})$$
The four neuron outputs are then fed to the output layer, which has a neuronal structure as well. Its scalar output y can be represented by the equation:
$$y = f^{2}(\mathbf{LW}*\mathbf{a} + b_{2}) = f^{2}\{\mathbf{LW}\,f^{1}(\mathbf{IW}*\mathbf{p} + \mathbf{b}_{1}) + b_{2}\}$$
where LW is the output layer weight matrix and the scalar b2 is the output neuron's bias:
$$\mathbf{LW} = [\,lw_{1}\;\;lw_{2}\;\;lw_{3}\;\;lw_{4}\,]$$
The weights and biases were calculated during the training of the network, as explained later. The sigmoid transfer functions for the hidden layer f1 and for the output layer f2 were chosen to be the same, namely of type Log-Sigmoid transfer function (logsig):
$$\texttt{logsig(n) = 1/(1 + exp(-n))}$$
The function logsig generates outputs between 0 and 1 as the neuron's net input goes from negative to positive infinity. As mentioned above, the neural network shown in Fig. 1 is an FNN. Feedforward networks consist of a series of layers. The first layer has a connection from the network input. Each subsequent layer has a connection from the previous layer. The final layer produces the network's output. FNNs can be used for any kind of input-to-output mapping. A feedforward network with one hidden layer and enough neurons in the hidden layer can fit any finite input–output mapping problem. It can be used as a general function approximator. It can approximate, arbitrarily well, any function with a finite number of discontinuities, given a sufficient number of neurons in the hidden layer. Specialized versions of the feedforward network include fitting and pattern recognition networks. The pattern recognition networks are the ANN of choice when solving classification problems, such as ours. In pattern recognition problems, we want a neural network to classify inputs into a set of target categories. Thus, pattern recognition networks are FNNs that can be trained to classify inputs according to already verified target classes, in our case verified central fixation versus paracentral fixation.
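The forward pass described by the equations above can also be written out directly in MATLAB. The sketch below assumes IW (4 × 4), b1 (4 × 1), LW (1 × 4), b2 (scalar) and an input column vector p of the four normalized powers are already defined.

% Forward pass of the two-layer network, written without the toolbox.
logsig = @(n) 1 ./ (1 + exp(-n));   % log-sigmoid transfer function
a = logsig(IW * p + b1);            % hidden-layer outputs (4x1)
y = logsig(LW * a + b2);            % scalar network output in (0,1)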
Creating the neural network
Following the above reasoning, our neural network was created (as a network object) using the MATLAB Toolbox's pattern recognition network creation function patternnet:
$$\texttt{net = patternnet(hiddenLayerSize, trainFcn);}$$
The parameter hiddenLayerSize here is 4, corresponding to the four neurons in the hidden layer. One can change the number of neurons if the network does not perform well after training, and then retrain. The parameter trainFcn defines the training function, which in our case is 'trainscg', standing for the scaled conjugate gradient backpropagation method for updating weight and bias values during training [39]. It performed slightly better on our data than the popular and faster Levenberg–Marquardt (LM) training algorithm [40]. Backpropagation (explained below) is used to calculate derivatives of performance perf with respect to the weight and bias variables.
To define a pattern recognition problem, data is generally arranged in a set of Q input vectors (measurements) as columns in a matrix. Then another set of Q target vectors is arranged, so that they indicate the classes to which the input vectors are assigned. Classification problems involving only two classes (as in our case) can be represented by target vectors consisting of either scalar 1/0 elements, which is the format used in this study. Alternatively, the target could be represented by two-element vectors, with one element being 1 and the other element being 0. In the general case, the target data for pattern recognition networks should consist of vectors of all zero values except for a 1 in element i, where i is the class they represent.
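As an illustration of this arrangement for the calibration set described in the next subsection, the sketch below assumes that the five powers for all 600 measurements are held in row vectors P25, P35, P45, P55 and P65 (names chosen here for illustration), and that the 120 central-fixation measurements come first.

% Build the 4 x 600 input matrix of normalized powers and the 1 x 600 target vector.
x = [P25./P45; P35./P45; P55./P45; P65./P45];   % one column per measurement
t = [ones(1,120), zeros(1,480)];                % 1 = CF, 0 = para-CF (assumed ordering)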
Data

The calibration data comprised the same data set that was used in an earlier study [18]. Briefly, with the pediatric vision screener [16–18] we recorded signals from five asymptomatic normal volunteers, ages 10, 18, 24, 29 and 39, two female and three male, of them three Caucasian, one African American, and one Asian, all properly consented. The subjects were asked to look first at the blinking target in the center of the scanning circle, for central fixation (CF). Twelve measurements of duration 1 s were taken in order to obtain representative data while also taking into consideration factors like fixation instability and distractibility. The calculated FFT powers for each measurement were saved on disk. The same type of measurement was repeated with each of the subjects looking at imaginary "targets" on the scanning circle (1.5° off center) at 12, 3, 6, and 9 o'clock. The spacing was chosen such that there would be a sufficient distance between the targets, to avoid confusion in the test subject, and to overcome the natural instability of fixation. More fixation points or more than 12 measurements per target have proven to diminish the efficiency of data collection, because of fatigue occurring in the test subjects. The data, consisting of powers P2.5, P6.5, P3.5, P5.5 and P4.5 for each of the 12 measurements of each eye of all five test subjects, were bundled into two groups: a group for central fixation (120 "eyes," the "CF set") and a group for paracentral fixation (480 "eyes," the "para-CF set"). Data from these two controlled groups were used to create and calibrate the ANN. The data were organized as an input matrix of 4 rows and Q columns, with Q = 600 (120 measurements with CF and 480 measurements with para-CF). The target vector was a vector of length Q = 600, each element of which was either 1 (CF) or 0 (para-CF). These inputs and targets were used for training, validation and testing the network. One can reasonably argue that this number of subjects (5) and eyes (10) is insufficient for providing reliable calibration with regard to the two classes (CF versus para-CF). Yet, the variability of the RBS signals' waveforms (and respectively the five derived frequency powers) depends to a much higher extent on the subject's direction of gaze and the ability to fixate, than on the individual variability of the foveal and corneal birefringence. This invariability, especially to corneal birefringence, was achieved with the new design, as reported in our previous work [15–17]. The birefringence of the fovea is largely constant. It is the corneal birefringence that affects the signals. The cornea, acting as a retarder of a certain retardance and azimuth, influences the orientation of the polarization cross. In the design of the PVS [18], the corneal birefringence was compensated for by means of a wave plate (retarder), achieving broad uniformity across the population studied. The wave plate was optimized by means of a dynamic computer model of the retinal birefringence scanning system (including the retina and the cornea as part of a train of optical retarders) and based on the data from a database of 300 eyes [15].
The clinical data were also obtained with the pediatric vision screener (following an institutionally approved IRB protocol), and were almost identical with the data set used in [18], with the addition of just two more subjects, both of whom were independently verified by a pediatric ophthalmologist. Thus, the total was 39 test subjects: 19 properly consented patients with known abnormalities (9 male and 10 female, of which 12 Caucasian, 2 African American, and 5 Asian), ages 4–25 (only one above 18), and 20 control subjects with proven lack of vision issues (10 male and 10 female, of which 16 Caucasian, 1 African American, and 3 Asian), ages 2–37 (only 4 above 18), all properly consented. All were recruited from the patients of the Division of Pediatric Ophthalmology at the Wilmer Eye Institute, or the patients' siblings. All subjects underwent a vision exam by an ophthalmologist, during which eye alignment and refraction were tested. Eye alignment was tested by means of the cover test. First the unilateral cover test was performed. During the unilateral cover test, the patient is asked to focus on a distant object while the doctor covers each of the eyes in turn. If either of the uncovered eyes has to move to focus on the object, this may be evidence of strabismus. The second part of the exam is the alternating cover test. The patient is asked to focus on an object while the eye cover is switched from one eye to the other. If the doctor detects eye movement after the eye cover is removed, this is an indication of phoria (tendency to deviation of the eyes from the normal when fusional stimuli are absent). A significant amount of phoria can lead to eyestrain and/or double vision. On the whole, verified information was available from a total of 78 eyes. Data were organized as an input matrix of 4 rows and Q columns (Q = 78 eyes). The target vector was a vector of length Q (Q = 78), each element of which was either 1 (CF) or 0 (para-CF). These inputs and targets, as well as the network outputs, were used for testing the performance of the ANN and comparing it to the statistical methods reported earlier [18].
Preprocessing and postprocessing
Neural network training can be made more efficient if one performs certain preprocessing steps on the network inputs and targets [35, 37]. The sigmoid transfer functions that are generally used in the hidden layers become essentially saturated when the net input is greater than three. If this happens at the beginning of the training process, the gradients will be very small, and the network training will be very slow. It is standard practice to normalize the inputs before applying them to the network. Generally, the normalization step is applied to both the input vectors and the target vectors in the data set. The input processing functions used here are removeconstantrows (removes the rows of the input vector that correspond to input elements that always have the same value, because these input elements are not providing any useful information to the network), and mapminmax (normalize inputs/targets to fall in the range [−1,1]). For outputs, the same processing functions (removeconstantrows and mapminmax) are used. Output processing functions are used to transform user-provided target vectors for network use. Then, network outputs are reverse-processed using the same functions to produce output data with the same characteristics as the original user-provided targets.
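In the toolbox these choices can be made explicit as shown below; for patternnet they are also the defaults in recent versions, so this is a confirmation rather than a required step.

% Assign input and output processing functions (also the patternnet defaults).
net.inputs{1}.processFcns  = {'removeconstantrows', 'mapminmax'};
net.outputs{2}.processFcns = {'removeconstantrows', 'mapminmax'};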
Dividing the data
When training multilayer networks, the general practice is to first divide the data into three subsets. The first subset is the training set, which is used for computing the gradient and updating the network weights and biases. This set is presented to the network during training, and the network is adjusted according to its error. The second subset is the validation set. It is used to measure network generalization, and to halt training when generalization stops improving. The error on the validation set is monitored during the training process. The validation error normally decreases during the initial phase of training, as does the training set error. However, when the network begins to overfit the data, the error on the validation set typically begins to rise. The network weights and biases are saved at the minimum of the validation set error. The test set has no effect on training and so provides an independent measure of network performance during and after training. Test set error is not used during training, but it is used to compare different models. It is also useful to plot the test set error during the training process. If the error on the test set reaches a minimum at a significantly different iteration number than the validation set error, this might indicate a poor division of the data set.
In the Neural Networks Toolbox for MATLAB [37], there are four functions provided for dividing data into training, validation, and test sets. They are dividerand (divide data randomly, the default), divideblock (divide into contiguous blocks), divideint (use interleaved selection), and divideind (divide by index). The data division is normally performed automatically when the network is trained. In this study, the dividerand function was used, with 70% of the data randomly assigned to training, 15% of the data randomly assigned to validation, and 15% of the data randomly assigned to the test set. This is the default partitioning in Neural Networks Toolbox. The appropriateness of this division is discussed in the "Discussion and limitations" below.
Initializing weights (init)
Before training a feedforward network, one must initialize the weights and biases. The configure command automatically initializes the weights, but one might want to reinitialize them. This is done with the init command. This function takes a network object as input and returns a network object with all weights and biases initialized. Here is how a network is initialized (or reinitialized):
$$\texttt{net = init(net);}$$
Performance function
Once the network weights and biases are initialized, the network is ready for training. The training process requires a set of examples of proper network behavior—network inputs p and target outputs t. The process of training a neural network involves tuning the values of the weights and biases of the network to optimize network performance, as defined by the network performance function. The default performance function for feedforward networks is the mean square error (mse), the average squared error between the network outputs y and the target outputs t [37]. It is defined as follows:
$$F = mse = \frac{1}{N}\sum_{i = 1}^{N} \left( e_{i} \right)^{2} = \frac{1}{N}\sum_{i = 1}^{N} \left( t_{i} - y_{i} \right)^{2}$$
For a neural network classifier, during training one can use mean squared error or cross-entropy error, with cross-entropy error being considered slightly better [41]. We tested both methods on a subset of the data, and obtained slightly better results with the cross-entropy method. This is why the network performance evaluation in this study was done by means of the cross-entropy method:
$$\texttt{net.performFcn = 'crossentropy';}$$
The MATLAB performance function has the following format:
$$\texttt{perf = crossentropy(net, targets, outputs, perfWeights)}$$
It calculates a network performance given targets (t) and outputs (y), with optional performance weights and other parameters. The function returns a result that heavily penalizes outputs that are extremely inaccurate (y near 1−t), with very little penalty for fairly correct classifications (y near t). Minimizing cross-entropy leads to good classifiers. The cross-entropy for each pair of output-target elements is calculated as:
$$\texttt{ce = -t .* log(y)}$$
where .* denotes element-by-element multiplication. The aggregate cross-entropy performance is the mean of the individual values:
$$\texttt{perf = sum(ce(:))/numel(ce)}$$
In the special case of N = 1 (our case) when the output consists of only one element (y), the outputs and targets are interpreted as binary encoding. That is, there are two classes with targets of 0 and 1. The binary cross-entropy expression is:
$$\texttt{ce = -t .* log(y) - (1 - t) .* log(1 - y)}$$
where .* denotes element-by-element multiplication.
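Written out as plain MATLAB (a sketch equivalent to what the crossentropy function computes for a single-output, 0/1-target network, ignoring optional performance weights), this becomes:

% Binary cross-entropy for a single-output network; y = outputs, t = 0/1 targets.
ce   = -t .* log(y) - (1 - t) .* log(1 - y);   % element-wise cross-entropy
perf = sum(ce(:)) / numel(ce);                 % aggregate (mean) performance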
Training the network
For training multilayer feedforward networks, any standard numerical optimization algorithm can be used to optimize the performance function, but there are a few key ones that have shown excellent performance for neural network training. These optimization methods use either the gradient of the network performance with respect to the network weights, or the Jacobian of the network errors with respect to the weights. The gradient and the Jacobian are calculated using a technique called the backpropagation algorithm, which involves performing computations backward through the network. The backpropagation computation is derived using the chain rule of calculus and is described in more detail in [33] and in [31]. As a note on terminology, the term "backpropagation" is sometimes used to refer specifically to the gradient descent algorithm, when applied to neural network training. That terminology is not used here, since the process of computing the gradient and Jacobian by performing calculations backward through the network is applied in all of the training functions offered by MATLAB's Neural Networks Toolbox. It is clearer to use the name of the specific optimization algorithm that is being used (i.e. 'trainscg', 'trainlm', 'trainbr', etc.), rather than to use the term backpropagation alone.
Neural networks can be classified into static and dynamic categories. Static networks (which are essentially the FFNs) have no feedback elements and contain no delays; the output is calculated directly from the input through feedforward connections. In dynamic networks, the output depends not only on the current input to the network, but also on the current or previous inputs, outputs, or states of the network. These dynamic networks may be recurrent networks with feedback connections or feedforward networks with imbedded tapped delay lines (or a hybrid of the two) [34]. For static networks, the backpropagation algorithm is usually used to compute the gradient of the error function with respect to the network weights, which is needed for gradient-based training algorithms [42].
The actual training was completed using the function from MATLAB's Neural Networks Toolbox:
$$\texttt{[net, tr] = train(net, x, t)}$$
with x being the input matrix (600 column vectors, each holding the four normalized powers of one measurement), and t being the target vector of size 600 (total number of observations in the calibration set).
Network performance was calculated using the perform function
$$\texttt{performance = perform(net, t, y)}$$
which takes the network object, the targets t and the outputs y and returns performance using the network's performance function net.performFcn (crossentropy in our case). Note that training automatically stops when generalization stops improving, as indicated by an increase in the cross-entropy error of the validation samples.
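Putting the calls above together, a minimal training script for this study would look roughly as follows; x and t are the 4 × 600 input matrix and 600-element target vector described under "Data", and the exact script used for the instrument may differ.

% Create, configure and train the pattern recognition network.
net = patternnet(4, 'trainscg');        % 4 hidden neurons, scaled conjugate gradient
net.performFcn = 'crossentropy';        % cross-entropy performance function
[net, tr] = train(net, x, t);           % training stops via early stopping
y = net(x);                             % network outputs for all inputs
performance = perform(net, t, y);       % cross-entropy on the whole set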
Generalization
Neural networks are sensitive to the number of neurons in their hidden layers. Too few neurons can lead to underfitting. Too many neurons can contribute to overfitting, in which all training points are well fitted, but the fitting curve oscillates significantly between these points, and so do the calculated coefficients. In ANN terms, the model does not generalize well. It is apparent from testing with an increasing complexity that as the number of connections in the network increases, so does the propensity to overfit to the data. The phenomenon of overfitting can always be seen as we make our neural networks deep (complex).
In this study, the number of neurons in the hidden layer was chosen empirically. On the clinical data, fewer than four neurons in the hidden layer did not provide the accuracy achieved with 4–8 hidden neurons, most likely because of underfitting. At about 10 neurons and upwards, the accuracy started to decrease again, because of overfitting. The choice of 4 hidden neurons was made for two reasons: (a) to keep the network generalized (i.e. to avoid overfitting), and (b) to keep it simple and computationally fast. With respect to the number of hidden layers, no significant improvement was achieved with a two-hidden-layer structure, regardless of the number of neurons in each layer.
MathWorks suggests several ways to improve network generalization and avoid overfitting [37, 43]. One method for improving network generalization is to use a network that is just large enough to provide an adequate fit. The larger network we use, the more complex the functions the network can create. If a small enough network is used, it will not have enough power to overfit the data. One can check the Neural Network Design example nnd11gn in [33] to investigate how reducing the size of a network can prevent overfitting. Another approach is retraining. Typically each backpropagation training session starts with different initial weights and biases, and different divisions of data into training, validation, and test sets. These different conditions can lead to quite different solutions for the same problem. Therefore, it is a good idea to train several networks, in order to ensure that a network with good generalization is found.
The default method for improving generalization is the so-called early stopping. This technique is automatically provided for all of the supervised network creation functions in the Neural Networks toolbox, including the backpropagation network creation functions such as feedforwardnet and patternnet. As explained before, in this technique the available data are divided into three subsets. The first subset is the training set, which is used for computing the gradient and updating the network weights and biases. The second subset is the validation set. The error on the validation set is monitored during the training process. The validation error normally decreases during the initial phase of training, as does the training set error. However, when the network begins to overfit the data, the error on the validation set typically begins to rise. When the validation error increases for a specified number of iterations (net.trainParam.max_fail), the training is stopped, and the weights and biases at the minimum of the validation error are returned. The test set error is not used during training, but it is used to compare different models. It is also useful to plot the test set error during the training process. If the error in the test set reaches a minimum at a significantly different iteration number than the validation set error, this might indicate a poor division of the data set [43].
There is yet another method for improving generalization, called regularization. It involves modifying the performance function, which is normally chosen (mse, cross-entropy, or other). Using a modified performance function causes the network to have smaller weights and biases, forcing the network response to be smoother and less likely to overfit [43]. Regularization can be done automatically by using the Bayesian regularization training function trainbr. This can be done by setting net.trainFcn to 'trainbr'. This will also automatically move any data in the validation set to the training set [37].
Network creation and training
The neural network was trained, validated, and tested on the calibration data of 600 measurements (explained in more detail under the "Data" subsection in "Methods" above). Figure 2 shows the NN training process (nntraintool). The upper part illustrates the network architecture, as shown in Fig. 1, this time generated by MATLAB. The tool shows the algorithms used, as well as the training progress. Training was stopped after iteration 26, at performance 0.548. The performance graph is presented in Fig. 3, showing how cross-entropy is minimized for good classification. Before epoch 26, the best validation performance of 0.562 was reached at epoch 25, which is only slightly higher than the final 0.548. Figure 4 shows the dynamics of the training state in terms of gradient of the cross-entropy, on a logarithmic scale. At the endpoint, the gradient was 9.6137 × 10−7, which can be considered a good value at which to stop for this set of data. The combined confusion matrix for the three kinds of data (train, validate, test) from the calibration set is presented in Fig. 5. In this figure, the first two diagonal cells (in green) show the number and percentage of correct classifications by the trained network. For example, 480 measurements (in the target set of class 0) are correctly classified as paracentral fixation (output set of class 0). This corresponds to 80.0% of all 600 measurements. Similarly, 120 cases (in the target set of class 1) are correctly classified as central fixation (output set of class 1). This corresponds to 20.0% of all measurements. The cells on the other diagonal (red) represent the incorrect classifications, which are 0 for each target class and for each output class. The lower right blue square illustrates the overall accuracy.
Neural network training tool (nntraintool) representing the training process. The upper part illustrates the network architecture, as shown in Fig. 1, this time generated by MATLAB. Training was stopped after iteration 26, at performance 0.548
Validation performance, based on the cross-entropy error. Minimizing cross-entropy results in good classification. Lower values are better. Zero means no error
Dynamics of the neural network training state in terms of gradient of the cross-entropy, on a logarithmic scale. At the endpoint, the gradient was 9.6137 × 10−7
The combined confusion matrix for the three kinds of data (train, validate, test) from the calibration set. The green diagonal cells show the number and percentage of correct classifications by the trained network. The red diagonal represents the incorrect classifications, which are 0 for each target class and for each output class. The lower right blue square illustrates the overall accuracy
Overall, 100.0% of the predictions are correct and there are no wrong classifications. In terms of sensitivity and specificity, this corresponds to sensitivity = 100.0% and specificity = 100.0% (Table 1), and exceeds the results from our previous study [18], where none of the statistical methods applied to the same data reached this accuracy (please see columns CAL in Table 1). The reader should, however, be reminded that this was achieved with a relatively small training set, with the data having been provided by just 10 eyes from 5 subjects.
Table 1 Performance of the artificial neural network compared with statistical methods with regard to classifying fixation as central versus paracentral fixation
Weights and biases of the ANN
After initialization and training, the weights for the hidden layer, contained in matrix IW as defined above, and as extracted with the MATLAB function cell2mat(net.IW), were:
$$\mathbf{IW} = \left[ \begin{array}{rrrr} 0.8343 & -1.6768 & 0.3476 & 0.8698 \\ -1.5320 & 1.3417 & 1.8045 & -1.9192 \\ -1.0271 & 0.6329 & 1.4755 & -1.2509 \\ -2.3183 & 0.6270 & 3.2967 & -5.0483 \end{array} \right]$$
The bias vector for the hidden layer, as accessed with function cell2mat(net.b(1)), was
$$\mathbf{b1} = [\,-1.6671\;\;-0.1457\;\;-0.0155\;\;-2.3648\,]$$
The output weights LW, as accessed with MATLAB function cell2mat(net.LW), were
$$\mathbf{LW} = [\,1.5379\;\;-2.9153\;\;-1.8245\;\;-11.3109\,]$$
Finally, the output bias, b2 = cell2mat(net.b(2)), a scalar, was
b2 = 0.3771
It should be noted that because of the random assignment of the data (training, validation, and test sets), the above weights and coefficients may vary somewhat. This, however, did not impact the sensitivity and specificity significantly. Nevertheless, we trained and retrained the network four times. With all four sessions, both the sensitivity and specificity for the calibration data remained 1.000. For the clinical data, the results from the session which maximized the sensitivity were chosen, because the main goal of this project was to develop a screening device for children, which should not miss lack of central fixation.
Networks with more neurons in the hidden layer, as well as networks with two hidden layers were also tested on the calibration dataset, but they did not improve performance. Their use was avoided because of the risk of potential overfitting.
Testing the ANN on the clinical data
Once the neural network was created, trained, validated, and tested on the calibration data, in a further step, it was tested on our set of clinical data (described in detail above under the "Data" subsection in "Methods"), consisting of a subset of strabismic eyes and a control subset of normal eyes, all obtained with the pediatric vision screener. Four normalized spectral powers from a total of 78 eyes were organized as an input matrix of 4 rows and Q columns (Q = 78). The target vector was a vector of length 78, each element of which was either 1 (CF) or 0 (para-CF). The four inputs for each eye were fed to the ANN, and the output was compared each time with the target, which in fact was the doctor's decision. This allowed the calculation of the sensitivity and specificity of the ANN when applied to the clinical data. Further, these results permitted a comparison between the performance of the ANN and the statistical methods reported earlier [18], such as the simple adaptive threshold that minimized the overall error, or 2-, 3- and 4-way linear discriminant analysis. The results are summarized in Table 1, columns SBJ (human subjects). The two new patients (4 eyes) were quite "tricky," adding two false negative decisions to the "Standard Threshold" method and just one false negative decision to the neural network's results. Again, the ANN performed slightly better than the other methods, with a sensitivity of 0.9851 and a specificity of 1.0000, with no false positive decisions and only one false negative decision. Generally, the discriminant-analysis-based methods showed lower sensitivity. Specificity on the clinical data was 1.0000 for all methods except for the 2-way discriminant analysis. Note that the only other method that used all four inputs separately is the 4-way discriminant analysis, giving a sensitivity of 0.9417 for the calibration data, and only 0.8507 for the clinical data. The excellent performance of the ANN is obviously due to the two-layer structure and to the nonlinear (sigmoid) transfer function at the output of each neuron, giving more flexibility, while the performance of all discriminant functions used in the previous study was strictly linear, resembling just one layer of neurons with a linear transfer function, not ideal for pattern recognition.
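A sketch of this evaluation step is given below. The variable names xClin (4 × 78 input matrix) and tClin (the ophthalmologist's 0/1 verdicts) are illustrative, and the 0.5 cut-off on the network's logsig output is an assumption of this sketch rather than a value stated by the instrument's software. Note that, in the screening convention used here, "positive" means detected abnormality (paracentral fixation).

% Score the clinical eyes and derive sensitivity and specificity.
yClin = net(xClin);                 % network outputs in (0,1)
pred  = yClin >= 0.5;               % assumed decision threshold; 1 = CF, 0 = para-CF
TP = sum(pred == 0 & tClin == 0);   % abnormality (para-CF) correctly detected
FN = sum(pred == 1 & tClin == 0);   % abnormality missed
TN = sum(pred == 1 & tClin == 1);   % normal (CF) correctly passed
FP = sum(pred == 0 & tClin == 1);   % normal falsely referred
sensitivity = TP / (TP + FN);
specificity = TN / (TN + FP);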
Dependence of the classification precision on the training data set
In order to assess the dependence of the classification precision on the selection of the training data set, we ran 10 sessions of calibration followed each time by diagnostic classification. Each time the available calibration data was assigned at random to the training (70%), validation (15%), and test (15%) subsets, as was done with the calibration data whose outcome was presented in Table 1 above. This was performed by means of the following code, executed during each run:
net.divideFcn = 'dividerand';
net.divideParam.trainRatio = 70/100;
net.divideParam.valRatio = 15/100;
net.divideParam.testRatio = 15/100;
For each run, the sensitivity and specificity of both the calibration and the patient data was calculated, and tabulated in Table 2. The results were somewhat different from the results in the first two columns of Table 1, indicating that the weights of the ANN and the resulting diagnostic precision do depend on the choice of the calibrations subset for training and validation. This also means that a larger calibration pool would have likely given more stable results. At the same time, the sensitivity and specificity did not vary too much, as demonstrated by the standard deviation numbers in the bottom row, indicating that a reasonable diagnostic precision has been reached. The average sensitivity for the clinical data was 0.9806, with the highest sensitivity being 1.0000 and the lowest sensitivity being 0.9552. For diagnostic screening it is the high sensitivity that matters more than high specificity. The former minimizes the number of missed abnormalities, while the latter minimizes the number of subjects falsely referred to the doctor.
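The ten runs can be reproduced with a simple loop of the form sketched below; each iteration re-creates the network so that fresh random initial weights and a fresh random data division are drawn, and the metric computation is abbreviated.

% Repeat training with new random weights and data divisions.
for run = 1:10
    net = patternnet(4, 'trainscg');
    net.performFcn = 'crossentropy';
    net.divideFcn = 'dividerand';
    net.divideParam.trainRatio = 70/100;
    net.divideParam.valRatio   = 15/100;
    net.divideParam.testRatio  = 15/100;
    net = train(net, x, t);
    % ... compute sensitivity and specificity on calibration and clinical data
end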
Table 2 Ten runs of the analysis program (ANN only)
Discussion and limitations
Although MATLAB was used to create and train the network in this study, porting the code and the ANN weights and biases to another software platform, including an embedded system, is relatively straightforward. This is possible because: (a) the network architecture is known and transparent, (b) the weights and biases are available and can easily be accessed, (c) almost the entire MATLAB code is available as source code, and (d) after modeling in MATLAB and before finalizing the application, the testing can be performed on the target platform with the same data that was used to train the network in MATLAB.
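As a toy illustration of point (b), the trained network can be reduced to a few lines of code using the weights and biases listed in the Results. The function below is a sketch only: it omits the mapminmax input scaling that the toolbox applies internally (a real port would have to reproduce it), and the 0.5 decision threshold is an assumption.

function isCF = classifyFixation(p)
% p: 4x1 column vector of normalized powers [P2.5; P3.5; P5.5; P6.5] ./ P4.5
IW = [ 0.8343 -1.6768  0.3476   0.8698;
      -1.5320  1.3417  1.8045  -1.9192;
      -1.0271  0.6329  1.4755  -1.2509;
      -2.3183  0.6270  3.2967  -5.0483];
b1 = [-1.6671; -0.1457; -0.0155; -2.3648];
LW = [ 1.5379 -2.9153 -1.8245 -11.3109];
b2 = 0.3771;
logsig = @(n) 1 ./ (1 + exp(-n));
y = logsig(LW * logsig(IW * p + b1) + b2);   % network output in (0,1)
isCF = y >= 0.5;                             % assumed cut-off: true = central fixation
end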
Compared with the 4-way linear discriminant analysis, the ANN with 4 neurons in the hidden layer is more complex, providing the flexibility to differentiate somewhat better between the two classes.
One should acknowledge the statistical limits due to the relatively small number of the learning samples. We used 12 measurements per eye for each direction of gaze, and even though exactly the same 10 eyes from five patients were used for training the ANN, as were used for tuning the statistical methods, it would certainly take a significantly larger study, involving more subjects, to draw a decisive conclusion as to which approach is better. Despite this limitation, the study is a proof of concept of using ANNs for this analysis.
There are several issues that the NN user should have in mind, though. Despite the general success of the backpropagation algorithm, it may generally converge to a local minimum, for example, when the mean squared-error objective function (mse), or alternatively the cross-entropy is used, and requires a large number of learning iterations to adjust the weights of the FNN. Many attempts have been made to speed up the error BP algorithm. The most well-known algorithms of this type are the conjugate gradient training algorithm [39] (used here), and Levenberg–Marquardt (LM) training algorithm [40]. The computational complexity of the conjugate gradient algorithm (employed here) is heavily dependent on the line search methods. The LM algorithm has a faster speed than gradient training algorithm and hardly gets stuck in a local minimum. It, however, requires much more memory and computational time.
While two-layer feedforward networks can learn virtually any input–output relationship, feedforward networks with more layers might learn complex relationships more quickly. For most problems, it is best to start with two layers, and then increase to three layers, if the performance with two layers is not satisfactory.
Overfitting
As mentioned above, multilayer networks are capable of performing just about any linear or nonlinear computation, and they can approximate any reasonable function arbitrarily well. However, while the network being trained might theoretically be capable of performing correctly, backpropagation and its variations might not always find a solution [30, 31, 33, 37]. Fortunately, this was not the case with any of the ANN architectures that were tested in this study.
Important also is the linearity of the network. The error surface of a nonlinear network is more complex than the error surface of a linear network. Nonlinear transfer functions in multilayer networks introduce many local minima in the error surface [33, 37]. As gradient descent is performed on the error surface, depending on the initial starting conditions, it is possible for the network solution to become trapped in one of these local minima. Settling in a local minimum can be good or bad depending on how close the local minimum is to the global minimum and how low an error is required. The NN user should be cautioned that although a multilayer backpropagation network with enough neurons can implement just about any function, backpropagation does not always find the weights for the optimum solution. One might need to reinitialize the network and retrain several times, in order to reach the best solution. Fortunately, in this study, there is evidence that the minima of the performance function (cross-entropy or mse) found during each NN training cycle, were of a global type, rather than being local minima that would have brought significant variation in the estimated weights and biases, and in the final results in terms of sensitivity and specificity.
The partitioning of the data set (70% of the data randomly assigned to training, 15% of the data randomly assigned to validation, and 15% of the data randomly assigned to the test set) works well for larger data sets such as ours, and is widely used in ANN applications reported by other authors (with small deviations). Here, during training, the cross-entropy error reached its minimum after just three iterations, which means that there were enough data in the training set to achieve good performance without underfitting, i.e. no need to dedicate more data to the training set. The error started to increase at iteration number 26, as a sign of overfitting starting to occur, which is when the training was automatically stopped. This is a sound training behavior, and there was no apparent reason to reallocate the data. Moreover, during testing, all three data subsets reached their minimum simultaneously (Fig. 3). As mentioned above, if the error in the test set reaches a minimum at a significantly different iteration number than the validation set error, this might indicate a poor division of the data set [43]. Since this did not happen, there was no reason to consider data reallocation. The test subset, as a set of examples used to assess the performance of the fully-trained classifier, did not differ significantly in performance from the validation subset, which was another indication of proper division. It should also be mentioned that the test set is important when comparing different models (in terms of number of hidden layers, number of neurons in a layer, etc.). For this reason, it should not be chosen to be smaller than the 15% used here.
The issue of overfitting was addressed in the Methods section, under Generalization. Many other methods have lately been proposed to improve generalization. For example, in [44], the authors employed a technique called "dropout", to address this problem. The key idea is to randomly drop units (along with their connections) from the neural network during training. This prevents units from co-adapting too much. During training, samples from an exponential number of different "thinned" networks are dropped out. At test time, it is easy to approximate the effect of averaging the predictions of all these thinned networks by simply using a single unthinned network that has smaller weights. This significantly reduces overfitting and gives major improvements over other regularization methods.
Advantages and disadvantages of using ANN versus statistical methods
The artificial neural networks have evolved as an alternative to classical statistical methods for classification, such as discriminant analysis, or for developing predictive models for dichotomous outcomes in medicine. They offer many advantages, including less formal statistical training and the ability to implicitly detect complex non-linear relationships. Disadvantages include proneness to overfitting and the empirical nature of model development [45]. It is unlikely that one of the above methods will be the technique of choice in all circumstances. The choice as to which technique should be used depends to a large extent on the nature of a particular data set. For example, logistic regression and discriminant analysis are believed to be the more appropriate choice when the primary goal of model development is to look for possible causal relationships between the independent (input) and dependent (output) variables, and the modeler wishes to easily understand the effect of predictor variables on the outcome. Neural networks appear to be particularly useful when the primary goal is outcome prediction or pattern recognition/classification, and important interactions or complex nonlinearities exist in the data set [45].
The transparency of the decision making process has always been an issue in diagnostic decision making. Undoubtedly, it would be advantageous to be able to trace the logical flow at every step of the way, as was done with the "expert systems" of the 80s. Yet, they did not become a major trend. In a multifactor diagnostic system, it is increasingly difficult to follow the contribution of each and every factor. Which is how factor analysis, logistic regression, discriminant analysis, principal component analysis and of course neural network emerged. In fact, in [18], we used linear discriminant functions for decision making. Their use is relatively simple during the decision making, but derivation based on the training set is not simpler than the backpropagation method used here for ANN. And just as it may be uncertain to go beyond linear discriminant functions (e.g. use higher order discriminant functions such as quadratic functions) potentially causing overfitting in discriminant analysis, it is risky to "overtrain" an ANN. Yet, there is one difference: it is usually simpler to modify the structure of the ANN (oftentimes empirically), than that of the discriminant function model.
Last but not least, in a way, one can reduce an ANN to a logical structure which is identical to a linear discriminant function, in our case, use just one of the four neurons in the hidden layer, and use linear functions for f1 and f2, instead of log-sigmoid functions (logsig), as in this work (vector LW will, of course, turn into a scalar). In fact, it is the increased complexity that brings about the potentially improved functionality of ANNs.
This study confirmed that spectral powers at several signal frequencies obtained with retinal birefringence scanning around the human fovea can be used to detect central fixation reliably. Artificial neural networks can be trained to deliver very high diagnostic precision which is at least as good as statistical methods. In our case, ANN precision turned out to be even slightly better than the precision achieved with all discriminant analysis based methods, albeit with a relatively small size of the training set. It will take a larger training set to prove definite improvement. Although the ANN method was applied to one specific optical instrument design (spinning wave plate), there is enough evidence that neural-networks-based classifiers will work with other optical designs, producing other frequencies and combinations thereof. Regardless of the relatively small initial sample size, we believe that the PVS instrument design, the analysis methods employed, and the device as a whole, will prove valuable for mass screening of children. The instrument robustly identifies eye misalignment, which is a major risk factor for amblyopia, and the addition of a neural network based diagnostic feature will undoubtedly improve its performance.
BP:
backpropagation method for training the neural network
CAL:

calibration data set
CF:
central fixation
FNN:
feedforward neural network
HWP:
half wave plate
FN:
false-negative
FP:

false-positive
IRB:

institutional review board
IW:
input weight matrix
LM:
Levenberg–Marquardt training algorithm
LW:
output weights matrix/vector
mse:
mean squared-error
NN:
neural network(s)
para-CF:
paracentral fixation (off-central fixation)
PVS:
pediatric vision screener
PhSS:
phase-shift-subtraction
RBS:
retinal birefringence scanning, retinal birefringence scanner
SBJ:

human subjects (clinical) data set
SPC:

specificity
TN:
true negative
TP:
true positive
WP:
wave plate
Simons K. Amblyopia characterization, treatment, and prophylaxis. Surv Ophthalmol. 2005;50(2):123–66.
Miller JM, Lessin HR. Instrument-based pediatric vision screening policy statement. Pediatrics. 2012;130(5):983–6.
Hunter DG, Patel SN, Guyton DL. Automated detection of foveal fixation by use of retinal birefringence scanning. Appl Optics. 1999;38(7):1273–9.
Hunter DG, Sandruck JC, Sau S, Patel SN, Guyton DL. Mathematical modeling of retinal birefringence scanning. J Opt Soc Am A. 1999;16(9):2103–11.
Guyton DL, Hunter DG, Patel SN, Sandruck JC, Fry RL. Eye fixation monitor and tracker. US Patent No 6,027,216; 2000.
Hunter DG, Nassif DS, Walters BC, Gramatikov BI, Guyton DL. Simultaneous detection of ocular focus and alignment using the pediatric vision screener. Invest Ophth Vis Sci. 2003;44:U657–U657.
Hunter DG, Nassif DS, Piskun NV, Winsor R, Gramatikov BI, Guyton DL. Pediatric vision screener 1: instrument design and operation. J Biomed Opt. 2004;9(6):1363–8.
Nassif DS, Piskun NV, Gramatikov BI, Guyton DL, Hunter DG. Pediatric Vision Screener 2: pilot study in adults. J Biomed Opt. 2004;9(6):1369–74.
Nassif DS, Piskun NV, Hunter DG. The pediatric vision screener III: detection of strabismus in children. Arch Ophthalmol. 2006;124(4):509–13.
Loudon SE, Rook CA, Nassif DS, Piskun NV, Hunter DG. Rapid, high-accuracy detection of strabismus and amblyopia using the pediatric vision scanner. Invest Ophthalmol Vis Sci. 2011;52(8):5043–8.
Gramatikov BI, Zalloum OH, Wu YK, Hunter DG, Guyton DL. Birefringence-based eye fixation monitor with no moving parts. J Biomed Opt. 2006;11(3):34025.
Gramatikov BI, Zalloum OH, Wu YK, Hunter DG, Guyton DL. Directional eye fixation sensor using birefringence-based foveal detection. Appl Opt. 2007;46(10):1809–18.
Gramatikov B, Irsch K, Mullenbroich M, Frindt N, Qu Y, Gutmark R, Wu YK, Guyton D. A device for continuous monitoring of true central fixation based on foveal birefringence. Ann Biomed Eng. 2013;41(9):1968–78.
Agopov M, Gramatikov BI, Wu YK, Irsch K, Guyton DL. Use of retinal nerve fiber layer birefringence as an addition to absorption in retinal scanning for biometric purposes. Appl Opt. 2008;47(8):1048–53.
Irsch K, Gramatikov B, Wu YK, Guyton D. Modeling and minimizing interference from corneal birefringence in retinal birefringence scanning for foveal fixation detection. Biomed Opt Express. 2011;2(7):1955–68.
Irsch K, Gramatikov BI, Wu YK, Guyton DL. New pediatric vision screener employing polarization-modulated, retinal-birefringence-scanning-based strabismus detection and bull's eye focus detection with an improved target system: opto-mechanical design and operation. J Biomed Opt. 2014;19(6):067004.
Irsch K, Gramatikov BI, Wu YK, Guyton DL. Improved eye-fixation detection using polarization-modulated retinal birefringence scanning, immune to corneal birefringence. Opt Express. 2014;22(7):7972–88.
Gramatikov BI, Irsch K, Wu YK, Guyton DL. New pediatric vision screener, part II: electronics, software, signal processing and validation. Biomedical engineering online. 2016;15(1):15.
Baxt WG. Use of an artificial neural network for the diagnosis of myocardial infarction. Ann Intern Med. 1991;115(11):843–8.
Baxt WG. Application of artificial neural networks to clinical medicine. Lancet. 1995;346(8983):1135–8.
Gardner GG, Keating D, Williamson TH, Elliott AT. Automatic detection of diabetic retinopathy using an artificial neural network: a screening tool. Br J Ophthalmol. 1996;80(11):940–4.
Al-Shayea QK. Artificial neural networks in medical diagnosis. IJCSI Int J Comput Sci. 2011;8(2):150–4.
Amato F, Lopez A, Pena-Mendez EM, Vanhara P, Hampl A, Havel J. Artificial neural networks in medical diagnosis. J Appl Biomed. 2013;11(2):47–58.
Wolfe B, Eichmann D. A neural network approach to tracking eye position. Int J Hum Comput Interact. 1997;9(1):59–79.
Piratla NM, Jayasumana AP. A neural network based real-time gaze tracker. J Netw Comput Appl. 2002;25(3):179–96.
Baluja S, Pomerleau D. Non-intrusive gaze tracking using artificial neural networks. Adv Neural Inf Process Syst. 2003;6:753–60.
Demjen E, Abosi V, Tomori Z. Eye tracking using artificial neural networks for human computer interaction. Physiol Res. 2011;60(5):841–4.
Ferhat O, Vilarino F. Low cost eye tracking: the current panorama. Comput Intel Neurosc. 2016;2016:1–14.
McCulloch WS, Pitts W. A logical calculus of the ideas immanent in nervous activity (reprinted from Bull Math Biophys 5:115–133, 1943). B Math Biol. 1990;52(1–2):99–115.
Rumelhart DE, Hinton GE, Williams RJ. Learning representations by back-propagating errors. Nature. 1986;323(6088):533–6.
Rumelhart DE, Hinton GE, Williams RJ. Learning internal representations by error propagation. Parallel distributed processing: explorations in the microstructures of cognition. Cambridge: MIT Press; 1986.
Cross SS, Harrison RF, Kennedy RL. Introduction to neural networks. Lancet. 1995;346(8982):1075–9.
Hagan MT, Demuth HB, Beale MH. Neural network design. Boston: PWS Publishing; 1996.
De Jesus O, Hagan MT. Backpropagation algorithms for a broad class of dynamic networks. IEEE Trans Neural Netw. 2007;18(1):14–27.
Blackwell WJ, Chen FW. Neural networks in atmospheric remote sensing. Norwood: Artech House; 2009.
Bai YP, Zhang HX, Hao YL. The performance of the backpropagation algorithm with varying slope of the activation function. Chaos Soliton Fract. 2009;40(1):69–77.
Beale MH, Hagan MT, Demuth HB. Neural Networks Toolbox. User's Guide for MATLAB R2012b. Natrick: The MathWorks; 2012.
Bishop CM. Neural networks for pattern recognition. Oxford: Oxford University Press; 1995.
Moller MF. A scaled conjugate gradient algorithm for fast supervised learning. Neural Netw. 1993;6:525–33.
Hagan MT, Menhaj MB. Training feedforward networks with the Marquardt algorithm. IEEE Trans Neural Netw. 1994;5(6):989–93.
Why you should use cross-entropy error instead of classification error or mean squared error for neural network classifier training. https://jamesmccaffrey.wordpress.com/2013/11/05/why-you-should-use-cross-entropy-error-instead-of-classification-error-or-mean-squared-error-for-neural-network-classifier-training/.
Werbos PJ. The roots of backpropagation. New York: Wiley; 1994.
Improve neural network generalization and avoid overfitting. https://www.mathworks.com/help/nnet/ug/improve-neural-network-generalization-and-avoid-overfitting.html.
Srivastava N, Hinton G, Krizhevsky A, Sutskever I, Salakhutdinov R. Dropout: a simple way to prevent neural networks from overfitting. J Mach Learn Res. 2014;15:1929–58.
Tu JV. Advantages and disadvantages of using artificial neural networks versus logistic regression for predicting medical outcomes. J Clin Epidemiol. 1996;49(11):1225–31.
BG developed, trained and tested the artificial neural network, based on data that were collected previously with the Pediatric Vision Screener. The data collection, the validation, and the results of the analysis with statistical methods were reported previously in [18]. All authors read and approved the final manuscript.
The author would like to acknowledge the contribution to the PVS project by Drs. David Guyton, Kristi Irsch and Yi-Kai Wu, as described in [18].
BG holds U.S. Patent No. 8,678,592 B2 ("Method and Apparatus for detecting fixation of at least one eye of a subject on a target" 2014) covering aspects of retinal birefringence scanning and detection of short-lasting moments of central fixation. The patent was assigned to The Johns Hopkins University.
The datasets analysed and generated during this study are not publicly available due to the existence of several pending patent applications by The Johns Hopkins University related to the Pediatric Vision Screener, but could be made available for purely research purposes from the author on reasonable request.
Not applicable. The manuscript does not contain data from any individual person.
The study protocol and the patient consent forms had been approved by the Office of Human Subjects Research, Institutional Review Boards, at The Johns Hopkins University (Study name "Retinal Birefringence Scanning of the Eye", Study number NA_00050844/CR00006962, Committee IRB-3, Committee Chair: Richard Moore, Date of approval: August 4, 2015).
This work, as well as the development of the Pediatric Vision Screener, was supported by an Individual Biomedical Research Award from The Hartwell Foundation, gifts from Robert and Maureen Feduniak, Dewey and Janet Gargiulo, David and Helen Leighton, Richard and Victoria Baks, Robert and Diane Levy, by Research to Prevent Blindness, and by the Knights Templar Eye Foundation.
Laboratory of Ophthalmic Instrument Development, The Krieger Children's Eye Center at the Wilmer Institute, Wilmer Eye Institute, 233, The Johns Hopkins University School of Medicine, 600 N. Wolfe Street, Baltimore, MD, 21287-9028, USA
Boris I. Gramatikov
Correspondence to Boris I. Gramatikov.
Gramatikov, B.I. Detecting central fixation by means of artificial neural networks in a pediatric vision screener using retinal birefringence scanning. BioMed Eng OnLine 16, 52 (2017). https://doi.org/10.1186/s12938-017-0339-6
Vision screener
Fixation detection
Birefringence
JMP #58, 4: path integrals and friends
By Gianluigi Filippelli on Monday, June 26, 2017
The path integral formulation of quantum mechanics replaces the single, classical trajectory of a system with a sum over infinitely many possible quantum trajectories; this sum is computed with a functional integral. The most famous interpretation is due to Richard Feynman. In a Euclidean spacetime one speaks of the Euclidean path integral:
Bernardo, R. C. S., & Esguerra, J. P. H. (2017). Euclidean path integral formalism in deformed space with minimum measurable length. Journal of Mathematical Physics, 58(4), 042103. doi:10.1063/1.4979797
We study time-evolution at the quantum level by developing the Euclidean path-integral approach for the general case where there exists a minimum measurable length. We derive an expression for the momentum-space propagator which turns out to be consistent with recently developed $\beta$-canonical transformation. We also construct the propagator for maximal localization which corresponds to the amplitude that a state which is maximally localized at location $\xi'$ propagates to a state which is maximally localized at location $\xi"$ in a given time. Our expression for the momentum-space propagator and the propagator for maximal localization is valid for any form of time-independent Hamiltonian. The nonrelativistic free particle, particle in a linear potential, and the harmonic oscillator are discussed as examples.
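As a reminder of the standard, undeformed case (a textbook expression, not taken from the paper above), the Euclidean propagator of a nonrelativistic particle of mass $m$ in a potential $V$ is written as a functional integral over paths weighted by the Euclidean action:
$$K_E(x_b, \tau_b; x_a, \tau_a) = \int \mathcal{D}[x(\tau)]\, e^{-S_E[x]/\hbar}, \qquad S_E[x] = \int_{\tau_a}^{\tau_b} \left[ \frac{m}{2}\dot{x}^2 + V(x) \right] d\tau.$$
The deformed-space construction of Bernardo and Esguerra modifies this picture so that a minimum measurable length is built in.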
Other papers from JMP #58, 4, follow:
Okołów, A. (2017). Kinematic projective quantum states for loop quantum gravity coupled to tensor fields. Journal of Mathematical Physics, 58(4), 042302. doi:10.1063/1.4980014 (sci-hub)
We present a construction of kinematic quantum states for theories of tensor fields of an arbitrary sort. The construction is based on projective techniques by Kijowski. Applying projective quantum states for Loop Quantum Gravity (LQG) obtained by Lanéry and Thiemann we construct quantum states for LQG coupled to tensor fields.
Almeida, C. R., Batista, A. B., Fabris, J. C., & Moniz, P. V. (2017). Quantum cosmology of scalar-tensor theories and self-adjointness. Journal of Mathematical Physics, 58(4), 042301. doi:10.1063/1.4979537 (arXiv)
In this paper, the problem of the self-adjointness for the case of a quantum mini-superspace Hamiltonian retrieved from a Brans-Dicke action is investigated. Our matter content is presented in terms of a perfect fluid, onto which Schutz's formalism will be applied. We use the von Neumann theorem and the similarity with the Laplacian operator in one of the variables to determine the cases where the Hamiltonian is self-adjoint and if it admits self-adjoint extensions. For the latter, we study which extension is physically more suitable.
Dudnikova, T. V. (2017). On convergence to equilibrium for one-dimensional chain of harmonic oscillators on the half-line. Journal of Mathematical Physics, 58(4), 043301. doi:10.1063/1.4979629 (arXiv)
The mixing boundary-value problem for infinite one-dimensional chain of harmonic oscillators on the half-line is considered. The large time asymptotic behavior of solutions is obtained. The initial data of the system are supposed to be a random function which has some mixing properties.
Li, Q., & Wu, X. (2017). Existence, multiplicity, and concentration of solutions for generalized quasilinear Schrödinger equations with critical growth. Journal of Mathematical Physics, 58(4), 041501. doi:10.1063/1.4982035 (sci-hub)
In this paper, we study the following generalized quasilinear Schrödinger equations (...). Under some suitable conditions, we study the existence, multiplicity, and concentration of solutions by the variational methods.
Exner, P., & Lipovský, J. (2017). Pseudo-orbit approach to trajectories of resonances in quantum graphs with general vertex coupling: Fermi rule and high-energy asymptotics. Journal of Mathematical Physics, 58(4), 042101. doi:10.1063/1.4979048 (sci-hub)
The aim of the paper is to investigate resonances in quantum graphs with a general self-adjoint coupling in the vertices and their trajectories with respect to varying edge lengths. We derive formulae determining the Taylor expansion of the resonance pole position up to the second order, which represent, in particular, a counterpart to the Fermi rule derived recently by Lee and Zworski for graphs with the standard coupling. Furthermore, we discuss the asymptotic behavior of the resonances in the high-energy regime in the situation where the leads are attached through $\delta$ or $\delta'_s$ conditions, and we prove that in the case of $\delta'_s$ coupling the resonances approach to the real axis with the increasing real parts as $\mathscr{O} \left ( (\text{Re} \, k)^{−2} \right )$.
Labels: abstract, journal of mathematical physics, mathematics, path integrals, physics, quantum mechanics, richard feynman
\begin{definition}[Definition:Plane Geometry]
'''Plane geometry''' is the study of geometric figures in two dimensions.
\end{definition}
\begin{document}
\input gtoutput \volumenumber{2}\papernumber{3}\volumeyear{1998} \pagenumbers{31}{64}\published{21 March 1998} \proposed{Cameron Gordon}\seconded{Joan Birman, Walter Neumann} \received{4 August 1997} \accepted{19 March 1998}
\title{A natural framing of knots}
\author{Michael T Greene\\Bert Wiest}
\address{Mathematics Institute\\University of Warwick\\Coventry CV4 7AL, UK\\kern.2em
\\\rm Email:\stdspace\tt [email protected]\stdspace{\rm or}\stdspace [email protected]\\[email protected]} \asciiaddress{Mathematics Institute, University of Warwick, Coventry CV4 7AL, UK. Email: [email protected], [email protected], [email protected]}
\begin{abstract} Given a knot $K$ in the 3--sphere, consider a singular disk bounded by $K$ and the intersections of $K$ with the interior of the disk. The absolute number of intersections, minimised over all choices of singular disk with a given algebraic number of intersections, defines the {\sl framing function} of the knot. We show that the framing function is symmetric except at a finite number of points. The symmetry axis is a new knot invariant, called the {\sl natural framing} of the knot. We calculate the natural framing of torus knots and some other knots, and discuss some of its properties and its relations to the signature and other well-known knot invariants. \end{abstract} \asciiabstract{ Given a knot K in the 3-sphere, consider a singular disk bounded by K and the intersections of K with the interior of the disk. The absolute number of intersections, minimised over all choices of singular disk with a given algebraic number of intersections, defines the framing function of the knot. We show that the framing function is symmetric except at a finite number of points. The symmetry axis is a new knot invariant, called the natural framing of the knot. We calculate the natural framing of torus knots and some other knots, and discuss some of its properties and its relations to the signature and other well-known knot invariants.}
\keywords{Knot, link, knot invariant, framing, natural framing, torus knot, Cayley graph}
\primaryclass{57M25}\secondaryclass{20F05}
\maketitlepage
Let $K\co S^1 \to S^3$ be an unoriented knot. Let $D$ be the 2--disk. We define a {\sl compressing disk of} $K$ to be a map $f\co D\to S^3$
such that $f|_{\partial D}=K$ and such that $f|_{{\rm int}(D)}$ is transverse to $K$.
Then $f|_{{\rm int}(D)}$ has only finitely many intersections with the knot.
We call the intersection points the {\sl holes} of the compressing disk, and denote their number by $n(f)$. So $n(f)=|\{ f^{-1}(K) \cap {\rm int}(D)\}|$. One rather crude invariant of the knot $K$ is the {\sl knottedness}
$$L(K):= \min\{n(f)\kern.2em|\ f \mbox{ a compressing disk}\},$$ which was first considered in \cite{Pannwitz}.
(The English term `knottedness' was taken from \cite{BuZi}.)
We note that holes can occur with two different signs (depending on the direction in which $K$ pierces $f(D)$), so we can refine the above invariant
by defining the {\sl framing function} $n_K\co {\mathbb Z} \to {\mathbb N}$ as follows: for a given $k\in {\mathbb Z}$ we minimize the absolute number of holes among all compressing disks with algebraically $k$ holes.
We shall see that (except at finitely many points in ${\mathbb Z}$) this function is symmetric around some value of $k$, and we call this `asymptotic symmetry axis' $k$ the {\sl natural framing} of $K$. The aim of this paper is to determine the natural framing of certain classes of knot, and to study its properties and relations with other knot invariants.
In section 1 we define the natural framing of knots and show that it shares many properties with the signature. In section 2 we define the natural framing of each component of a link, and calculate it for a number of links. It seems that for most ``simple'' knots the natural framing number is even; in section 3, however, we show that knots with odd natural framing do exist. In section 4 we prove that the natural framing of the $(p,q)$-torus knot is $-(p-1)(q-1)$. In section 5 we summarize all the information we have about the natural framing of knots with up to seven crossings.
\section{Definitions and general results}
Let $K\co S^1 \to S^3$ be a knot, and let $f\co D \to S^3$ be a compressing disk. By a standard general position argument we can assume that the singularities of a compressing disk $f$ are all transverse self-intersections which may end either in $K$ or in the branch points of Whitney umbrellas (for details see \cite{umbrellas}). In what follows, we will, without further explanation, talk about the double lines or the Whitney umbrellas of a compressing disk. The following result is from \cite{Pannwitz}.
\begin{lem}\label{noWU} \sl Let $f\co D \to S^3$ be a compressing disk. Then there exists a compressing disk $f'\co D \to S^3$, where $f'$ is an immersion, such that $n(f')\leqslant n(f)$. \end{lem}
\begin{proof} We homotope $f$ into general position. Now we only have to get rid of Whitney umbrellas. The double line starting at a Whitney umbrella either ends in another Whitney umbrella or in $K$. We move the branch point along the double line, shrinking the double line until it doesn't contain triple points. In the first case we then perform a surgery along the double line; this eliminates the double line and the two Whitney umbrellas. In the second case we slide the knot over the branch point of the Whitney umbrella; this leaves the knot type unchanged, and eliminates the Whitney umbrella, the double line, and one hole. (Also, it changes the framing represented by the compressing disk by $\pm 1$.) Applying this process to every branch point gives the result. \end{proof}
\begin{prop}[Pannwitz]For any knot $K$, $L(K)$ is even. \end{prop} \begin{proof} Let $f\co D\to S^3$ be a compressing disk with $n(f)=L(K)$. As we have just seen, we may assume that $f$ is an immersion, so it has no Whitney umbrellas. Every hole of $f$ is the beginning of a double point line. Now this line must end in another hole (not a Whitney umbrella); so there must be an even number of holes.\end{proof}
{\bf Remark}\stdspace For any non-trivial knot $K$ we have $L(K)\geqslant 2$. This follows from Dehn's Lemma and the fact that $L(K)$ is even.
\begin{theorem}\label{Ladditive}The knottedness is additive under connected sum, ie if $K_1$ and $K_2$ are knots then $L(K_1\#K_2)=L(K_1)+L(K_2)$. \end{theorem}
\begin{proof} It is obvious that $L(K_1\#K_2) \leqslant L(K_1)+L(K_2)$; we need to prove the opposite inequality.
Choose an embedded sphere $S$ in $S^3$ which intersects the knot in only two points, splitting $K_1$ from $K_2$. Let $f\co D \to S^3$ be a compressing disk of $K_1\#K_2$ with only $L(K_1\#K_2)$ holes. Make the map $f$ transverse to $S$ without changing the number of holes. $f^{-1}(S)$ consists of one arc $A$ in $D$ connecting two points on $\partial D$, and a number of disjoint circles in $D$. Choose an outermost one of those, and call it $c$. Let $C$ be the disk bounded by $c$. Say $c$ represents an element $[k]\in H_1(S\backslash(S \cap K)) \cong H_1(S^3\backslash(K_1\#K_2)) \cong {{\mathbb Z}}$; then there are at least $k$ holes inside $C$.
$S\backslash(S \cap K)$ is a sphere with two holes, so we can replace $f|_C$ by a map whose image lies entirely in $S$, and which has precisely $k$ intersections with $K$. After pushing the image away from $S$ by a small homotopy, we have a map $f'$ such that $f'|_C$ has precisely $k$ holes and maps no point of $C$, or a small neighbourhood of $C$, to $S$.
This construction has replaced the compressing disk $f$ by a compressing disk $f'$ with at most as many holes (and the same framing). Since by hypothesis $n(f)$ is minimal, we have $n(f')=n(f)$. Applying the construction to all outermost circles of intersection yields a compressing disk with $L(K_1\#K_2)$ holes, which has the property that the arc $A$ is mapped to $S$, one of the two components of $D\backslash A$ is entirely mapped to one of the two components of $S^3\backslash S$, and the other component of $D\backslash A$ is mapped to the other component of $S^3\backslash S$. This gives rise to compressing disks for the knots $K_1$ and $K_2$ with $n_1$ and $n_2$ holes respectively such that $n_1+n_2=L(K_1\#K_2)$.\end{proof}
Let $d$ be the inner boundary of a small neighbourhood of $\partial D$ in $D$; then $d$ is a curve in $D$ `close to $\partial D$'. $f$ maps $d$ to a longitude of the knot. If we orient $d$ and $\partial D$ in the same way, then $f(d)$ and $f(\partial D)=K$ have a linking number $[f(d)]\in H_1(S^3\backslash K)\cong {\mathbb Z}$. We denote this number by $k(f)$. So $d$ represents the {\sl framing} $k(f)$ of $K$. Geometrically, this framing can be obtained as follows: choose an orientation of $D$; this determines an orientation of $\partial D$ and hence of $K$. For $i\in \{1,\ldots,n(f)\}$ let $x_i\in D$ be the $i$th hole of $f$. Let $\sigma(i)=1$ if a positive basis of $T_{f(x_i)}f(D)$ followed by a positive tangent vector of $K$ at $f(x_i)$ forms a positive basis of $T_{f(x_i)}S^3$, and let $\sigma(i)=-1$ otherwise. Then $k(f)=\sum_{i=1}^{n(f)}\sigma(i)$. \ppar
To every knot $K$ we can associate a function $n_K\co {{\mathbb Z}} \to {\mathbb N}$ which we call the {\sl framing function} as follows:
$$ n_K(k'):=\min\{n(f)\kern.2em|\ f \mbox{ a compressing disk with }
k(f)=k'\} $$ Notice that $L(K)=\min n_K$.
\begin{prop}\label{nproperties} \sl For any knot $K$, the function $n_K$ has the following properties. \begin{itemize}
\item[{\rm (i)}] $n_K(k) \geqslant |k|$ for all $k\in {\mathbb Z}.$ \item[{\rm (ii)}] $n_K$ maps even numbers to even numbers and odd numbers to odd numbers. \item[{\rm (iii)}] `Continuity': For any $k\in {\mathbb Z}$ we have $n_K(k+1)=n_K(k) \pm 1$. \item[{\rm (iv)}] If $k\in {\mathbb Z}$ is odd, then $n_K(k)= \min\{n_K(k-1),n_K(k+1)\}+1$. In particular, the function $n_K$ is completely determined by its values on even numbers. \end{itemize} \end{prop}
\proof (i) is obvious. \begin{itemize} \item[(ii)] follows from the fact that $\sum_{i=1}^n \sigma(i)$ (where $\sigma(i)\in \{-1,1\}$) is even if and only if $n$ is even. \item[(iii)] By (ii), $n_K(k+1)\neq n_K(k)$. So for definiteness say $n_K(k+1) > n_K(k)$. Let $f$ be a compressing map with $n_K(k)$ holes and framing $k$. By artificially introducing a Whitney umbrella we can obtain a compressing disk with $n_K(k)+1$ holes and framing $k+1$. \item[(iv)] Suppose $f$ is a compressing disk with odd framing number. Then $f$ has at least one Whitney umbrella. If we remove this as in Lemma 1.1, we obtain another compressing disk with framing number $k(f) \pm 1$ and $n(f)-1$ holes. It follows that for every odd $z\in {\mathbb Z}$ we have either $n_K(z+1)<n_K(z)$ or $n_K(z-1)<n_K(z)$. This implies (iv).\endproof \end{itemize}
Let $B$ in $S^3$ be a double-point line of a compressing disk $f$. Suppose that $B$ does not end in a Whitney umbrella, and suppose $B$ is not a closed curve. Then $f^{-1}(B)$ consists of two lines in $D$ (not necessarily disjoint or embedded). There are two possible cases: either one of them connects two holes and the other connects two points on $\partial D$, when $B$ is a {\sl ribbon singularity}; or each of the two lines connects one hole with one point on $\partial D$, when $B$ is a {\sl clasp singularity}.
We call a clasp singularity {\sl positive} or {\sl negative} if it ends at two positive or negative holes, respectively.
We say a clasp or ribbon singularity is {\sl short} if its double-point line meets no triple points.
{\bf Conjecture}\stdspace For any nontrivial knot $K$ we have $n_K(0) \geqslant 4$.
This is a strengthened form of Dehn's Lemma. Dehn's Lemma states that the only knot in $S^3$ which has a compressing disk without holes is the unknot. Our conjecture asserts that this remains true under the weakened hypothesis that the compressing disk has 2 intersections with $K$ of opposite sign (ie one ribbon singularity $B$). This is easy to show with the extra hypothesis that $B$ is short.
\begin{definition}\sl The function $n_K$ gives rise to a framing of the knot: $$ \nu(K):= \lim_{k\to \infty} {n_K(-k)-n_K(k) \over 2} $$ is called the {\rm natural framing} of $K$. Alternatively, we can define $\nu(K)$ to be the unique integer such that there exists an $N\in {\mathbb N}$ with $n_K(\nu(K)-k)=n_K(\nu(K)+k)$ for all $k \geqslant N$.\end{definition}
\begin{lem}\sl This is well-defined.\end{lem}
\begin{proof} For $k\in {\mathbb N}$ let $a_k=n_K(-k)-|{-}k|$ and let $b_k=n_K(k)-|k|$.
We have $(n_K(-k)-n_K(k))/2=(a_k-b_k)/ 2$. Moreover, by Proposition \ref{nproperties} (i) and (iii), both $a_k$ and $b_k$ are decreasing sequences in $2{\mathbb N}$. Therefore there exists an $N\in {\mathbb N}$ such that for $k\geqslant N$ the sequences $(a_k)$ and $(b_k)$ are constant. It follows that the limit exists. Furthermore, by \ref{nproperties} (ii) we have that $n_K(-k)-n_K(k)$ is even for all $k\in {\mathbb N}$, so $\nu(K)$ is indeed an integer.\end{proof}
The natural framing is an `asymptotic symmetry axis' of the framing function. As a first example, we look at the framing function of the figure-of-eight knot $4_1$ (from the table in \cite{Rolfsen}). Only the value $n_{4_1}(0)=4$
\begin{figure}
\caption{The function $n_{4_1}$}
\end{figure}
is conjectured. The fact that $n_{4_1}(2)=2$ follows from an easy construction (a compressing disk with one positive clasp and no other singularities). Similarly we see that $n_{4_1}(-2)=2$. The rest of the proof follows immediately from Proposition \ref{nproperties}.
We observe that $\nu(4_1)=0$. More generally we have:
\begin{prop}\label{nmirror} \sl Let $K$ be a knot, and let $mK$ be its mirror image. Then $n_K(k)=n_{mK}(-k)$ for all $k\in {\mathbb Z}$. In particular, $\nu(K)=-\nu(mK)$, and if $K$ is amphicheiral then
$\nu(K)=0$. \end{prop}
\begin{proof} Let $f$ be a compressing disk with framing $k$ and with $n_K(k)$ holes. Then $m\circ f$, where $m$ is the mirror map, is a compressing map of the knot $mK$ with framing $-k$ and also $n_K(k)$ holes. It follows that $n_{mK}(-k)\leqslant n_K(k)$. The opposite inequality is proved in the same way.\end{proof}
So the natural framing $\nu(K)$ changes sign under taking the mirror image, a property it shares with the well-known signature $\sigma(K)$ (see, for example, \cite{Rolfsen}). There is an even closer relation:
\begin{prop}\sl If a knot $K$ has a compressing disk with positive clasps and closed double-point lines, but no negative clasps or ribbons, then $\nu(K)\geqslant0$. If all the clasps are short, then we have in addition $\sigma(K)\leqslant0$.\end{prop}
\begin{proof} If there exists a compressing disk with say $c$ positive clasps and no negative clasps or ribbons then we have $n_K(k)=k$ for all $k\geqslant 2c$. It follows that $\nu(K)\geqslant0$. If in addition all these clasps are short, then we can unknot $K$ by $c$ negative crossing changes. According to \cite{Giller} it follows that $-2c\leqslant\sigma(K)\leqslant0$.\end{proof}
As an application, we can prove that for a large family of knots the natural framing and the signature are both zero. We only have to construct a compressing disk with only short positive clasps and no other singularities, and another compressing disk with only short negative clasps.
{\bf Examples}\stdspace(1)\stdspace If $K$ is one of the so-called `twist knots' $4_1, 6_1, 8_1, 10_1$ etc then $\nu(K)=\sigma(K)=0$. The case of the stevedore's knot $6_1$ is illustrated in Figure \ref{mtg61}, and
\begin{figure}
\caption{Two different compressing disks of the knot $6_1$}
\label{mtg61}
\end{figure}
the other cases are similar. On the left we see a compressing disk with one short clasp singularity with positive sign (indicated by the dashed lines), proving that $n_{K_4}(2)\leqslant 2$. On the right we see a compressing disk with two short clasp singularities of negative sign, proving that $n_{K_4}(-4)\leqslant 4$.
(2)\stdspace This construction can be generalized. Consider the family of $r$--bridge knots indicated in Figure \ref{mtgplat}, where the $a_j^{(i)}$ and $b_j^{(i)}$ are all non-negative and even. On the right, choosing a particular 12--crossing knot as an example, we see two compressing disks; the first has only short positive clasps, the second only short negative clasps.
\begin{figure}
\caption{A family of knots whose natural framing and signature are zero}
\label{mtgplat}
\end{figure}
{\bf Conjecture}\stdspace The natural framing of the twist knots $3_1$, $5_2$,
$7_2$, $9_2$, $\ldots$ is $2$. More precisely, we conjecture that for such knots $n(k)=2+|k-2|$. The values for $k>0$ are easy to prove; the values for $k\leqslant0$ seem hard.
Even more generally, we can consider the ${4ml+1\over 2l}$--two-bridge knot (in the notation of \cite{BuZi}, Chapter 12). This is the rational knot $C(2m \ 2l)$
in the notation of Conway \cite{Conway}, and for $l=1$ we get twist knots. We conjecture that for $m,l\in{\mathbb Z}^+$ the natural framing of this knot is $\min(2m,2l)$.
\begin{lem}\label{sumle}\sl Let $K_1$ and $K_2$ be knots. Then $$ n_{K_1\#K_2}(k)=\min_{k'\in{\mathbb Z}}\kern.04em\bigl(n_{K_1}(k')+ n_{K_2}(k-k')\bigr). $$ \end{lem}
\begin{proof}As in the proof of Theorem \ref{Ladditive} we see that for every compressing disk $f$ of $K_1\#K_2$ with framing $k$ and $n_{K_1\#K_2}(k)$ holes we can find another compressing disk $f'$ with the same framing, the same number of holes and only one intersection curve with a separating sphere. The result follows. \end{proof}
\begin{prop} \label{nyadditive} \sl The natural framing is additive under connected sum; ie if $K_1$ and $K_2$ are knots then $\nu(K_1\#K_2)=\nu(K_1)+\nu(K_2)$. \end{prop} \begin{proof}
There exist $N,M \in {\mathbb N}$ and $c_1, c_2 \in {\mathbb Z}$ such that $n_{K_1}(\nu(K_1)+k)=c_1+|k|$ and
$n_{K_2}(\nu(K_2)+l)=c_2+|l|$ for all $k,l \in {\mathbb Z}$ with
$|k|\geqslant N$ and $|l|\geqslant M$.
Now let $k\geqslant N$ and $l\geqslant M$. By Lemma \ref{sumle} we have
$$ \begin{array}{r@{}c@{}l} n_{K_1\#K_2}(\nu(K_1)+\nu(K_2)+k+l)&{}\leqslant{}&
n_{K_1}(\nu(K_1)+k)+n_{K_2}(\nu(K_2)+l)\\
&{}={}&c_1+k+c_2+l. \end{array} $$ Furthermore we have $n_{K_1}(\nu(K_1)+k+a)\geqslant c_1+k+a$ and $n_{K_2}(\nu(K_2)+l-a)\geqslant c_2+l-a$ for all $a\in {\mathbb Z}$. Lemma \ref{sumle} implies that $$ \begin{array}{r@{}c@{}l} n_{K_1\#K_2}(\nu(K_1)+\nu(K_2)+k+l) &{}={}&\min\limits_{a\in {\mathbb Z}}\kern.08em\Bigl(n_{K_1}(\nu(K_1)+k+a)
\\
&&\hspace{2cm} {}+n_{K_2}(\nu(K_2)+l-a)\Bigr)\\
&{}\geqslant{}&c_1+k+c_2+l. \end{array} $$ We have proved that
$n_{K_1\#K_2}(\nu(K_1)+\nu(K_2)+k)=c_1+c_2+|k|$ for all $k\in {\mathbb Z}$ with $k\geqslant N+M$. The case $k\leqslant -N-M$ is proved similarly.\end{proof}
\section{A natural framing of links}
In this section we define a natural framing for each component of a link. By giving an example we prove that these framing numbers are {\it not} always even, and that they are {\it not} determined by the natural framings of the individual link components (regarded as knots) and their linking numbers.
Let $L=L_1\cup\ldots\cup L_m\co S^1\cup\ldots\cup S^1 \to S^3$ be an unoriented link with $m$ components. Let $D$ be the 2--disk. We define a {\sl compressing disk of the $i$th link component} $L_i$ ($i\in\{1,\ldots,m\}$) to be a map
$f\co D\to S^3$ transverse to $L$ such that $f|_{\partial D}=L_i$.
Then $f|_{{\rm int}(D)}$ has only finitely many intersections with $L$. We call these intersection points the {\sl holes} of the compressing disk, and denote their number by $n(f)$. We choose an orientation of $L_i$. This induces an orientation of $D$. We
look at an intersection point of $f|_{{\rm int}(D)}$ with $L_i$. We define such a hole to be {\sl positive} or {\sl negative}, depending on whether a positive basis of the tangent space to $D$ followed by a positive tangent vector to $L_i$ forms a positive or a negative basis of $S^3$, respectively. This is well-defined (ie independent of the choice of orientation of $L_i$). We denote by $k(f)\in{\mathbb Z}$ the number of intersections of
$f|_{{\rm int}(D)}$ with $L_i$ (not {\it all}\/ link components!), counted algebraically. Again, we can think of $k(f)$ as the framing of $L_i$ defined by $f$.
To every component $L_i$ of $L$ we can associate a function $n_i\co {{\mathbb Z}} \to {\mathbb N}$ which we call the {\sl $i$th framing function} as follows: $$
n_i(k'):=\min\{n(f)\kern.2em|\ f\hbox{ a compressing disk of }L_i
\hbox{ with }k(f)=k'\}. $$ Precisely as in the case of knots we define the {\sl natural framing of the component $L_i$ of $L$} by $$ \nu_i(L):=\lim_{k\to\infty}{n_i(-k)-n_i(k)\over2}, $$ and prove that this limit exists.
It is clear from the definition that the natural framing of each component of $L$ is an integer multiple of ${1\over2}$. We claim:
\begin{lem}\sl The natural framing of each component of $L$ is an integer.\end{lem}
\begin{proof} It suffices to show that for each $i\in\{1,\ldots,m\}$ we have either $n_i(k) \equiv k$ (mod $2$) for all $k\in {\mathbb Z}$ or $n_i(k)+1 \equiv k$ (mod $2$) for all $k\in {\mathbb Z}$. To see this, let $f$ be a compressing disk with $n(f)=n(k')$, where $k'=k(f)$. Equip all link components with an orientation, no matter which. Let $\tilde{k}(f)$ be the number of
intersections of $f|_{{\rm int}(D)}$ with {\it all} components of $L$, counted algebraically. We have $$ \tilde{k}(f) \equiv n(f) \mbox{ (mod } 2)$$ and $$ k(f)=\tilde{k}(f)-\sum_{j\neq i} lk(L_i,L_j).$$ It follows that $$ n(f)-k(f) \equiv \sum_{j\neq i} lk(L_i,L_j) \mbox{ (mod } 2).$$ Since $\sum_{j\neq i} lk(L_i,L_j)$ is independent of $f$, the result follows.\end{proof}
{\bf Examples}\stdspace (1)\stdspace The trivial link on $m$ components. Let $L$ be the link consisting of $m$ unknotted, unlinked components. It is easy to see
that the framing function of each component is $n_i(k)=|k|$. It follows that $\nu_i(L)=0$ for each $i\in \{1,\ldots,m\}$.
(2)\stdspace The Hopf link. The two link components $L_1$ and $L_2$ have linking
number 1, so any compressing disk of the first component has at least one intersection with the second. Therefore we have $n_1(k)\geqslant |k|+1$ for all $k\in {\mathbb Z}$. It is easy to construct compressing disks of $L_1$ which have framing $k$
and $|k|+1$ holes, so $n_1(k)=|k|+1$ for all $k\in {\mathbb Z}$. It follows that $\nu_1(L)=0$, and similarly $\nu_2(L)=0$.
(3)\stdspace The link with two components shown in Figure \ref{oddlink} for any $t\geqslant 1$, consisting of an unknot and a twist-knot (eg for $t=2$ we get the knot $6_1$).
\begin{figure}
\caption{Two different compressing disks of $L_2$}
\label{oddlink}
\end{figure}
\begin{prop} \sl We have $\nu_1(L)=0$ and $\nu_2(L)=1$ \end{prop}
\begin{proof} $L_1$, regarded only as a closed curve in $S^3\backslash L_2$, represents a nontrivial element of $\pi_1(S^3\backslash L_2)$. Therefore any compressing disk of $L_1$ has at least one intersection with $L_2$. Since the linking number of $L_1$ and $L_2$ is 0, there must in fact be a minimum of 2 holes. Therefore we have $n_1(k)\geqslant |k|+2$ for $k\in {\mathbb Z}$, and again equality follows by construction. It follows that $\nu_1(L)=0$.
In order to visualize compressing disks of $L_2$, we draw their lines of self-intersections, and also the lines of intersection with the obvious compressing disk of $L_1$ which has no self-intersections and two intersection points with $L_2$.
There exists a compressing disk of $L_2$, indicated in Figure \ref{oddlink}(a), which is disjoint from $L_1$, and has one clasp singularity connecting two positive holes. This proves that $n_2(k)=k$ for $k\geqslant 2$.
On the other hand, there exists a compressing disk, indicated in Figure \ref{oddlink}(b), which has two intersections of opposite sign with $L_1$, and $t$ clasp singularities, each connecting two negative holes. This proves
that $n_2(k)\leqslant |k|+2$ for $k\leqslant -2t$.
Next let $f$ be a compressing disk with $k(f)\leqslant 0$. The image of $f$ has an even number of intersections with $L_1$, because the linking number of $L_1$ with $L_2$ is $0$. We distinguish two cases. If $f$ is disjoint from $L_1$, ie if the image of $f$ is contained in the solid torus $S^3\backslash L_1$, then using a simple covering space argument it
is easy to prove that $f$ has at least two positive holes. So $n(f)\geqslant |k(f)|+4$ for all such compressing disks. If, however, the image of $f$ has $2s$ intersections with $L_1$
($s\geqslant 1$) then $n(f)\geqslant |k(f)|+2s$. In either case,
$n(f)\geqslant |k(f)|+2$. It follows that $n_2(k)\geqslant |k|+2$ for $k\leqslant 0$.
Altogether we have $n_2(k)=|k|+2$ for $k\leqslant -2t$, and therefore $\nu_2(L)=1$.\end{proof}
{\bf Example}\stdspace(4)\stdspace The Whitehead link, which is the case $t=0$ in Figure \ref{oddlink}. We have $\nu_2(L)=1$, by precisely the same argument as in the case $t>0$. The Whitehead link is isotopic to itself with the roles of $L_1$ and $L_2$ interchanged. Therefore $\nu_1(L)=1$. (Note that the reasoning behind the calculation of $\nu_1(L)$ for the case $t>0$ does not apply here, since the {\it path} $L_1$ {\it is} contractible in $S^3\backslash L_2$.) \ppar
These results are remarkable, because they show that $\nu_i(L)$, the natural framing of the $i$th link component, is not determined by the natural framing numbers of all individual link components and their linking numbers. More precisely, $\nu_i(L)$ is not determined by $\nu(L_1),\ldots,\nu(L_m)$ and $lk(L_r,L_s)$ ($r,s\in \{1,\ldots,m\}$).
Also, the natural framing numbers of link components can be odd. It is not obvious that {\it knots} with odd natural framings exist, but example (3) will lead to the construction of such a knot in the next section.
\begin{prop}\label{linkcons}\sl Let $L^{(1)}=L^{(1)}_1\cup\ldots\cup L^{(1)}_r$ and $L^{(2)}=L^{(2)}_1\cup\ldots\cup L^{(2)}_s$ be links in $S^3$ with $r$ and $s$ components, respectively. Let $L^{(3)}$ be the link obtained by embedding $L^{(1)}$ and $L^{(2)}$ on either side of some embedded $S^2$ in $S^3$, and connecting $L^{(1)}_1$ and $L^{(2)}_1$ by a `band', as in the construction of the connected sum of two knots. $L^{(3)}$ has $r+s-1$ components, which we label such that $L^{(3)}_1$ is the one that contains the `band'. Then $\nu_1(L^{(3)})=\nu_1(L^{(1)})+\nu_1(L^{(2)})$. \end{prop}
\begin{proof} The proof is virtually identical to the proof of \ref{nyadditive}. (Note that the link $L^{(3)}$ does not depend on the choice of the band.)\end{proof}
\begin{cor}\sl Let $L=L_1\cup\ldots\cup L_r$ be a link in $S^3$. Then we can add one unknotted link component to $L$ in such a way that the natural framing of $L_1$ is increased by $1$, and such that the natural framings of $L_2,\ldots,L_r$ remain unchanged.\end{cor}
\begin{proof} Both components of the Whitehead link are unknotted, so taking the connected sum of $L_1$ with either of its components doesn't change the type of $L$. The result now follows from Example (4) and Proposition \ref{linkcons}.\end{proof}
\section{A knot with odd natural framing}
For `simple' knots, eg knots with low crossing number, the natural framing always appears to be an even number. In this section we exhibit a knot $K$ with $\nu(K)=1$. This knot is a satellite of a connected sum of three knots. We do not know an atoroidal knot with odd natural framing number.
\begin{figure}
\caption{The knot $K$ with $\nu(K)=1$}
\label{oddkn}
\end{figure}
We need one more technical tool. Let $K$ be a knot, and let $f\co D \to S^3$ be any continuous map. We define a {\sl branched hole of} $f$ to be a point $p\in D$ such that $f(p)\in K$ and such that $f$ maps the boundary of a small disk containing $p$ to some power of a meridian of $K$. So branched holes may just be transverse intersections of $f(D)$ with $K$, but they may also be essentially nontransverse. If all intersections of $f(D)$ with $K$ are branched holes, then we say $f$ is {\sl branched transverse}.
Let $\rho_n\co S^1 \to S^1$ be the standard map of degree $n$. Then any branched transverse map $f\co D \to S^3$ with $f|_{\partial D}=K\circ \rho_n$ has at least one branched hole, by the loop theorem. More generally:
\begin{lem} \sl \label{branchedlem}
Let $K_1,\ldots,K_m$ be nontrivial knots. Then any branched transverse map $f\co D \to S^3$ with $f|_{\partial D}=(K_1\#\ldots\# K_m)\circ \rho_n$ has at least $m$ branched holes. \end{lem}
\begin{proof} The proof is similar to, but even simpler than, the proof of Theorem \ref{Ladditive}. For any given compressing disk $f$ there exists a compressing disk $f'$ with no more branched holes than $f$ and with only $m$ arcs (no closed curves) of intersection with a separating sphere. Then $f'$ can be split into two disks, and the lemma follows inductively. \end{proof}
We are now ready to prove the main result of this section. Let $K$ be the knot indicated in Figure \ref{oddkn}.
\begin{theorem}\label{mainth}The framing function of $K$ satisfies
$n(k)=|k|+2$ for $k\geqslant 2$ and $n(k)=|k|+4$ for $k\leqslant -4$. In particular, $\nu(K)=1$. \end{theorem}
\begin{proof}
It is easy to construct a compressing disk with $6$ negative and $2$ positive holes, so $n(k)\leqslant |k|+4$ for $k\leqslant -4$ (see Figure \ref{optimcd}(a)). It is also easy to construct a compressing disk with $1$ negative and $3$ positive holes (see Figure \ref{optimcd}(b)),
\begin{figure}
\caption{The two `optimal' compressing disks of $K$}
\label{optimcd}
\end{figure}
so $n(k)\leqslant |k|+2$ for $k\geqslant 2$. We want to prove that these are in fact equalities, ie that every compressing disk of $K$ has at least one negative and two positive holes.
Consider the knotted solid torus $S$ containing $K$, with a meridinal curve $c$ on its boundary, as indicated in Figure \ref{torusS}. The core of $S$ is a connected sum of three trefoil knots.
\begin{figure}
\caption{The solid torus $S$ containing $K$}
\label{torusS}
\end{figure}
Let $f\co D\to S^3$ be a compressing disk transverse to $\partial S$ with $n(f)=n(k(f))$, ie $f$ has the minimal possible number of holes for its framing. Two compressing disks $f_0$ and $f_1$ are called {\sl isotopic} if there is a homotopy $f_t$ ($t\in [0,1]$) which is fixed on $\partial D$ $$ f_t \co (D,D-f_0^{-1}(K)) \to (S^3,S^3-K). $$ We can assume that among all disks isotopic to $f$, $f$ has the minimal number of intersections points $f(D)\cap c$.
\begin{lem} \sl \label{fcdisj} The compressing disk does not intersect the curve $c$. \end{lem}
\begin{proof}[Proof of the Lemma]Assume it does. Then we look at intersection lines $f(D)\cap \partial S$. Each such curve represents an element $(m,l)\in H_1(\partial S) \cong {\mathbb Z}^2$, with $(1,0)$ corresponding to a standard meridian of $S$. We can assume that there are no
inessential curves, ie no curves representing $(0,0)$, because we can remove them by an isotopy of $f$. We call curves representing $(m,0)$ ($m\in {\mathbb Z}\backslash 0$) {\sl meridinal} curves, and all others except the trivial one {\sl longitudinal} curves. Since $|f(D)\cap c|$ is assumed minimal, meridinal curves are disjoint from $c$, so there is at least one longitudinal curve. The preimages of longitudinal curves are disjoint embedded circles in $D$, and we let $\delta$ be an innermost one. Then $\delta$ bounds a disk $\Delta$
in $D$ such that $f|_\Delta$ has only meridinal intersections with $\partial S$. These meridinal curves are noncontractible in $S-K$, and they have linking number $0$ with $K$, so a disk in $\Delta$ bounded by the preimage of a meridinal curve contains at least one positive and one negative hole. By Lemma \ref{branchedlem} there are at least three such meridinal curves. It follows that the framing disk has at least three positive and three negative holes, contradicting the hypothesis that the number of holes is minimal for its framing. \end{proof}
$S^3 \backslash c$ is a solid torus; since, by Lemma \ref{fcdisj}, $f(D)$ is disjoint from $c$, we can lift $K$ and the compressing disk $f$ to its universal cover. This is an open, infinite solid cylinder, and thus homeomorphic to ${\mathbb R}^3$ (see Figure \ref{oddKlift}).
The preimage of $K$ under the covering space projection consist of a ${\mathbb Z}$--family of link components $\ldots,L_{-1},L_0,L_1,L_2,\ldots$, but we simply take away the link components $\ldots,L_{-3}$, $L_{-2},L_{-1}$ (see Figure \ref{oddKlift}). We denote by $f'$ the lifting of the disk which sends $\partial D$ to $L_1$. $L_1$, regarded only as a closed curve, is noncontractible in ${\mathbb R}^3\backslash L_2$ and has linking number zero with $L_2$, so $f'$ has at least one positive and one negative intersection with $L_2$. It follows that $f$ has at least one positive and one negative hole.
We can now also forget about the link components $L_i$ for $i\geqslant 2$, and only consider $L_0$ and $L_1$.
\np
\phantom{\tiny .} \kern -\baselineskip
$$ \epsfbox{oddKlift.eps}
$$
\begin{figure}
\caption{The universal cover of $S^3-c$}
\label{oddKlift}
\end{figure} \np
\begin{lem} \sl The disk $f'$ has at least one positive intersection, either with $L_1$ or with $L_0$. \end{lem}
\begin{proof}[Proof of the Lemma]We embed a solid torus $T$ in ${\mathbb R}^3$ such that it contains $L_0$, as indicated. We can assume that $f'$ has the minimal number of intersection lines with $\partial T$ among all compressing disks isotopic to $f'$. There are two possibilities to consider. Either $f'$ does not intersect $T$. Then again by a covering space argument we see that $f'$ has at least two positive intersections with $L_1$. The other possibility is that $f'$ {\it has } intersections with $\partial T$. The preimages of these intersection lines are disjoint embedded circles in $D$, and we denote by $\gamma$ an innermost one of them; $\gamma$ bounds a disk $\Gamma\subseteq D$. Recall that $f'(\gamma)$ is assumed noncontractible in $\partial T$. So either $f'(\gamma)$ is a power of the $0$--longitude of $T$, in which case $f'(\gamma)$ has at least one positive and one negative intersection with $L_1$; or $f'(\gamma)$ has nonzero linking number with the core of $T$, in which case the compressing disk must intersect $L_0$. Since ${\rm lk}(L_1,L_0)=0$, we have at least one positive and one negative intersection with $L_0$. \end{proof}
In summary, $f'$ has at least one negative and one positive intersection with $L_2$, and at least one more positive intersection with $L_1$ or $L_0$, so $f$ has at least one negative and two positive holes. This completes the proof of Theorem \ref{mainth}.\end{proof}
The signature of a knot is always an even number. So in particular, the absolute value of the natural framing and the signature of a knot can definitely be different.
\section{The natural framing of torus knots}
In this section we prove that the natural framing of $T(p,q)$, the $(p,q)$--torus knot, is $-(p-1)(q-1)$. More precisely, we exhibit a compressing disk of $T(p,q)$ with $(p-1)(q-1)$ negative
and no positive
holes, and we prove that every compressing disk must have at least $(p-1)(q-1)$ negative holes. A very different, more constructive proof of the special case $q=2$ can be found in \cite{nat3}.
We start by reinterpreting the framing function. Let $K\subseteq S^3$ be an oriented knot, and let $G=\pi_1(S^3\backslash K)$ be the knot group. A {\sl positive} or {\sl negative Wirtinger generator} of $G$ is a path from the basepoint to the boundary of a tubular neighbourhood of $K$, then once around a meridian of the tubular neighbourhood according to the right or left hand rule respectively, and back along the first segment of path in the opposite direction. Let $x_1,\ldots,x_m$ be positive Wirtinger generators which together generate the knot group. Fix a path $\gamma$ from the boundary of a tubular neighbourhood of the knot to the basepoint. For $k\in {\mathbb Z}$ let $l_k\in G$ be the element represented by the path $\gamma^{-1}$ followed by the longitude with linking number $k$ with the knot followed by the path $\gamma$.
\begin{lem}\sl Take $\sigma_+,\sigma_-\in{\mathbb N}$. Let $k=\sigma_+-\sigma_-$ and $n=\sigma_++\sigma_-$. Then the following statements are equivalent:
\begin{enumerate}
\item[{\rm (i)}] There exists a compressing disk of $K$ with $\sigma_+$ positive and $\sigma_-$ negative holes.
\item[{\rm (ii)}] The longitude $l_k\in G$ is represented by a word $$ w_1^{-1}x_{i_1}^{\epsilon_1}w_1\kern.2em\ldots\
w_n^{-1}x_{i_n}^{\epsilon_n}w_n, $$ where each $w_i$ is a word in $\{x_i^{\pm 1}\}$, each $\epsilon_i=\pm 1$, and $\sum \epsilon_i = k$.
\end{enumerate}
\end{lem}
\begin{proof} Suppose that (i) holds. By retracting the compressing disk to a one-dimensional spine we see that the path $l_k$, which is the boundary of the compressing disk, is homotopic to a product of $\sigma_+$ positive and $\sigma_-$ negative Wirtinger generators. (ii) follows. Conversely, we can use a homotopy between a product of $\sigma_+$ positive and $\sigma_-$ negative Wirtinger generators and the path $l_k$ to construct a compressing disk with $\sigma_+$ positive and $\sigma_-$ negative holes.\end{proof}
Thus we can reinterpret the framing function $n_K\co {\mathbb Z} \to {\mathbb N}$ as follows. For $k\in {\mathbb Z}$ we let $$
n_K(k)=\min\{n\in {\mathbb N}\kern.2em|\ l_k \hbox{ is represented by a word in the above form}\}. $$ Note that $n_K(k)$ is independent of the choice of $\gamma$. Roughly speaking, we are trying to express the longitude with linking number $k$ as a shortest possible product of conjugates of the generators $x_1,\ldots,x_m$.
We call a finite presentation of a knot group in which all generators are Wirtinger generators a {\sl Wirtinger presentation}. The Cayley graph $\Gamma$ associated to such a presentation has natural `layers' corresponding to the elements' images under the natural map to $H_1(S^3\backslash K)\cong {\mathbb Z}$. Multiplying a given element of the knot group by any conjugate of a positive or negative Wirtinger generator corresponds to `stepping one layer up' or `down' respectively in $\Gamma$. There are $k$ conjugates algebraically in a word representing $l_k$. In the cases below, we shall be trying to use as few as possible, so we want to avoid taking steps `down' in $\Gamma$, ie using negative conjugates.
Consider the {\sl positive cone} from the identity in $\Gamma$, the set of elements which may be written as a product of positive conjugates. We want to know how close $l_k$ gets to this cone as $k$ increases; if we can show that, however large $k$ gets, $l_k$ still requires steps down, we will obtain a negative upper bound on the natural framing of $K$. For the left-hand trefoil, for example, we prove that $l_k$ requires at least 2 steps down for any $k$, corresponding to 4 extra holes of a compressing disk; hence $n(k)\geqslant k+4$, and, since we know that for the trefoil $n(-2)=2$,
we see $\nu(T(3,2))=-2$.
We shall show that $\nu(T(p,q))=-(p-1)(q-1)$ in two theorems, first for the case $q=2$ and then for $q\geqslant2$. The first proof is really a degenerate case of the second, but we introduce the ideas used in both theorems in the simpler context of $q=2$ and leave the additional calculation to the second case, where it first becomes necessary.
\begin{theorem}\sl The natural framing of $T(p,2)$ is $-(p-1)$; indeed, its framing function is given by $n(k)=(p-1)+|k+(p-1)|$. \end{theorem}
\begin{figure}
\caption{Generators of the knot group of the $(p,2)$--torus knot}
\label{p2knot}
\end{figure}
\begin{proof} Fix $p$. Draw $T(p,2)$ in the usual way in a diagram with $p$ negative crossings and $p$--fold symmetry as in Figure \ref{p2knot}. Label the overcrossing arcs $x_0$, $x_1$, $\ldots,$ $x_{p-1}$ clockwise around the diagram. Then the fundamental group of the complement is given by $$
G={<}x_0,x_1,\ldots,x_{p-1}\kern.08em|\>x_0x_{p-1}=\ldots=x_2x_1=x_1x_0{>}, $$ and the word $$ x_0^{k+p}x_1^{-1} x_3^{-1}\ldots\>x_{p-2}^{-1} x_0^{-1} x_2^{-1}\ldots\>x_{p-1}^{-1} $$ represents the longitude $l_k\in G$. Notice that $n_K(-(p-1))=p-1$; this follows from the existence of the disk drawn in Figure \ref{mtgdisk}, which has $p-1$ negative holes and no positive
ones, and the fact that $n(k)\geqslant|k|$.
\begin{figure}
\caption{A disk with $p-1$ negative holes and no positive ones}
\label{mtgdisk}
\end{figure}
Hence $n(k)=-k$ for $k\leqslant -(p-1)$.
If we could show that, given $k$, there was a possibly larger integer (which instead we call $k$) such that $n(k)\geqslant k+2(p-1)$, we would know the entire framing function.
We aim to capture something of the geometry of the Cayley graph $\Gamma$ of $G$ with the above presentation, and show that $l_k$ is always `hard to get to'. As an example, take $p=3$ and consider just a small portion of $\Gamma$, those vertices which may be written as the product of at most two (positive) generators. Three of these words coincide as group elements; apart from that, they are all different. This portion may be embedded in ${\mathbb R}^3$ as shown in Figure \ref{cayleyg}.
\begin{figure}
\caption{Embedding the Cayley graph of $G$ in ${\mathbb R}^3$}
\label{cayleyg}
\end{figure}
\medbreak\noindent We may extend this embedding in a consistent way to the whole of $\Gamma$. (The exact meaning of `consistent' is given implicitly by the definition of $\theta$ below.) Arranged thus, $\Gamma$ projects vertically down onto an infinite $p$--valent tree, which we can think of as the Cayley graph of
$$ I_p:=*_p{\mathbb Z}_2
={<}0,1,\ldots,p-1\kern.08em|\>00=11=\ldots=(p-1)(p-1)=e{>}. $$ (See below for a picture of this group's Cayley graph.) To realise this projection, we want a function $\theta\co G\rightarrow I_p$. We shall find that $\theta$ is {\it not} a homomorphism. We first define $\theta$ on $X
$, the set of words in the symbols $x_0$, $x_1$, $\ldots,$ $x_{p-1}$ and their inverses: $$ \theta\co x_{i_0}^{\epsilon_0}x_{i_1}^{\epsilon_1}\ldots
x_{i_{s-1}}^{\epsilon_{s-1}}\mapsto
\prod_{j=0}^{s-1}(i_j+h_j), $$ where each $i_j\in\{0,1,\ldots,p-1\}$, each $\epsilon_j=\pm1$, and the $j$th `height' is $$ h_j={\epsilon_j-1\over2}+
\sum_{k=0}^{j-1}\epsilon_k $$ (all addition modulo $p$). For example, when $p=3$, the word $x_1x_2x_1x_0^{-1}x_0x_2$ maps to $12$:
$$
\begin{array}{c|cccccc}
j & 0 & 1 & 2 & 3 & 4 & 5\\
\hline
x_{i_j}^{\epsilon_j} & x_1 & x_2 & x_1 & x_0^{-1} & x_0 & x_2\\
i_j & 1 & 2 & 1 & 0 & 0 & 2\\
\epsilon_j & +1 & +1 & +1 & -1 & +1 & +1\\
h_j & 0 & 1 & 2 & 2 & 2 & 0\\
i_j+h_j & 1 & 0 & 0 & 2 & 2 & 2
\end{array}
$$
so the image word is $1\,0\,0\,2\,2\,2$, which reduces to $12\in I_3$ after the adjacent pairs $0\,0$ and $2\,2$ cancel.
As this example shows, changing a word in $X$ by an elementary expansion or reduction (that is, insertion or deletion of a pair $x_i{x_i}\!^{-1}$ or ${x_i}\!^{-1}x_i$) does not change its image under $\theta$, since the two adjacent letters involved have the same index and height, and so map to a repeated element in $I_p$. Also, $x_{i+1}x_i$ maps to the identity at any height for any $i$, so changing our word in $X$ by a relator of $G$ leaves its image under $\theta$ unchanged. Hence $\theta$ is well-defined as a map from $G$ to $I_p$.
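To spell out the relator check: if the running height just before a pair $x_{i+1}x_i$ (both letters with exponent $+1$) is $H$, then the two letters map to
$$
(i+1+H)\,\bigl(i+(H+1)\bigr),
$$
a repeated generator of $I_p$, and hence to the identity.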
Suppose we have a word $$ W=\yconj1\kern.2em\yconj2\kern.2em\ldots\kern.2em\yconj{n} $$
which represents $l_k$, where $n=k+2t$ and each $y_i\in\{x_0,x_1,\ldots,x_{p-1}\}$. By adding small positive umbrellas, say by premultiplying $W$ by a power of $x_0$, we may assume that $k$ is a large positive multiple of $p$. (This is for notational convenience and to remove a special case later. Notice that $t$ is unaffected.) Since $W$ represents $l_k$,
$$
\theta(W)=\theta\Bigl((x_0x_0\ldots x_0)^{k/p+1}\;x_1^{-1}x_3^{-1}\ldots x_{p-2}^{-1}\,x_0^{-1}x_2^{-1}\ldots x_{p-1}^{-1}\Bigr),
$$
with indices and heights
$$
\begin{array}{l|c|c}
i_j & (0\;0\;\ldots\;0)^{k/p+1} & 1\quad 3\quad\ldots\quad p-2\quad 0\quad 2\quad\ldots\quad p-1\\
h_j & (0\;1\;\ldots\;p-1)^{k/p+1} & p-1\quad p-2\quad\ldots\quad\tfrac{p+1}2\quad\tfrac{p-1}2\quad\tfrac{p-3}2\quad\ldots\quad 0\\
i_j+h_j & (0\;1\;\ldots\;p-1)^{k/p+1} & 0\quad 1\quad\ldots\quad\tfrac{p-3}2\quad\tfrac{p-1}2\quad\tfrac{p+1}2\quad\ldots\quad p-1
\end{array}
$$
so that
$$
\theta(W)=\bigl(0\,1\kern.2em\ldots\kern.2em(p-1)\bigr)^{k/p+2}\in I_p.
$$
If the Cayley graph of $I_p$ with the above generators is drawn in the plane with edges consistently labelled 0 to $(p-1)$ anticlockwise round each vertex, we find that $\theta(l_k)$ turns sharp right at every step, and follows the boundary of one of the infinite complementary regions. For example, in the case $p=3$ and $k=0$ we have $\theta(l_k)=012012$, so the graph looks as in Figure \ref{hyperb}.
\begin{figure}
\caption{The Cayley graph of the group $I_p$}
\label{hyperb}
\end{figure}
\noindent This suggests defining the {\df angle} $a(v)$ of a non-trivial reduced word $v$ in the symbols $\{0,1,\ldots,p-1\}$, which we think of as `turn-right-ness', as $$ a(i_0i_1\ldots i_{s-1}):=\sum_{j=1}^{s-1}(p-2d_j), $$ $$ \hbox{ \ where }
i_{j-1}+d_j\equiv i_j\hbox{ (mod $p$) and }d_j\in\{1,2,\ldots,p-1\}. $$ Thus each step $i_{j-1}$ to $i_j$ contributes between $-p+2$ and $p-2$ to the angle, and the more often and more sharply a word `turns right', the greater this angle. Define the {\sl angle} of an element of $G$ as the angle of its reduced image under $\theta$. Since $\theta(l_k)$ turns sharp right $(k+2p-1)$ times, the angle is $(k+2p-1)(p-2)$; this will prove unusually high for its exponent sum. Notice that the angle of an element of $I_p$ is unchanged if we cycle the generators in its expression modulo $p$; that is, $$ a(i_0i_1\ldots i_{s-1})=a\Bigl((i_0+1)(i_1+1)\ldots(i_{s-1}+1)\Bigr). $$
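To illustrate the definition in the smallest case, take $p=3$ and $k=0$, where $\theta(l_0)=012012$ as noted above. Every step increases the index by $1$ modulo $3$, so each $d_j=1$ and
$$
a(012012)=\sum_{j=1}^{5}(3-2)=5=(k+2p-1)(p-2),
$$
in agreement with the general formula.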
Let $W_0$ be the large power of $x_0$ we multiplied our original word by, and define inductively $W_i=W_{i-1}\yconj i$, so $W_n=W$. We ask how the angles of these initial segments increase with $i$. Suppose we are stepping from $W_{i-1}$ to $W_i$. Since the angle is defined only for reduced words, we must cancel $\theta(W_i)$ down to its reduced form; we therefore pretend that $W_i$ is a word in the letters $\{x_j^{\pm1}\}$, and that $\theta(W_i)$ is a word in the generators of $I_p$ (the letter-by-letter image of $W_i$), and do this reduction in stages.
First, perform all elementary reductions on $\theta(W_{i-1})$, the initial segment. The total angle for this segment is $a(W_{i-1})$. Also, let $v'$ and $v$ be the reduced forms of the words obtained from the segments $w_i^{-1}$ and $w_i$, respectively---notice that $v$ is the reverse of $v'$ cycled by $\epsilon_i$, so $a(v')+a(v)=0$ (ie the angles for these segments cancel).
Next, consider the image of the whole segment $\yconj i$. We currently have this in the form $v'zv$, say, for some $0\leqslant z<p$. This looks somewhat like a conjugate of $z$ in $I_p$, except that the generators in $v$ have been cycled by ${\epsilon_i}$ modulo $p$. The words $v$ and $v'$ may already be trivial. If not, write $z'$ for the last letter of $v'$; then the first letter of $v$ is $(z'+{\epsilon_i})$. If $z$ equals $z'$ or $(z'+\epsilon_i)$, we may shorten $v$ and $v'$ and change $z$, and still have an expression of the same form (and notice that two cancelling angles have been removed from the word). For example, in $I_3$, $(0212)0(0201)=(021)2(201)=(02)1(01)$. Thus $\theta(\yconj i)$ cancels down to one of the four forms below. The only contributions to the total angles for these segments come from the steps either side of $z$, and so are as shown.
$$
\begin{array}{cccc}
\hbox{case} & \epsilon_i & \theta(\yconj i) & a(\yconj i)\\
\hbox{\csc a} & +1 & z & 0\\
\hbox{\csc b} & +1 & v'zv & -2\\
\hbox{\csc c} & -1 & z & 0\\
\hbox{\csc d} & -1 & v'zv & 2
\end{array}
$$
The only contribution to the angle of the whole word $\theta(W_i)$ we have not yet considered comes from the boundary between the images under $\theta$ (reduced as described) of $W_{i-1}$ and $\yconj i$. If there is no cancellation at this position in the word, the contribution to the angle of $W_i$ from this junction is at most $(p-2)$. If the second part of $\theta(W_i)$ is fully absorbed by the first, the same is true, since each elementary reduction before the last removes two equal and opposite contributions to the angle, and the last removes just one, which is certainly at least $(-p+2)$. Notice that in cases {\csc a} and {\csc c} these are the only two possibilities. Notice also that we may assume that the first part is never completely absorbed by the second, by increasing the framing beforehand as described above.
This leaves the case where there is partial cancellation at this boundary. We have so far reached $$ \underbrace{\ldots j\,k_0k_1\ldots k_{s-1}}
_{\hbox{\phantom{$\scriptstyle\yconj i$}
$\scriptstyle\theta(W_{i-1})$\phantom{$\scriptstyle\yconj i$}}} \k{-1}\underbrace{k_{s-1}\ldots k_1k_0l\ldots}
_{\hbox{$\scriptstyle\theta(\yconj i)$}}\kern.04em, $$ say, where $j\not=l$. Cancelling this down, and ignoring pairs of equal and opposite angles which vanish in the process, we lose the angle contributions from $jk_0$ and $k_0l$, and gain instead that from $jl$. This changes the angle by $\pm p$. Notice, however, that if $j+1=k_0$ the angle must decrease.
In summary, then, changes from $W_{i-1}$ to $W_i$ of types {\csc a}, {\csc b}, or {\csc c} increase the angle by at most $(p-2)$, and those of type {\csc d} increase it by at most $(p+2)$. The angle of $l_k$ is unusually high for its framing $k$; to see this, let ${\rm esum}(W_i)$ be the sum of the exponents in $W_i$, ie the linking number of a path representing $W_i$ with $K$, and consider $$ c(W_i):=a(W_i)-(p-2)\bigl({\rm esum}(W_i)-1\bigr). $$ This is only increased in cases {\csc c} and {\csc d}, by $(2p-4)$ and $2p$, respectively. Recall that $a(l_k)=(k+2p-1)(p-2)$, so $c(W_n)=2p(p-2)$, so we require at least $(p-2)$ {\csc c}s or {\csc d}s and the same total number of {\csc a}s and {\csc b}s. Since our word $W$ consisted of $(k+t)$ positive conjugates and $t$ negative conjugates, we see that $t\geqslant p-2$.
Suppose $t=p-2$. Then $c$ must increase by a full $2p$ for each negative conjugate, so there are no steps of type {\csc c}. Let $i$ be the first time we encounter a case other than {\csc a}. All previous steps {\it must} have been multiplication by $x_0$ in $G$ (since the word must turn sharp right each time to avoid decreasing $c(W_i)$). Hence the word so far is just a power of $x_0$. But then at the boundary between $W_{i-1}$ and $\yconj i$ we cannot increase $a$ by $p$, so $c$ must decrease. Hence $t>p-2$, so $n\geqslant k+2(p-1)$ and we are done.\end{proof}
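To see the count in the smallest case, take $p=3$ (the trefoil). Then $c(W_n)=2p(p-2)=6$, a step of type {\csc d} increases $c$ by at most $2p=6$ and one of type {\csc c} by at most $2p-4=2$, so at least one negative conjugate is needed; the argument above rules out $t=1$, so $t\geqslant2$ and $n(k)\geqslant k+4$, in agreement with the formula $n(k)=(p-1)(q-1)+\bigl|k+(p-1)(q-1)\bigr|$ of the next theorem.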
\begin{theorem}\sl The natural framing of $T(p,q)$ is $-(p-1)(q-1)$; indeed, its framing function is given by
$n(k)=(p-1)(q-1)+\bigl|k+(p-1)(q-1)\bigr|$. \end{theorem}
\begin{figure}
\caption{Generators of the knot group of the $(p,q)$--torus knot}
\label{mtgpq}
\end{figure}
\begin{proof} Fix $p$ and $q$ coprime with (say) $p>q$. From the diagram for $T(p,q)$ shown in Figure \ref{mtgpq} we obtain the presentation $$
G={<\kern.04em}x_0,\,x_1,\kern.04em\ldots,\,x_{p-1}\kern.08em|\>x_{q-2}\ldots x_0x_{p-1}=
\kern.2em\ldots\kern.2em=x_q\ldots x_2x_1=x_{q-1}\ldots x_1x_0{\kern.04em>} $$ for $\pi_1(S^3\backslash K)$. Now $l_k$ is the group element represented by the word $$ x_0^{k+p(q-1)}x_1^{-1}x_2^{-1}\ldots x_{q-1}^{-1}x_{q+1}^{-1}\ldots
x_{2q-1}^{-1}x_{2q+1}^{-1}\kern.2em\kern.2em\ldots\kern.2em\ x_{p-1}^{-1}. $$
It is less clear geometrically in this more general case that we can find a disk corresponding to the value $n\bigl(-(p-1)(q-1)\bigr)=(p-1)(q-1)$ of the framing function of $T(p,q)$. Seen algebraically, however, we are trying to write the above word as a product of negative conjugates (again, all indices modulo $p$) when $k=-(p-1)(q-1)$. There are $(q-1)$ $x_0^{-1}$s in this expression. Conjugate each other inverse of a generator by $x_0^{-1}$ to the power of the number of $x_0^{-1}$s after it; then elementary reduce the $x_0$s and $x_0^{-1}$s. This gives an expression of the required form. For example, for $T(5,3)$,
$$
x_0^2\,x_1^{-1}x_2^{-1}x_4^{-1}x_0^{-1}x_2^{-1}x_3^{-1}x_0^{-1}x_1^{-1}x_3^{-1}x_4^{-1}
=(x_0^2x_1^{-1}x_0^{-2})(x_0^2x_2^{-1}x_0^{-2})(x_0^2x_4^{-1}x_0^{-2})(x_0x_2^{-1}x_0^{-1})(x_0x_3^{-1}x_0^{-1})\,x_1^{-1}x_3^{-1}x_4^{-1}.
$$
We would like a version of $\theta$ for this case; it must map a sequence of generators with gradually decreasing index of length $q$ (rather than 2) to the identity. This suggests defining $\theta$ from $G$ to
$$
\hbox{$*_p\Z_q$}:={<\kern.04em}0,\ 1,\kern.2em\ldots,\ p-1\kern.08em|\>0^q=1^q=\ldots=(p-1)^q=e{\kern.04em>}. $$ As before, we first define $\theta$ on $X$, the set of words in the symbols $x_0$, $x_1$, $\ldots,$ $x_{p-1}$ and their inverses: $$ \theta\co x_{i_0}^{\epsilon_0}x_{i_1}^{\epsilon_1}\ldots
x_{i_{s-1}}^{\epsilon_{s-1}}\mapsto
\prod_{j=0}^{s-1}(i_j+h_j)^{\epsilon_j}, $$ where each $i_j\in\{0,1,\ldots,p-1\}$, each $\epsilon_j=\pm1$, and the height is again given by $$ h_j={\epsilon_j-1\over2}+
\sum_{k=0}^{j-1}\epsilon_k $$ (all addition modulo $p$). Notice that this time we keep track of inverses of 0, 1, etc---in the involutary case, this was unnecessary.
Adjacent letters in a word in $X$ with the same index but of opposite sign have the same height, and so map to a cancelling pair in \hbox{$*_p\Z_q$}. Applying a relator of $G$ to a word $v$ in $X$ replaces, in $\theta(v)$, the $q$th power of some generator of \hbox{$*_p\Z_q$}\ by that of another.
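Concretely, the relator check reduces to the observation that a block of $q$ letters with decreasing indices maps to a $q$th power: if the running height before $x_{i+q-1}x_{i+q-2}\ldots x_{i}$ (all exponents $+1$) is $H$, then each letter of the block maps to the same generator, so
$$
\theta(x_{i+q-1}x_{i+q-2}\ldots x_{i})=(i+q-1+H)^{q}\in\hbox{$*_p\Z_q$},
$$
since the index drops by one at each step while the height rises by one.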
Therefore $\theta$ is well-defined as a map from $G$ to \hbox{$*_p\Z_q$}.
Suppose we take a framing disk. The longitude it follows is $$ l_k=x_0^{k+p(q-1)}x_1^{-1}x_2^{-1}\ldots x_{q-1}^{-1}x_{q+1}^{-1}\ldots
x_{2q-1}^{-1}x_{2q+1}^{-1}\kern.2em\kern.2em\ldots\kern.2em\ x_{p-1}^{-1} $$ (all indices modulo $p$), where $k$ may be assumed to be a large positive multiple of $p$ by adding positive umbrellas as necessary. This maps under $\theta$ to $$
\begin{array}{r@{}c@{}l} \theta(l_k)&{}={}&\bigl(01\ldots(p-1)\bigr)^{k\kern-.07em/\kern-.07em p\kern.16em +q-1}
0^{-(q-1)}1^{-(q-1)}\ldots(p-1)^{-(q-1)}\\
&{}={}&\bigl(01\ldots(p-1)\bigr)^{k\kern-.07em/\kern-.07em p\kern.16em +q}.
\end{array} $$ The `angle' alone is now too small to give a tight bound on the natural framing of $T(p,q)$; we therefore introduce new ideas to make up the difference.
Since the negative powers of generators of \hbox{$*_p\Z_q$}\ cancel so neatly in $\theta(l_k)$, we want this cancelling to `score extra'. An element of \hbox{$*_p\Z_q$}\ may be written in the {\df standard form} $$ i_0^{e_0}i_1^{e_1}\ldots i_{s-1}^{e_{s-1}},\qquad\hbox{with each $i_j$ a generator, $s$ minimal, and each $e_j\in\{1,2,\ldots,q-1\}$.} $$ Adjacent $i_j$s are then different. This allows us to define {\df angle} for such a standard form of a non-trivial word, much as before, by $$ a(i_0^{e_0}i_1^{e_1}\!\ldots i_{s-1}^{e_{s-1}}):=
\sum_{j=1}^{s-1}(p-2d_j), $$ $$ \hbox{where }
i_{j-1}+d_j\equiv i_j\hbox{ (mod $p$) and }d_j\in\{1,2,\ldots,p-1\}. $$ In addition, define the {\df pseudo-exponent} of $w\in G$ to be the sum of the exponents in the standard form of $\theta(w)$. The {\df exponent sum of $w$} is the sum of exponents when $w$ is written in terms of the $x_i$s. Then the {\df excess exponent}, $e(w)$, is defined to be the pseudo-exponent minus the exponent sum of $w$. Notice that $e(w)$ is a multiple of $q$. Finally, to register how close the excess exponent is to changing, define the {\df internal angle} $\iota$ of a word in standard form by $$ \iota(i_0^{e_0}i_1^{e_1}\ldots i_{s-1}^{e_{s-1}}):=
\sum_{j=0}^{s-1}(e_j-1). $$ Then the replacement we use for angle, which we call {\df angle\k{.1}$'$} and denote $a'$, is given by $$ a'(w):=a(w)\kern.04em-\,p\times\iota(w)\kern.04em+\kern.04em\frac1q(q-2)p\times e(w). $$
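As a consistency check, note that when $q=2$ every exponent in a standard form equals $1$, so $\iota\equiv0$, and the coefficient $\frac1q(q-2)p$ vanishes; hence
$$
a'(w)=a(w)\qquad(q=2),
$$
and angle$'$ reduces to the angle used for $T(p,2)$ above.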
Notice that $a'$ is well-defined as a map from $G$ to ${\mathbb Z}$, since any element of \hbox{$*_p\Z_q$}\ has a unique standard form. Angle$'$ is {\it not} defined on \hbox{$*_p\Z_q$}, even though angle and internal angle are, because the excess exponent of an element is undefined. The angle$'$ of $l_k$ is $$
\begin{array}{r@{}c@{}l} a'(l_k)&{}={}&(k+pq-1)(p-2)-0+\frac1q(q-2)p(k+pq-k)\\
&{}={}&(k-1)(p-2)+2p\bigl((p-1)(q-1)-1\bigr),\end{array} $$ and this will turn out to be large enough to give the wanted tight upper bound on the natural framing.
Suppose we have a word $$ W=\yconj1\kern.2em\yconj2\kern.2em\ldots\kern.2em\yconj{n} $$
which represents $l_k$, where $n=k+2t$ and each $y_i\in\{x_0,x_1,\ldots,x_{p-1}\}$.
As before, we may assume that $k$ is a large positive multiple of $p$. Let $W_0$ be the large power of $x_0$ which $W$ starts with, and define $W_i=W_{i-1}\yconj i$.
Although these are really group elements, we write them out literally in the letters $\{x_i^{\pm1}\}$ and pretend that they are just words. We use the literal version of $\theta$ to map them to \hbox{$*_p\Z_q$}. Only then do we do any simplification, ie elementary reduction and multiplication by $q$th powers of generators. In this way, we can monitor the total angle$'$ of the result. We shall often calculate the angle$'$ of a segment of a word---this is again simply the angle$'$ of the segment taken in isolation, not counting contributions to angle or internal angle from either end. The angle$'$ of a set of segments of a word is the sum of the angle$'$s for each segment.
We know that simplifying the $W_{i-1}$ segment of $W_i$ gives angle$'$ $a'(W_{i-1})$. Consider next $w_i^{-1}$ and $w_i$. Taken as words, these map to, say, $$ \displaylines{ v':=v_0^{e_0}v_1^{e_1}\ldots v_{s-1}^{e_{s-1}}\cr \lwd{and}v:=(v_{s-1}+\epsilon_i)^{-e_{s-1}}\ldots
(v_1+\epsilon_i)^{-e_1}(v_0+\epsilon_i)^{-e_0}\cr } $$ (addition modulo $p$) respectively. We would like to show that these two expressions, taken together, contribute a total of 0 to $a'$; however, we must convert them to standard form before we can measure this contribution. (By slight abuse of notation, we continue to call them $v'$ and $v$ through the stages of this standardisation. We are effectively showing that $a'(g)=-a'(g^{-1})$ for any $g\in G$.)
We may assume that each $e_j$ lies in $\{0,1,\ldots,q-1\}$ (by repeatedly multiplying by \smash{$v_j^{\pm q}$} in $v'$ and \smash{$(v_j+\epsilon_i)^{\mp q}$} in the corresponding place in $v$), and that none is 0 (by reducing $s$). To finish standardisation, we must, for each $j$, write the negative powers of $(v_j+{\epsilon_i})$ as positive powers by multiplying by $(v_j+\epsilon_i)^q$ in the correct place in $v$. The pseudo-exponent then becomes $qs$, so $e(w_i^{-1})+e(w_i)=qs$. For each $j$ we have $e_j$ $v_j$s in $v'$ and $(q-e_j)$ $(v_j+\epsilon_i)$s in $v$, so the total internal angle, $\iota(v')+\iota(v)$, is $(q-2)s$. We already know the total angle is zero---the contribution from $v_jv_{j+1}$ in $v'$ cancels with that from $(v_{j+1}+\epsilon_i)(v_j+\epsilon_i)$ in $v$---so the total angle$'$ for these two segments is, as wanted, $$ a'(v')+a'(v)=0\kern.04em-\,p\times(q-2)s\kern.04em+\kern.04em\frac1q(q-2)p\times qs=0. $$
Next we consider the whole second segment of $\theta(W_i)$, namely $\theta(\yconj i)$, where the images under $\theta$ of $w_i^{-1}$ and $w_i$ have already been simplified as above. Let $z^{\epsilon_i}$ be the image under $\theta$ of $y_i^{\epsilon_i}$. Perhaps $s=0$ in the above expressions. If so, $\theta(\yconj i)=z^{\epsilon_i}$, which is already in standard form if ${\epsilon_i}=+1$, and may be written in the standard form $z^{q-1}$ when ${\epsilon_i}=-1$ making $e=q$ and $\iota=q-2$; both these expressions have angle$'$ 0.
Consider the case $s>0$. By cycling the whole image modulo $p$, we may assume, for notational convenience, that $v_{s-1}$ and $(v_{s-1}+{\epsilon_i})$ are 0 and 1 in some order. The only extra contributions to $a'$ are those involving $z^{\epsilon_i}$. We watch how the angle$'$ changes from $a'(v')+a'(v)$ to the angle$'$ of $v'zv$ after cancellation.
Suppose first that $\epsilon_i=+1$, so we find $\ldots0z1\ldots$ in the middle of $\theta(\yconj i)$. If $z$ is not 0 or 1, then $\iota$ and $e$ are unchanged and $a$ increases by $(p-2z)+(p-2(p-z+1))=-2$ (by definition, since $z$ and $(p-z+1)$ are both in $\{1,2,\ldots,p-1\}$). It follows that $a'(\yconj i)=-2$. If $z=0$ and $e_{s-1}<q-1$, or if $z=1$ and $e_{s-1}>1$, we find $e$ unchanged, $\iota$ increased by 1, and $a$ increased by $(p-2)$; again, $a'(\yconj i)=-2$. Finally, if $z=0$ and $e_{s-1}=q-1$, or if $z=1$ and $e_{s-1}=1$, we may pretend $\theta(\yconj i)$ is really a (possibly trivial) conjugate of 1 (the first letter of $\theta(w_i)$) or 0 (the last letter of $\theta(w_i^{-1})$) respectively with smaller $s$. This simplification removes two equal and opposite angles, reduces $e$ by $q$, and reduces $\iota$ by $(q-2)$, leaving $a'$ unchanged. By induction, cancellation of this kind terminates in one of the cases already considered.
Suppose instead that $\epsilon_i=-1$, so we find $\ldots1z^{-1}0\ldots$ in the middle of the word $\theta(\yconj i)$. We may write the central $z^{-1}$ as $z^{q-1}$, since this increases $\iota$ by $(q-2)$ and $e$ by $q$, leaving $a'$ unchanged. Now, if $z$ is not 0 or 1, then $\iota$ and $e$ are unaltered and the angle $a$ increases by $(p-2(z-1))+(p-2(p-z))=2$ (as above, since now $(z-1)$ and $(p-z)$ are in $\{1,2,\ldots,p-1\}$). In this case $a'(\yconj i)=2$. If $z=1$ and $e_{s-1}>1$ (or if $z=0$ and $e_{s-1}<q-1$) then a block of $q$ 1s (or 0s) may be removed from the centre, leaving $\ldots1^{e_{s-1}-1}0^{q-e_{s-1}}\ldots$ (or $\ldots1^{e_{s-1}}0^{q-e_{s-1}-1}\ldots$). In this middle segment, $e$ is reduced by $q$, $\iota$ is reduced from $(q-2)+(q-2)$ to $(q-3)$, and $a$ is increased by $(p-2)$; again, $a'(\yconj i)=2$. Finally, if $z=1$ and $e_{s-1}=1$, or if $z=0$ and $e_{s-1}=q-1$, we may pretend that $\theta(\yconj i)$ is really a (possibly trivial) conjugate of $0^{q-1}$ (the first $(q-1)$ letters of $\theta(w_i)$) or $1^{q-1}$ (the last $(q-1)$ letters of $\theta(w_i^{-1})$) respectively with smaller $s$. Again, the simplification removes two equal and opposite angles, reduces $e$ by $q$, and reduces $\iota$ by $(q-2)$, and therefore leaves $a'$ unchanged; hence, by induction, cancellation of this kind terminates in one of the cases already considered.
Thus we have essentially the same set of cases as before:
$$
\begin{array}{cccc}
\hbox{case} & \epsilon_i & \theta(\yconj i) & a'(\yconj i)\\
\hbox{\csc a} & +1 & z & 0\\
\hbox{\csc b} & +1 & v'zv & -2\\
\hbox{\csc c} & -1 & z^{q-1} & 0\\
\hbox{\csc d} & -1 & v'z^{q-1}v & 2
\end{array}
$$
Finally, there is the junction between $\theta(W_i)$ (say ending in $x$) and $\theta(\yconj i)$ (say beginning with $y$) to consider, where each part here has been written in standard form as described above. The total angle$'$ (ie the sum of the angle$'$s) of these segments, taken separately, is $a'(W_i)$ or $a'(W_i)\pm2$, depending on the form of $\theta(\yconj i)$. If $x\not=y$, the juxtaposition of these strings is already in standard form; then $\iota$ and $e$ remain the same and $a$ increases by at most $(p-2)$, so $a'$ increases by at most $(p-2)$.
Suppose, then, that $x=y$. Some cancelling may occur at the boundary---if it does, we continue to call the shorter (standard form) words whose product is $\theta(W_{i+1})$ `the first word' and `the second word', even though they change. Say the first word ends in $\ldots x'x^j$, where $x'\not=x$. (We may always assume that we can find such an $x'$, by having premultiplied $W$ by a sufficiently high power of $x_0$ before we started.) The second may begin with $x^kx''$, where $x\not=x''$, or may simply be of the form $x^k$. Hence we have $$ \hbox{either\k4}\ldots x'x^j\ x^kx''\!\ldots\k4\hbox{or}\k4\ldots x'x^j\ x^k $$ at the boundary. Since the words are in standard form, $j$ and $k$ are both in $\{1,2,\ldots,q-1\}$. Compare $(j+k)$ with $q$:
\smallskip
\noindent$j+k<q$:\quad The juxtaposition of these strings is $\ldots x'x^{j+k}x''\!\ldots,$ which is already in standard form, so $\iota$ increases by 1 and $a'$ drops by $p$.

\smallskip
\noindent$j+k>q$:\quad The juxtaposed strings cancel to $\ldots x'x^{j+k-q}x''\!\ldots,$ so $e$ decreases by $q$, but $\iota$ drops by $(q-1)$, so $a'$ increases by $p$.

\smallskip
\noindent$j+k=q$:\quad $x^j$ and $x^k$ cancel, reducing $e$ by $q$ and $\iota$ by $(q-2)$, and hence leaving $a'$ unchanged. Then one of the following holds.

\smallskip
\noindent\qquad(i)\enspace There is no $x''$, so the word becomes $\ldots x'\!$, the angle $a$ changes by at most $(p-2)$, and $e$ and $\iota$ are unchanged, so $a'$ increases by at most $(p-2)$.

\smallskip
\noindent\qquad(ii)\enspace $x'\not=x''$, so the string becomes $\ldots x'x''\!\ldots,$ $e$ and $\iota$ are unchanged, and $a$ (and hence $a'$) changes by either $+p$ or $-p$ (certainly $-p$ if $x'+1=x$).

\smallskip
\noindent\qquad(iii)\enspace $x'=x''$, and we can induct with shorter words.
The largest possible change to $a'$ from this boundary is therefore $p$, as before, so we find that $a'(W_{i+1})-a'(W_i)$ is at most $(p-2)$, $(p-2)$, $p$, and $(p+2)$ in the cases {\csc a} to {\csc d}, respectively. But $a'(l_k)=(k-1)(p-2)+2p\bigl((p-1)(q-1)-1\bigr)$, so if $t$ is the number of negative conjugates in $W$ we have $$ \displaylines{ (k-1)(p-2)+t(p-2)+t(p+2)\geqslant(k-1)(p-2)+2p\bigl((p-1)(q-1)-1\bigr)\cr \lwd{$\Rightarrow$}t\geqslant(p-1)(q-1)-1.\cr } $$
Suppose $t=(p-1)(q-1)-1$. Then there are no steps of type {\csc c}, since each negative conjugate must increase $a'$ by the full $(p+2)$. Any initial steps of type {\csc a} must be multiplication of the word in $G$ by $x_0$, since $a'$ must increase by $(p-2)$ each step and the only way to do this is to increase $a$ by the largest possible amount. But then the first step of type {\csc b} or {\csc d} must {\it reduce} $a'$ by $p$ at the boundary between $W_{i-1}$ and $\yconj i$ (see the case labelled (ii) above), which is a contradiction.
Hence $t\geqslant(p-1)(q-1)$, and we are done.\end{proof}
The theorem yields many other knots for which the absolute values of the natural framing number and the signature are
different. For instance, if $K$ is the $(3,7)$ torus knot then
$\nu(K)=-12$ and, according to \cite{Litherland}, $\sigma(K)=8$. In this case, $|\nu(K)|>|\sigma(K)|$. As M~Lackenby \cite{Lackenb} has pointed out, one can use this fact to construct non-prime knots with
$|\nu(K)|<|\sigma(K)|$. For instance, for $K=T(3,7)\#T(2,-13)$ we have $\nu(K)=12-12=0$ and $\sigma(K)=-8+12=4$.
If our conjecture on the natural framing of ${4ml+1\over 2l}$--two bridge knots is true, then we
obtain many more knots $K$ with $|\nu(K)|\neq|\sigma(K)|$. For instance, for $m=l=2$ we would have that the knot $7_4$ has natural framing $-4$, whereas its signature is $2$ (see the table in \cite{BuZi}). Still, it is reasonable to expect that the natural framing number has some properties similar to the signature:
{\bf Questions}\stdspace(1)\stdspace Is $\nu(K)$ even if $K$ is atoroidal, or if $K$ is a two-bridge knot?
(2)\stdspace Is the natural framing number of slice knots always zero? Is the natural framing number a concordance-invariant?
(3)\stdspace Is $2 u(K)\geqslant |\nu(K)|$, where $u$ is the unknotting number? Is $|\nu(K)|\geqslant|\sigma(K)|$ if $K$ is a prime knot?
Is $|\nu(K)|\leqslant 2g(K)$, where $g(K)$ is the four-ball genus? We conjecture that the answer to all three questions is No.
(4)\stdspace Is there a finite algorithm to compute the natural framing number of a given knot? What about two-bridge knots?
\section{A table}
The following table contains all we know about the natural framing numbers of prime knots with up to seven crossings. For each of these knots it states the range within which the natural framing could possibly lie, and the value which we conjecture. For the convenience of the reader, we have added columns for invariants which may be related to the natural framing: the signature (taken from \cite{BuZi}), the blackboard-framing of an alternating diagram (see \cite{Murasugi}), and the unknotting number (taken from \cite{KirbyPr}).
$$ \begin{array}{cccccc}
{\rm knot} & \multicolumn{2}{c}{\rm natural\ framing} & {\rm signature} & {\rm alternating} & {\rm unknotting}\\
 & {\rm range} & {\rm conjecture} & & {\rm diagram} & {\rm number}\\
3_1 & 2 & & -2 & 3 & 1\\
4_1 & 0 & & 0 & 0 & 1\\
5_1 & 4 & & -4 & 5 & 2\\
5_2 & [0,2] & 2 & -2 & 5 & 1\\
6_1 & 0 & & 0 & 2 & 1\\
6_2 & [0,2] & 2 & -2 & 2 & 1\\
6_3 & 0 & & 0 & 0 & 1\\
7_1 & 6 & & -6 & 7 & 3\\
7_2 & [0,2] & 2 & -2 & 7 & 1\\
7_3 & [0,4] & 4 & -4 & 7 & 2\\
7_4 & [0,4] & 4 & -2 & 7 & 2\\
7_5 & [0,4] & 4 & -4 & 7 & 2\\
7_6 & [0,2] & 2 & -2 & 3 & 1\\
7_7 & 0 & & 0 & 1 & 1
\end{array} $$
\begin{proof} $3_1$, $5_1$ and $7_1$ are torus knots. $4_1$ and $6_1$ belong to the family of twist knots with natural framing $0$. $6_3$ is amphichiral. It is easy to find compressing disks of the knots $5_2$, $7_2$, $7_3$, $7_4$, $7_5$ and $7_6$ with no ribbons, no negative clasps, only the appropriate number of positive clasps. (Note that the values for $5_2$, $7_2$ and $7_4$ have been conjectured in section 1.) The only cases where the compressing disks are not easy to imagine are $6_2$ and $7_7$.
\begin{figure}
\caption{The knots $6_2$ and $7_7$}
\end{figure}
We see that $6_2$ has longitude $$l=a^*\ bABaBAbABa\ b^*\ aBAbABaBAb $$ (where we write $A$ and $B$ for $a^{-1}$ and $b^{-1}$, respectively);
we can write this as
\def\oo#1#2{#2}
$$ \oo{aa}{a^2}\ bA{\underline B}aBAbA {\underline B}a\kern.2em\oo{BB}{b^{-2}}\ a{\underline B}AbABa{\underline B}Ab. $$ If we leave out the four underlined letters we obtain the trivial word. This proves that $n(4)=4$. On the other hand, we can write the longitude as $$ \oo{aaa}{a^3}\kern.2em{\underline b}ABa{\underline B}AbABa\kern.2em\oo{b}{b^1}\kern.2em{\underline a}BAbABa{\underline B}Ab, $$ so $n(0)\leqslant 4$. It follows that $0\leqslant\nu(6_2)\leqslant2$.
The knot $7_7$ has longitude $$ l=a^*\ bABaBAbaBabAbaBAbABa\ b^*\ aBAbABabAbaBabABaBAb. $$ We can write this as $$ \oo{aaaa}{a^4}\ bABaBAb{\underline a}BabAbaBA{\underline b}AB {\underline a}\kern.2em\oo{bb}{b^2}\kern.2em{\underline a}BA{\underline b}ABabAbaB{\underline a}bABaB Ab. $$ Leaving out the six underlined letters yields the trivial word, which proves that $n(-6)=6$. Also, we can write the longitude as $$ a^{-6}\ b{\underline A}BaB{\underline A}ba{\underline B} ab{\underline A}ba{\underline B}AbA{\underline B}a\kern.2em
b^{-6}
\ a{\underline B}AbA{\underline B}ab{\underline A}ba {\underline B}ab{\underline A}BaB{\underline A}b, $$
so $n(12)=12$. It follows that $\nu(7_7)=0$.\end{proof}
{\bf Remark}\stdspace These compressing disks were found using box-diagrams (see \cite{nat3}). \nl
{\bf Acknowledgements}\stdspace The authors thank their respective PhD advisors Brian Sanderson and Colin Rourke for their help and enthusiasm. M.T.G.\ was sponsored by EPSRC, B.W.\ by a University of Warwick Graduate Award.
\end{document} | arXiv |
\begin{document}
\title {\bf{A Family of Quasimorphism Constructions}}
\author{Gabi Ben Simon \\ ETH-Z\"urich\\
[email protected]} \maketitle
\begin{abstract}
In this work we present a principle which says that quasimorphisms can be obtained via ``local data'' of the group action on certain appropriate spaces. Roughly, the principle says that instead of starting with a given group and trying to build or study its space of quasimorphisms, we should start with a space carrying a certain structure, in such a way that groups acting on this space and respecting this structure automatically carry quasimorphisms, and these are expected to be better understood. In this paper we suggest such a family of spaces and give illustrating examples for countable groups and for groups related to actions on the circle, as well as an outline of a construction for diffeomorphism groups. A distinctive advantage of this principle is that it allows the construction of the quasimorphism in a quite direct way. Further, we prove a lemma which, besides serving as a platform for the construction of quasimorphisms on countable groups, is of interest in itself: it provides an embedding of any given countable group as a group of quasi-isometries of a universal space, where the space of such embeddings is in bijection with the projective space of the homogeneous quasimorphism space of the group.
\end{abstract}
\section{Introduction}
Given a group, $G$, a \textit{quasimorphism} on the group, $\mu$, is a function to $\mathbb{R}$ which satisfies
$$|\mu(xy)-\mu(x)-\mu(y)| \leq B$$ for all $x, y \in G$
and a universal $B$. The homogenization of $\mu$, $\mu^{h}(g):= \lim \limits_{n \rightarrow \infty} \frac{\mu(g^{n})}{n}$
(the limit exists), is a quasimorphism at bounded distance from $\mu$. Further, $\mu^h$ is a homogeneous function, which means $\mu^{h}(g^n)=n \mu^{h}(g)$ for every integer $n$.
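For completeness we recall the standard reason why the limit exists (this is classical and not specific to the present setting). The quasimorphism inequality applied to powers of a fixed $g$ gives
$$
|\mu(g^{m+n})-\mu(g^{m})-\mu(g^{n})| \leq B \qquad \text{for all } m,n \geq 1,
$$
so the sequence $\mu(g^{n})+B$ is subadditive and Fekete's lemma yields the existence of $\lim \limits_{n \rightarrow \infty} \frac{\mu(g^{n})}{n}$ (the limit is finite since $\mu(g^{n}) \geq n\mu(g)-(n-1)B$). Iterating the same inequality gives $|\mu(g^{n})-n\mu(g)| \leq (n-1)B$, and dividing by $n$ shows $|\mu^{h}(g)-\mu(g)| \leq B$.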
From the point of view of the author's interest there are two main sources of study of this notion. One comes from the attempt to construct quasimorphisms on diffeomorphism groups, of special interest symplectomorphism and Hamiltonian groups (see for example \cite{BS}, \cite{EnP}, \cite{Py}, \cite{Shel}). The other comes from the study of Lie groups and, with and without relation, countable groups; the groups studied are, for example, universal covers of hermitian Lie groups and word hyperbolic groups (see for example \cite{BS-H3}, \cite{BS-H1}, \cite{BM}, \cite{C}, \cite{CF}, \cite{BIW}, \cite{EP} and \cite{calegari} as a general reference). In this work we want to report a feature, which sometimes appears indirectly, namely that quasimorphisms can be obtained via ``local data'' of the group action on certain appropriate spaces. This feature appears in both families mentioned above. Roughly, the principle says that instead of starting with a given group and trying to build or study its space of quasimorphisms, we should start with a space with a certain structure, in such a way that groups acting on this space and respecting this structure will automatically carry quasimorphisms, and these should be fairly well understood. In this paper we suggest such a family of spaces and give illustrating examples for countable groups and groups related to actions on the circle, as well as an outline of a construction for diffeomorphism groups; see section 3. A distinctive advantage of this principle is that it allows the construction of the quasimorphism in a quite direct way. Further, see subsection \eqref{ladder}, we prove a lemma which, besides serving as a platform for the construction of quasimorphisms on countable groups, is of interest in itself, since it provides us with an embedding of the countable group as a group of quasi-isometries of a universal space, where the space of such embeddings is in bijection with the projective space of the homogeneous quasimorphism space of the group. We see this paper as a first step in developing the picture that emerges from it.
We should remark that the idea of looking at quasimorphisms from the point of view of group actions on spaces with a certain appropriate structure started in \cite{BS-H4}. Nevertheless the focus there was completely different: it was on a systematic study of the relation between quasimorphisms and the notions of relative growth and order structures on groups; see \cite{BS-H4}.
\textbf{Acknowledgements:}
Many thanks to Danny Calegari for his willingness to host me at Cambridge University on very short notice. The discussion with him was important for this work. Many thanks to Tobias Hartnick for reading a draft of the paper and for his remarks. Many thanks to Andreas Leiser for the help with the drawings. Many thanks to Leonid Polterovich for his important remarks about the preliminary version. The remarks of Dietmar Salamon about the diffeomorphism group helped me to focus my intentions and ideas; I thank him very much for that. Finally, I am grateful to the Departement Mathematik of ETH Z\"urich for the support during this academic year, and in particular to Paul Biran and Dietmar Salamon.
\section{Main Principle}
The starting point is the following simple fact that can be extracted from $ \cite {BS-H4}$.
\begin{lemma}\label{basiclemma} Assume that $X$ is a space such that there exists a function $h: X \rightarrow \mathbb{R} $ and a group action
$G \mathrel{\reflectbox{$\righttoleftarrow$}} X$ such that for any $ x,y \in X, g \in G $ we have:
\begin{equation} \label{root condition} |(h(g \cdot x)-h(g \cdot y))-(h(x)-h(y))| \leq B \end{equation}
for some universal bound B .
Then the function $ \mu (g)= h(g \cdot a)- h(a) $ is a quasimorphism, where $\mu$ does not depend on the choice of $a \in X $ up to a bounded error. Further, if the action is effective and is not bounded in the sense that
$ \lim \limits_{n \to \pm \infty} h(g^n \cdot a) = \pm \infty $ for some $g$, then $\mu ^h$, the homogenization of $\mu$, is a nonzero homogeneous quasimorphism.
\end{lemma}
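For convenience we spell out why $\mu$ in the lemma satisfies the quasimorphism inequality; this is immediate from condition \eqref{root condition}. For $g_1, g_2 \in G$,
$$
\mu(g_1g_2)-\mu(g_1)-\mu(g_2)=\bigl(h(g_1\cdot(g_2 \cdot a))-h(g_1 \cdot a)\bigr)-\bigl(h(g_2 \cdot a)-h(a)\bigr),
$$
which has absolute value at most $B$ by \eqref{root condition} applied with $x=g_2 \cdot a$ and $y=a$.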
\begin{remark}:\label{br} Actually, as it shown in \cite{BS-H4} every homogeneous quasimorphism can be obtained in this way by simply choosing $X=G$ and $h$ to be the given quasimorphism on $G$.
\end{remark}
The lemma suggests using the ``inverse ideology'': start with a space $h: X \rightarrow \mathbb{R}$ and try to find a group action $G \mathrel{\reflectbox{$\righttoleftarrow$}} X$ which satisfies condition \eqref{root condition} above. The paper suggests one possible answer to this problem.
We now give a setup which will lead to examples.
Let $X$ be a space with no special structure. And assume that a group $ A $ acts on $X$ such that \footnote{Choosing transparency over conciseness, we choose to skip what might be a more concise formulation, using standard terminology, of the axioms.}
$ \begin{cases}\label{mainsetup}
X= \coprod \limits_{ \alpha \in A } F_{ \alpha } & (X \text{ is a union of "fundamental domains"})\\
\alpha : F_{ \mathbbm{1}} \rightarrow F_{ \alpha }, & \forall \alpha \in A \\
h: X \rightarrow \mathbb{R} , & \text{ s.t. } Im(h(F_{\mathbbm {1}})) \subseteq [0,1), \\
h( \alpha (x))=h(x)+ \rho (\alpha)+b(x, \alpha), & \forall x \in F_{ \mathbbm{1}}, \forall \alpha \in A
\end{cases} $
\noindent where the restriction of $\alpha$ above is a bijection, $h$ is a function on $X$, $b: F_{ \mathbbm{1}} \times A \rightarrow \mathbb{R}$ is a bounded function, and $\rho$ is an unbounded homomorphism to $\mathbb{Z}$.
\begin{definition}\label{triple} We define $X$, $h$ and $A$ compatible as above as a triple, and denote it by $( X, h , A )$. We do not include $b$ and $\rho$ in the notation since in all that follows they will not be used directly. \end{definition}
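A minimal model example of a triple, not needed later but useful to keep in mind, is
$$
X=\mathbb{R}, \qquad F_{n}=[n,n+1) \ \ (n \in A=\mathbb{Z}), \qquad h(x)=x, \qquad n \cdot x := x+n,
$$
for which $Im(h(F_{\mathbbm{1}}))=[0,1)$, $\rho=id$ and $b \equiv 0$.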
\begin{theorem}\label{maintheorem} Assume that $G$ acts on the space $( X, h, A )$ such that the action of $G$ commutes with the action of $A$ and, $h(g(F_{ \mathbbm{1}})) \subseteq [r, r+C_{0}]$ for all $g \in G$, and
for a universal constant $C_{0}$ and some $r$ which depends on $g$; for example one can choose $r:= \inf\bigl(Im(h(g(F_{\mathbbm{1}})))\bigr)$.
Then $\mu$ of Lemma \ref{basiclemma} defines a quasimorphism.
\end{theorem}
\begin{remark}\label{ac}
1) The assumption that $A$ and $G$ need to commute can be relaxed. Instead we can demand that
the groups actions will \textit{ almost commute}, which means that for all $\alpha \in A$ and $ g \in G$ we have that $| h(\alpha \circ g\cdot x_{0})-h(g \circ \alpha\cdot x_{0})|$ is universally bounded independently of $x_{0}$. The proof goes exactly the same.
2) Note that due to the facts that $X$ is "tiled" by images of $F_{ \mathbbm{1}}$ under the action of $A$, and $A$ commutes with $G$, it is enough to construct, or define, the image of $F_{ \mathbbm{1}}$ under
the restriction of the action of $G$ to $F_{ \mathbbm{1}}$.
\end{remark}
\begin{proof} The proof is made, essentially, of three simple facts: The first thing to note is that for every $\gamma \in A$ we have that $h(F_{\gamma}) \subseteq [r, r+M_{0}]$ for some $r$ and the universal constant $M_{0}$ which bounds $b$. Indeed we have $h(F_{\gamma})=h(\gamma (F_{\mathbbm{1}}))$ and it follows from the last axiom that \begin{equation}\label{p1} h( \gamma \cdot x)=h(x)+ \rho (\gamma)+b(x, \gamma) \end{equation} so the claim follows since $b$ is bounded and of course the bounds do not depend on $\gamma$. The second thing to note is that it follows from the axioms that for all $g \in G,\text{ } \gamma \in A, \text{ } x \in X$ we have that \begin{equation}\label{p2} |h(g \circ \gamma \cdot x)-(h(g\cdot x)+ \rho (\gamma)+b)|\end{equation} is universally bounded (actually by $2M_{0}$ ) where we use \eqref{p1} to see it.
The third thing to notice is that it follows from the assumption that $h(g(F_{ \mathbbm{1}})) \subseteq [r, r+C_{0}]$ and the fact that the actions of $A$ and $G$ commute, that for all $\alpha \in A$ we have \begin{equation}\label{p3} h(g(F_{ \alpha})) \subseteq [r, r+C_{0}]\end{equation} for some $r$.
So now let $x,y \in X \text {and }g\in G$. Further assume that $x \in F_{\alpha}$ and $y \in F_{\beta}$ so of course we have $\beta\circ \alpha^{-1} \cdot x \in F_{\beta}$. We now estimate
$$ |(h(g \cdot x)-h(g \cdot y))-(h(x)-h(y))|=$$
$$ |(h(g \cdot x)-h(g\circ \beta \alpha^{-1}\cdot x)+
h(g\circ \beta \alpha^{-1}\cdot x)
-h(g \cdot y)+ h(\beta \alpha^{-1}\cdot x)-h(x)+h(y)- h( \beta \alpha^{-1}\cdot x)| $$
\begin{align}\label{b1} \leq & |(h(g \cdot x)-h(g\circ \beta \alpha^{-1}\cdot x) +h(g\circ \beta \alpha^{-1}\cdot x)\\\label{b2} & \nonumber-h(g \cdot y)+ h(\beta \alpha^{-1}\cdot x)-h(x)| \\ &+ |h(y)- h( \beta \alpha^{-1}\cdot x)| \leq
\end{align}
\begin{align} \label{t1} & |(h(g \cdot x)-(h(g \cdot x)+\rho( \beta \alpha^{-1})+b) +h(g\circ \beta \alpha^{-1}\cdot x)\\ \label{t2} & \nonumber-h(g \cdot y)+ (h(x)+\rho( \beta \alpha^{-1})+b)-h(x)| \\ &+ |h(y)- h( \beta \alpha^{-1}\cdot x)|+2M_{0} \leq\\ \label{t3} & | h(g\circ \beta \alpha^{-1}\cdot x)-h(g \cdot y) | \\ \label{t4} &+ |h(y)- h( \beta \alpha^{-1}\cdot x)|+4M_{0} \leq 4M_{0}+1+C_{0} \end{align}
The transition from \eqref{b1} and \eqref{b2} to \eqref{t1} and \eqref{t2} comes from the second and the first properties above (which come from \eqref{p2} and \eqref{p1} respectively). Further, the transition to \eqref{t3} and \eqref{t4} comes from the fact that $b$ is globally bounded and from the third property above \eqref{p3}. This concludes the proof.
\end{proof}
\section{Examples demonstrating the main principle}
\textit{From now on we set: $A= \mathbb{Z} $ and $ \rho = id$, unless otherwise stated. We use the same notation and conventions as above.}
We now move to the next family, using Theorem \eqref{maintheorem} with an eye to countable groups.
\subsection{Family 1: Countable Groups}
\subsubsection{Example}
It follows almost directly from the results of \cite{BS-H4} that the Rademacher quasimorphism on $PSL_{2}( \mathbb{Z})$ fits into the scheme of Theorem \eqref{maintheorem}. Here again we have $A = \mathbb{Z}$, $X$ is a countable subset of $\mathbb{R}^2$, and $h: X \rightarrow \mathbb{R}$. Further, we have an action of $\mathbb{Z}$ on $X$ which almost commutes (see Remark \eqref{ac}) with the action of $PSL_{2}( \mathbb{Z})$ on $X \subseteq \mathbb{R}^2$. The resulting quasimorphism, as we said, is a Rademacher quasimorphism. The details are as follows; we repeat the data of \cite{BS-H4}, in which the full details of the construction appear. Recall that $PSL_{2}( \mathbb{Z}) \cong \mathbb{Z}_2 \ast \mathbb{Z}_3$. Under this isomorphism we denote by $S$ and $R$ the generators of $\mathbb{Z}_2$ and $\mathbb{Z}_3$ respectively. In figure \ref{fig2} below we demonstrate how the group acts on a subset $X$ of the plane (of course only part of it appears in the figure), where the action is obvious from the figure. Here $h$ increases in the horizontal direction; again see the figure. Lastly, between the two dotted lines we have one fundamental domain of the action.
\begin{figure}
\caption{The $PSL_2(\mathbb{Z}$) example}
\label{fig2}
\end{figure}
\subsubsection{Universal embedding for Countable Groups with nonzero quasimorphism}\label{ladder}
Further, the following set-up, culminating in the lemma below, also serves as a platform for building examples by applying Theorem \eqref{maintheorem}.
Consider a discrete countable subset of the open interval $(0,1)$, denoted by $H$, and consider the ``ladder'' set $\mathcal{L}:= H\times \mathbb{Z} \subseteq \mathbb{R}^{2}$. Define a metric on $\mathcal{L}$ as $d=d_{1}+d_{2}$, where the $d_{i}$ are the usual induced metrics on $H$ and $\mathbb{Z}$ respectively. So we have a metric space $(\mathcal{L}, d)$. Let us denote by $h: \mathcal{L} \rightarrow \mathbb{Z}$ the projection to the $\mathbb{Z}$ component. Finally, we denote by $QI^{h}(\mathcal{L},d)$ the space of all quasi-isometries of $(\mathcal{L},d)$ which respect condition \eqref{root condition}, where $g$ in \eqref{root condition} stands for the quasi-isometry and $B$ depends on $g$.
Now let $G$ be any countable group whose space of nonzero homogeneous quasimorphisms is not empty. Then we have:
\begin{lemma}\label{qil} For a given nonzero homogeneous quasimorphism on $G$, say $\mu$, there is an injection of $G$, induced by $\mu$, into $QI^{h}(\mathcal{L},d)$. Further, denote by $\Psi^{\mu}$ the action on $\mathcal{L}$ of the image of $G$ induced by $\mu$, and call two such representations equivalent if the difference, with respect to $h$, between the orbits of their actions is universally bounded for every point of $\mathcal{L}$. Then there is an injection of the projective space of the homogeneous quasimorphism space into the space of equivalence classes of these representations. In other words, if $\mu$ and $m$ are linearly independent then $[\Psi^{\mu}] \neq [\Psi^{m}]$, where the brackets stand for equivalence classes.
\end{lemma}
\textbf{Remark.} It is worthwhile to emphasize what are the lemma's main points.
1. The space $\mathcal{L}$ is quasi-isometric to $\mathbb{Z}$; nevertheless the level sets of $h$ on $\mathcal{L}$ play a very important role in the embedding of $G$ above, so they cannot be discarded.
2. The lemma says that \textbf{any} quasimorphism on any countable group comes from an injection of the group into $QI^{h}(\mathcal{L},d)$. It further tells us that two injections will be essentially the same if their quasimorphisms are.
3. Note that actually any group in $QI^{h}(\mathcal{L},d)$, countable or not, with \textit{fixed} constants in the quasi-isometry condition, will carry a quasimorphism, by using Lemma \eqref{basiclemma}. So such a group has at least as many embeddings into $QI^{h}(\mathcal{L},d)$ as there are points in the projective space of its homogeneous quasimorphism space. For example: for $SL_{n}( \mathbb{Z})$ with $n \geq 3$, which is boundedly generated by its elementary subgroups and in which every elementary matrix can be written as a commutator, it easily follows that for any injection of this group into $QI^{h}(\mathcal{L},d)$ we will have only bounded orbits. This is of course not a surprise, knowing that the group has no non-trivial homogeneous quasimorphisms.
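For the reader's convenience we recall the standard argument behind the last assertion (it is not used elsewhere in this paper). If $\varphi$ is a homogeneous quasimorphism with defect $D$, then $|\varphi([a,b])| \leq D$ for all $a,b$; writing $g \in SL_{n}(\mathbb{Z})$, $n \geq 3$, as a product $g=g_1 \cdots g_m$ of a bounded number of elementary matrices, each of which is a commutator, gives
$$
|\varphi(g)| \leq \sum_{i=1}^{m}|\varphi(g_i)|+(m-1)D \leq (2m-1)D,
$$
so $\varphi$ is bounded on the group, and homogeneity ($|\varphi(g)|=|\varphi(g^{N})|/N$) forces $\varphi \equiv 0$.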
\begin{proof}
Since $\mu$ is a nonzero homogeneous quasimorphism, we can choose a quasimorphism $\mu_0: G \rightarrow \mathbb{R}$ with the following properties: (i) the homogenization of $\mu_0$ is $\mu$, (ii) $\mu^{-1}_{0}(n) \neq \emptyset$ for all $n \in \mathbb{Z}$,\newline (iii) $\mu_0(\mathbbm{1}_G)=0$, and (iv) $\mu_0$ has values only in $\mathbb{Z}$. For such $\mu_0$ denote $\mu_0^{-1}(n):= G_n(\mu)$, which is of course a countable set. In particular we have $\coprod \limits_{n \in \mathbb{Z}} G_{n}(\mu)= G$.
Now for each $n \in \mathbb{Z}$ biject $G_{n}(\mu)$ with $H \times \{n\} \subset \mathcal{L}$. As we will see below, we can in fact assume without loss of generality, using the countability of $G$, that $G_{n}(\mu)$ is infinite. So we have now identified $G$ with $\mathcal{L}$. Denote by $\Psi^{\mu}$ the bijection between $G$ and $\mathcal{L}$. We will denote the left action of $g$ in the group on itself by $l_g$, and the induced action on $\mathcal{L}$ by $\tilde{l}_g$. Following remark \eqref{br} we know that $\mu_0$ and $G$ satisfy condition \eqref{root condition} of lemma \eqref{basiclemma} with some bound $B$. Further, by definition the left action of $g$ on $\Psi^{\mu}(x)$ for some $x \in G$ is $\Psi^{\mu}(l_g\cdot x)$.
By construction we have for any $x\text{, }y\text{, }g \in G$: \begin{equation}\label{seq}|\mu_0(l_g \cdot x)-\mu_0(l_g \cdot y)|= d_2(\Psi^{\mu}(l_g \cdot x), \Psi^{\mu}(l_g \cdot y))= d_2(\tilde{l}_g \cdot \Psi^{\mu}( x),\tilde{l}_g \cdot \Psi^{\mu}( y)) \end{equation}
Combining \eqref{seq} and \eqref{root condition} (implemented to $\mu_0$ and $G$ for the bound $B$) we get
\begin{equation}\label{eeq} | d_2(\tilde{l}_g \cdot \Psi^{\mu}( x),\tilde{l}_g \cdot \Psi^{\mu}( y)) - d_2(\Psi^{\mu}( x), \Psi^{\mu}( y)) | \leq B \end{equation}
Now since the metric $d$ is bounded by 1 from $d_2$ we get $$ | d(\tilde{l}_g \cdot \Psi^{\mu}( x),\tilde{l}_g \cdot \Psi^{\mu}( y)) - d(\Psi^{\mu}( x), \Psi^{\mu}( y)) | \leq B+2$$
which means that $G$ acts on $(\mathcal{L},d)$ via quasi-isometries. Now, by simply following the construction we see that actually the image of $G$ is in $QI^{h}(\mathcal{L},d)$. To see the second part of the lemma we observe that we can reconstruct $\mu$ from $\Psi^{\mu}$ by choosing any point $x \in \mathcal{L}$ and simply iterating the values of $h$ along the induced action of the group element on the chosen point (this can be seen transparently by following the construction). In particular the value does not depend on a choice of representative (since the bounded error will vanish in the homogenization). Summing up, we have an injection as required.
\end{proof}
\subsection{Family 2: Quasimorphisms that comes from monotonicity of $h$}
We now move to the next example, which we give mainly for the sake of completeness of the scope of examples. Here we use Theorem \eqref{maintheorem} a bit differently; we have the following set-up.
Assume that a set $G \subseteq Aut(X)$ acts on $(X, h, A)$ (see definition \eqref{triple}) s.t. \newline a) $ \forall g \in G , x, y \in X $, $ h(gx) \geq h(gy) \iff h(x) \geq h(y) $ \newline b) Elements of $G$ preserve the level sets of $h$.\newline c) The error function $b$ equals zero. Then,
\begin{theorem} Under the above assumptions \label{arot}$G$ is a group. The action of $G$ on the triple satisfies all the condition of Theorem \eqref{maintheorem} so lemma \eqref{basiclemma} applied to this case gives that $ \mu$ is a quasimorphism on $G$. Further for the case $A= \mathbb{Z}$, $\mu$ comes from an action on the circle. \end{theorem}
\begin{proof}
We first remark that it is very simple to define $G \subseteq Aut(X)$ such that all the assumptions are kept but the preservation of the level sets of $h$, and $G$ will not be a group. This being said, once we add the assumption then trivially we have that $G$ is a group. Still, in order to use Theorem \eqref{maintheorem} we must show that for all $g\in G$ we have $h(g(F_0)) \subseteq [r, r+C_0]$ for some $r$, which depends on $g$, and $C_0$ which does not. The proof for $A=\mathbb{Z}$ and the general case is the same, so we will give a proof for $A=\mathbb{Z}$. We will see that $C_0=1$. We first recall that any element of $G$ is monotone with respect to $h$ in the sense above. Let us denote $r+j := \inf \{h(g(F_0) \cap F_j)\}$ for some $0 \leq r < 1$ and some $j \in \mathbb{Z}$. So we need to show that \begin{equation}\label{ub} h(g(F_0)) \subseteq [r+j, r+j+1]. \end{equation} Assume not; let $z \in F_0$, $a \in X$ be such that $g(z)=a$ and $h(a) >r+j+1$. Write\footnote{For simplicity of notation, in this proof, we will denote by $\alpha_k$ the action of $k \in \mathbb{Z}$.} $\alpha_{\mathbbm{1}}(b)=a$ for $b \in F_j$. We claim that $b$ is not covered by $g(F_0)$. Suppose it were: let $x \in F_0$ be such that $g(x)=b$. Applying $\alpha_{\mathbbm{1}}$ to both sides we get $\alpha_{\mathbbm{1}} g(x)= \alpha_{\mathbbm{1}}(b) \Leftrightarrow g(\alpha_{\mathbbm{1}}(x))=a$, which means (recall that $g$ is a bijection) that $\alpha_{\mathbbm{1}}(x) \in F_\mathbbm{1}$ is the pre-image of $a$, whereas the pre-image $z$ of $a$ lies in $F_0$. This is a contradiction. Thus $b$ is not covered by $g(F_0)$.
So $b$ must be covered by $g(F_{-\mathbbm{1}})$. On the other hand $h(b)>r+j$. This means that there is an element, say $k$, in $g(F_0)\cap F_j$ such that $r+j \leq h(k) <h(b)$. But $k$ came from $F_0$, while $b$ came from $F_{-\mathbbm{1}}$. This contradicts the monotonicity of $g$ with respect to $h$. Thus \eqref{ub} is proved, which is the main condition of Theorem \eqref{maintheorem}.
To show that our quasimorphism comes from an action on the circle we argue in a standard way. Denote $\mathcal{F}_k := h(F_k)$ and define the subset $\mathcal{F} := \bigcup_{k \in \mathbb{Z}} \mathcal{F}_k \subseteq \mathbb{R}$. Since $G$ preserves the level sets there is an induced action of $G$ on $\mathcal{F}$, and further, since by assumption the bounded error function $b$ is zero, the $\mathbb{Z}$-action induces an action by integer translations. Lastly, the action of $G$ on $\mathcal{F}$ is monotone. It is a standard fact that such an action extends to the whole real line. We thus get that the image of $G$ is in $Homeo^+_\mathbb{Z}(\mathbb{R})$, the group of orientation preserving homeomorphisms of the real line which commute with the integer translations. Now, noting that the constructed homogeneous quasimorphism comes from iterating the values of $h(x_0)$ with respect to the action of the group element, we get that this is exactly the translation number defined on $Homeo^+_\mathbb{Z}(\mathbb{R})$; in particular the quasimorphism comes from an action on the circle.
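Recall that for $\tilde{g} \in Homeo^+_\mathbb{Z}(\mathbb{R})$ the translation number is given by
$$
\tau(\tilde{g})=\lim \limits_{n \rightarrow \infty} \frac{\tilde{g}^{\,n}(x)-x}{n},
$$
a limit which does not depend on $x \in \mathbb{R}$; under the identification above this is precisely $\mu^{h}$.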
\end{proof}
\section{Diffeomorphism Groups and Discussion}
We would like to state the following remarks relating to subsequent research and diffeomorphism groups.
As for diffeomorphism groups we propose the following set-up.
Assume $X= \coprod \limits_{n \in \mathbb {Z} }F_{n} \subseteq \mathbb{R}^{k+1}$ is a smooth $k$ manifold, path connected, unbounded. In particular, carries an induced Riemannian metric, so we have a notion of length of paths, and volume on $X$. Assume also that the closure
of $F_{n}$ intersects only the closures of $F_{n-1}$ and $F_{n+1}$, and that the action of $\mathbb{Z}$ on $X$ is via isometries (recall that the action of $n$, $\alpha_{n}$, satisfies $\alpha_{n} : F_{0} \xrightarrow {\cong} F_{n}$). Further we choose $U$ to be an open submanifold of $F_0$ which exhausts the (finite) volume of $F_0$ up to a given constant and has the same (finite) diameter as $F_0$. Finally, choose $U$ such that it has a smooth boundary.
Now let $G' \subseteq Diff_\mathbb{Z}(X)$, and let $h_{0}: X \rightarrow \mathbb{R}^{\geq 0}$ be a (say smooth) function such that for all $g \in G'$ and a fixed $\varepsilon>0$
\begin{equation} | \int \limits_{g(F_{0})}h_{0}dm | \leq B\end{equation}
\begin{equation} \varepsilon < \inf \limits_{x \in Int(g(U))} \sup \{vol(B_{r}(x)) \mid \overline {B_{r}(x)} \subseteq \overline {g(U)},\ B_{r}(x) \text{ is an open ball containing } x \} \end{equation}
For example: $h_{0} $ can be taken to be periodic with respect to the $ F_{n}'s $
so that the elements of $G'$ respect the period up to some error; the simplest example is when $h_{0}$ is just a constant function.
We define $h: X \rightarrow \mathbb{R}$ as follows: we choose a reference point $x_{0} \in U$ and define
\begin{equation} h(x)=\pm \inf \limits_{\gamma} \{ | \int \limits_{\gamma} h_{0} dx | ; \gamma \text{ is a path which connects } x_{0} \text{ to } x \} \end{equation}
where $dx$ is the length element of the metric and the sign is determined to be compatible with the labeling of the fundamental domains. For a ``simple enough'' fundamental domain $F_0$ such an $h$ will make $(X, h, \mathbb{Z})$ into a triple in the sense of \eqref{triple}. Finally, note that the set of groups contained in $G'$ which satisfy the above conditions is not empty, since the translation group is such a group. Even further, in some cases, as we saw in Example 1 of the previous section, we have a family of such groups. Thus by Zorn's lemma we have at least one maximal group $G$ satisfying the conditions above. For such $G$ we have:
\begin{theorem}
For all $g \in G $ we have $h(g(U)) \subseteq [r, r+ C_{0}] $ where $C_{0}$ depends only on
$h$ and $\varepsilon$, and where we iterate only the points of $U$. Thus the formula $\mu (g)= h(g \cdot x_{0})-h(x_{0})$ defines a quasimorphism on $G$, and $\mu ^{h}$, the homogenization of $\mu$, is a nonzero homogeneous quasimorphism on $G$.
\end{theorem}
The disadvantage here is that we do not know the properties of $G$, nor how interesting it is, since it was obtained in such a crude way.
\textbf{2.} The basic picture that stems from Lemma \eqref{qil}, according to \cite{Ha}, seems to relate to \cite{Ma}, whose results appear to touch on at least one aspect of the implications of the lemma. In \cite{Ma}, quasimorphisms are indeed studied on finitely presented groups via, more or less, quasi-isometric actions on trees. It seems that by using Lemma \eqref{qil} as a starting point, together with results of \cite{BS-H4}, a new interpretation of \cite{Ma} can be given. An important point is that this work can also be used to study similar ideas, where one considers embeddings of countable groups into diffeomorphism groups, using quasi-isometries as carriers of data on quasimorphisms on the groups. I should mention that partial motivation could come from \cite{Polt} and the works mentioned therein.
\
\end{document} | arXiv |
For questions concerning random matrices.
Let $U$ be a random $n \times n$ unitary matrix (w.r.t. the Haar measure) and let $M$ be a $k \times l$ submatrix. What is the distribution of the singular values of $M$?
What does this physics paper mean by having a matrix in a denominator?
Are these random matrix processes equivalent?
Total Variation Distance Between a Distribution and a perturbed distribution?
How to do change of variables of a j.p.d.f with N pdf(s)?
What is the product of two Haar distributed unitary matrices?
I guess a product of two Haar distributed unitary matrices is also a Haar distributed unitary matrix. Is there a proof?
What is the expectation of the rank of a matrix with a 1 at each column?
Do imaginary inverses of non-invertible matrices exist?
There isn't a real solution to $x^2 = -1$, but a complex solution $x = i$ exists. Similarly, does there exist a complex inverse of non-invertible matrices?
Bounding sub-Gaussian tail events by Gaussian tail events?
I have a naive question because it's mentioned in every random matrix paper and is not explained. What does it mean to say a random matrix has localized eigenvalues? And what are some examples of it?
Is there a distribution for random matrices which are constrained to have "unit vector" columns?
is random gaussian matrix invertible?
Is a Gaussian random matrix invertible? I mean, can we invert a random Gaussian square matrix, and what is the nature of its determinant, i.e., whether the determinant is zero or non-zero?
Are the eigenvectors of real Wigner matrices made of independent random variables with zero-mean?
Why are independence and mean-zero necessary for the symmetrization lemma to hold?
How can the Wigner semicircle distribution go to zero? | CommonCrawl |
Monotone and oscillatory solution of $y^{(n)}+py=0$
by W. J. Kim
Proc. Amer. Math. Soc. 62 (1977), 77-82
Monotone and oscillatory behaviors of the solutions with the property that $y(x)/x^2 \to 0$ as $x \to \infty$ or $y(x)/x \to 0$ as $x \to \infty$ are discussed. For example, it is shown that every nonoscillatory solution $y$, such that $y(x)/x \to 0$ as $x \to \infty$, monotonically tends to zero as $x \to \infty$, provided $n$ is odd, $p \geqq 0$, and $\int^{\infty} x^{n-1} p(x)\,dx = \infty$.
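A minimal illustration (ours, not from the paper): for $n = 3$ and $p \equiv 1$ the hypotheses hold, since $n$ is odd, $p \geqq 0$ and $\int^{\infty} x^{2}\,dx = \infty$. The general solution of $y''' + y = 0$ is $y(x) = c_1 e^{-x} + e^{x/2}\left(c_2 \cos \tfrac{\sqrt{3}}{2}x + c_3 \sin \tfrac{\sqrt{3}}{2}x\right)$; the solutions with $y(x)/x \to 0$ are exactly the multiples of $e^{-x}$, which are nonoscillatory and tend to zero monotonically, as stated.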
Journal: Proc. Amer. Math. Soc. 62 (1977), 77-82
MSC: Primary 34C10 | CommonCrawl |
\begin{document}
\begin{frontmatter}
\title{Minimum strongly biconnected spanning directed subgraph problem}
\author{Raed Jaberi}
\begin{abstract}
Let $G=(V,E)$ be a strongly biconnected directed graph. In this paper we consider the problem of computing an edge subset $H \subseteq E$ of minimum size such that the directed subgraph $(V,H)$ is strongly biconnected.
\end{abstract}
\begin{keyword}
Directed graphs \sep Connectivity \sep Approximation algorithms \sep Graph algorithms \sep strongly connected graphs
\end{keyword}
\end{frontmatter}
\section{Introduction}
In $2010$, Wu and Grumbach \cite{WG2010} introduced the concept of strongly biconnected directed graphs. A directed graph $G=(V,E)$ is strongly biconnected if $G$ is strongly connected and the underlying undirected graph of $G$ has no articulation point (note that the underlying graph of $G$ is connected if $G$ is strongly connected). Let $G=(V,E)$ be a strongly biconnected directed graph. In this paper we consider the problem of computing an edge subset $H \subseteq E$ of minimum size such that the directed subgraph $(V,H)$ is strongly biconnected.
Observe that optimal solutions for minimum strongly connected spanning subgraph problem are not necessarily strongly biconnected, as shown in Figure \ref{figure:exampleoptimalsol}.
\begin{figure}
\caption{(a) A strongly biconnected directed graph. (b) An optimal solution for the minimum strongly connected spanning subgraph problem. (c) An optimal solution for the minimum strongly biconnected spanning subgraph problem. Note that this subgraph has strong articulation points but its underlying graph has no articulation points.}
\label{figure:exampleoptimalsol}
\end{figure}
The minimum strongly connected spanning subgraph problem is NP-complete \cite{G79}. Note that a strongly biconnected graph has a strongly biconnected spanning subgraph with $n$ edges if and only if it has a Hamiltonian cycle. Therefore, the minimum strongly biconnected spanning subgraph problem is also NP-complete. Khuller et al. \cite{KRY94}, Zhao et al. \cite{ZNI03} and Vetta \cite{Vetta2001} provided approximation algorithms for the minimum strongly connected spanning subgraph problem. Wu and Grumbach \cite{WG2010} introduced the concept of strongly biconnected directed graphs and strongly biconnected components. Strongly biconnected directed graphs and twinless strongly connected directed graphs have received a lot of attention in \cite{WG2010,Botea2018,BoteaIJCAI2018,Botea2015,Raghavan06,Jaberi2019, Jaberi21,Jaberi2021,Jaberi01897,GeorgiadisandKosinas20,Jaberi09793,Jaberi03788,Jaberi47443,Jaberi2022}. Articulation points and blocks of an undirected graph can be calculated in $O(n+m)$ time using Tarjan's algorithm \cite{TAARJAN72,Schmidt2013}. Tarjan \cite{TAARJAN72} gave the first linear time algorithm for calculating strongly connected components. Pearce \cite{Pearce2016} and Nuutila et al. \cite{Nuutila1994} provided improved versions of Tarjan's algorithm. Efficient linear time algorithms for finding strongly connected components were given in \cite{Sharir1981,Gabow2000,CM96, Mehlhorn2017,DietzfelbingerMehlhornSanders2014}.
Strong articulation points and strong bridges can be computed in linear time in directed graphs using the algorithms of Italiano et al. \cite{ILS12,Italiano2010,FGILS2016}. The algorithms of Italiano et al. \cite{ILS12,Italiano2010} are based on a strong connection between strong articulation points, strong bridges and dominators in flowgraphs. Dominators can be found efficiently in flowgraphs \cite{AHLT99,BGKRTW00,GT16,GT05,LT79}. In the following section we consider the minimum strongly biconnected spanning subgraph problem.
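To make the connection between strong articulation points and dominators concrete, the following sketch (our own illustration in Python with NetworkX, which is an assumption of this note and not part of the cited works; unlike the algorithms of Italiano et al.\ \cite{ILS12,Italiano2010}, it is not linear-time) computes the strong articulation points of a strongly connected digraph from the dominator trees of the flowgraph rooted at an arbitrary vertex $r$ and of the reverse flowgraph.
\begin{verbatim}
import networkx as nx

def strong_articulation_points(G):
    # G: a strongly connected nx.DiGraph with at least three vertices.
    r = next(iter(G.nodes))
    sap = set()
    for D in (G, G.reverse()):
        # A vertex v != r is a strong articulation point iff it is a
        # non-trivial dominator in the flowgraph of G (or of the reverse
        # graph) rooted at r.
        idom = nx.immediate_dominators(D, r)
        sap.update(d for v, d in idom.items() if d not in (v, r))
    # The root r is checked directly: it is a strong articulation point
    # iff G - r is no longer strongly connected.
    if not nx.is_strongly_connected(G.subgraph(set(G.nodes) - {r})):
        sap.add(r)
    return sap
\end{verbatim}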
\section{Approximation algorithms for the minimum strongly biconnected spanning subgraph problem}
In this section we study the minimum strongly biconnected spanning subgraph problem. The following lemma shows an obvious connection between the size of an optimal solution for the minimum strongly biconnected spanning subgraph problem and the size of an optimal solution for the minimum strongly connected spanning subgraph problem.
\begin{lemma} \label{def:sizeofsbss} Let $G=(V,E)$ be a strongly biconnected directed graph. Then the size of an optimal solution for the minimum strongly connected spanning subgraph problem is less than or equal to the size of an optimal solution for the minimum strongly biconnected spanning subgraph problem. \end{lemma} \begin{proof} Let $t$ be the size of an optimal solution for the minimum strongly connected spanning subgraph problem. By definition, every strongly biconnected spanning subgraph $G_{1}=(V,E_{1})$ of $G$ is strongly connected. Therefore, we have $\vert E_{1} \vert \geq t$. \end{proof} A strongly connected spanning subgraph of size at most $2(n-1)$ of a strongly connected graph $G=(V,E)$ can be constructed by computing the union of an outgoing branching rooted at $v \in V$ and an incoming branching rooted at $v$ (\hspace{1sp}\cite{FJ81,KRY94}). But this subgraph is not necessarily strongly biconnected.
Algorithm \ref{algo:approximationalgorithmforsbss} can compute a feasible solution for the minimum strongly biconnected spanning subgraph problem. \begin{figure}\label{algo:approximationalgorithmforsbss}
\end{figure} \begin{lemma} The output of Algorithm \ref{algo:approximationalgorithmforsbss} is strongly biconnected. \end{lemma} \begin{proof} Lines $1$--$9$ computes a strongly connected spanning subgraph $G_{v}=(V,E_{v})$ of $G$ since there is a path from $v$ to $w$ and a path from $w$ to $v$ for all $w \in V$. The while loop of lines $10$--$14$ removes all articulation points of the underlying graph of $G_{v}=(V,E_{v})$. \end{proof}
The following lemma shows that the approximation factor of Algorithm \ref{algo:approximationalgorithmforsbss} is $3$. \begin{lemma} \label{def:optsolution2esb} The number of edges in the output of Algorithm \ref{algo:approximationalgorithmforsbss} is at most $3(n-1)$. \end{lemma} \begin{proof} Lines $1$--$9$ compute a strongly connected spanning subgraph $G_{v}=(V,E_{v})$ of size at most $2n-2$ since each spanning tree has only $n-1$ edges. The while loop of lines $10$--$14$ removes all articulation points of the underlying graph of $G_{v}=(V,E_{v})$ by adding at most $n-1$ edges to the subgraph $G_{v}=(V,E_{v})$ because the number of strongly biconnected components of any directed graph is at most $n$. \end{proof}
\begin{Theorem}
Algorithm \ref{algo:approximationalgorithmforsbss} runs in $O(nm)$ time. \end{Theorem} \begin{proof}
A spanning tree of a strongly biconnected graph can be constructed in $O(n+m)$ time using depth-first search or breadth-first search. Furthermore, lines $10$--$14$ take $O(nm)$ time since the number of strongly biconnected components of any directed graph is at most $n$. \end{proof}
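For concreteness, the construction described above can be sketched as follows (our illustration in Python with NetworkX, not the paper's listing of Algorithm \ref{algo:approximationalgorithmforsbss}; in particular, the rule used below for choosing the edge added in the while loop is only one possible choice that stays within the bound of at most $n-1$ added edges):
\begin{verbatim}
import networkx as nx

def approx_strongly_biconnected_spanning_subgraph(G, v):
    # G: a strongly biconnected nx.DiGraph, v: an arbitrary root vertex.
    H = nx.DiGraph()
    H.add_nodes_from(G.nodes)
    # Lines 1-9: union of an outgoing branching and an incoming branching
    # rooted at v, which gives a strongly connected spanning subgraph
    # with at most 2(n-1) edges.
    H.add_edges_from(nx.bfs_tree(G, v).edges)
    H.add_edges_from((x, y) for (y, x) in nx.bfs_tree(G.reverse(), v).edges)
    # Lines 10-14: while the underlying graph still has an articulation
    # point, add an edge of G whose endpoints share no block; every such
    # edge merges blocks, so at most n-1 edges are added in total.
    while list(nx.articulation_points(H.to_undirected())):
        blocks = list(nx.biconnected_components(H.to_undirected()))
        for (x, y) in G.edges:
            if not any(x in b and y in b for b in blocks):
                H.add_edge(x, y)
                break
    return H
\end{verbatim}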
\begin{lemma} \label{def:optsolution2esb} Let $G=(V,E)$ be a strongly biconnected directed graph. Let $G_{1}=(V,E_{1})$ be the output of a $t$-approximation algorithm for the minimum strongly connected spanning subgraph problem for the input $G$. A strongly biconnected subgraph can be obtained from $G_{1}=(V,E_{1})$ with size at most $(1+t)h$ in polynomial time, where $h$ is the size of an optimal solution for the minimum strongly biconnected spanning subgraph problem. \end{lemma} \begin{proof} A strongly biconnected subgraph can be obtained from $G_{1}=(V,E_{1})$ by running the while loop of lines $10$--$14$ of Algorithm \ref{algo:approximationalgorithmforsbss} on $G_{1}=(V,E_{1})$. By lemma \ref{def:sizeofsbss}, the size of an optimal solution for the minimum strongly connected spanning subgraph problem is less than or equal to the size of an optimal solution for the minimum strongly biconnected spanning subgraph problem.
This while loop takes $O(nm)$ time. The while loop of lines $10$--$14$ adds at most $n-1$ edges to $G_{1}$. Clearly, each strongly biconnected spanning subgraph of $G$ has at least $n$ edges, so $n-1 < h$. Moreover, since $G_{1}$ is the output of a $t$-approximation algorithm, the previous observation gives $\vert E_{1} \vert \leq t\cdot h$, and hence the resulting subgraph has at most $\vert E_{1} \vert + n-1 \leq th + h = (1+t)h$ edges. \end{proof}
The following lemma shows an obvious connection between the size of an optimal solution for the minimum strongly biconnected spanning subgraph problem and the size of an optimal solution for the minimum 2-vertex connected spanning undirected subgraph problem.
\begin{lemma} \label{def:sizeofsbssbsubgraph} Let $G=(V,E)$ be a strongly biconnected directed graph. Then the size of an optimal solution for the minimum 2-vertex connected spanning undirected subgraph problem is less than or equal to the size of an optimal solution for the minimum strongly biconnected spanning subgraph problem. \end{lemma} \begin{proof} Let $t$ be the size of an optimal solution for the minimum 2-vertex connected spanning undirected subgraph problem. By definition, the underlying graph of every strongly biconnected spanning subgraph $G_{1}=(V,E_{1})$ is biconnected. Therefore, we have $\vert E_{1} \vert \geq t$. \end{proof}
\begin{lemma} \label{def:optsolution2esb} There is a $17/6$-approximation algorithm for the minimum strongly biconnected spanning subgraph problem. \end{lemma} \begin{proof} Let $G=(V,E)$ be a strongly biconnected directed graph. A strongly biconnected subgraph can be obtained from $G$ by calculating a strongly connected spanning subgraph of $G$ and a biconnected spanning subgraph of the underlying graph of $G$. Moreover, let $h$ be the size of an optimal solution for the minimum strongly biconnected spanning subgraph problem. Let $i$ be the size of an optimal solution for the minimum strongly connected spanning subgraph problem and let $s$ be the size of an optimal solution for the minimum 2-vertex connected spanning undirected subgraph problem. By lemma \ref{def:sizeofsbss}, we have $h\geq i$. Moreover, by lemma \ref{def:sizeofsbssbsubgraph}, we have $h\geq s$. A feasible solution of size at most $(17/6)h$ for the minimum strongly biconnected spanning subgraph problem can be obtained by running Vetta's algorithm \cite{Vetta2001} on $G$ and the algorithm of Vempala and Vetta \cite{VV00} on the underlying graph of $G$.
\end{proof}
\section{Open Problems}
Results of Mader \cite{Mader71,Mader72} imply that the number of edges in each minimal $k$-vertex-connected undirected graph is less than or equal to $kn$ \cite{CT00}. Results of Edmonds \cite{Edmonds72} and Mader \cite{Mader85} imply that the number of edges in each minimal $k$-vertex-connected directed graph is at most $2kn$ \cite{CT00}. Jaberi \cite{Jaberi47443} proved that each minimal $2$-vertex-strongly biconnected directed graph has at most $7n$ edges. The proof is based on results of Mader \cite{Mader71,Mader72,Mader85} and Edmonds \cite{Edmonds72}.
We leave as an open problem whether the number of edges in each minimal strongly biconnected directed graph is at most $2n$.
An important question is whether there are better algorithms for the problems in \cite{Jaberi03788,Jaberi47443,DietzfelbingerJaberi2015}.
\begin{thebibliography}{4} \bibitem {AHLT99} S. Alstrup, D. Harel, P.W. Lauridsen, M. Thorup, Dominators in linear time, SIAM J. Comput. $28$($6$) ($1999$) $2117$--$2132$. \bibitem{Botea2018}Adi Botea, Davide Bonusi, Pavel Surynek: Solving Multi-agent Path Finding on Strongly Biconnected Digraphs. J. Artif. Intell. Res.$ 62: 273$--$314 (2018)$ \bibitem{BoteaIJCAI2018}Adi Botea, Davide Bonusi, Pavel Surynek: Solving Multi-Agent Path Finding on Strongly Biconnected Digraphs (Extended Abstract). IJCAI $ 5563$--$5567(2018)$ \bibitem{Botea2015}Adi Botea, Pavel Surynek: Multi-Agent Path Finding on Strongly Biconnected Digraphs. AAAI $2015: 2024$--$2030$ \bibitem {BGKRTW00} A.L. Buchsbaum, L. Georgiadis, H. Kaplan, A. Rogers, R.E. Tarjan, J.R. Westbrook, Linear-time algorithms for dominators and other path-evaluation problems, SIAM J. Comput. $38$($4$) ($2008$) $1533$--$1573$.
\bibitem{CT00} J. Cheriyan, R. Thurimella,
Approximating Minimum-Size $k$-Connected Spanning Subgraphs via Matching. SIAM J. Comput. $30(2): 528$--$560 (2000)$ \bibitem{CM96} Joseph Cheriyan, Kurt Mehlhorn: Algorithms for Dense Graphs and Networks on the Random Access Computer. Algorithmica 15(6): $521$--$549(1996)$ \bibitem{DietzfelbingerMehlhornSanders2014} Martin Dietzfelbinger, Kurt Mehlhorn, Peter Sanders: Algorithmen und Datenstrukturen - die Grundwerkzeuge. eXamen.press, Springer $2014$, ISBN 978-3-642-05471-6, pp. I-XII, $1$--$380$ \bibitem{DietzfelbingerJaberi2015} Martin Dietzfelbinger, Raed Jaberi: On testing single connectedness in directed graphs and some related problems. Inf. Process. Lett. $115(9): 684$--$688 (2015)$
\bibitem{Edmonds72} J. Edmonds, Edge-disjoint branchings. Combinatorial Algorithms, pages $91$--$96$,
$1972$
\bibitem{FGILS2016} Donatella Firmani, Loukas Georgiadis, Giuseppe F. Italiano, Luigi Laura, Federico Santaroni: Strong Articulation Points and Strong Bridges in Large Scale Graphs. Algorithmica $74(3)$: $1123$--$1147 (2016)$ \bibitem {FJ81} G. N. Frederickson, J. JáJá, Approximation Algorithms for Several Graph Augmentation Problems. SIAM J. Comput. $10(2)$ $(1981)$ $270$--$283$.
\bibitem {FILOS12} D. Firmani, G.F. Italiano, L. Laura, A. Orlandi, F. Santaroni, Computing strong articulation points and strong bridges in large scale graphs, SEA, LNCS $7276$, ($2012$) $195$--$207$.
\bibitem{G79} M. R. Garey, David S. Johnson:
Computers and Intractability: A Guide to the Theory of NP-Completeness. W. H. Freeman 1979, ISBN $0$--$7167$--$1044$--$7$ \bibitem {GT05} L. Georgiadis, R.E. Tarjan, Dominator tree verification and vertex- disjoint paths, In Proceedings of the 16th ACM-SIAM Symposium on Discrete Algorithms ($2005$) $433$--$442$.
\bibitem{GT16}Loukas Georgiadis, Robert E. Tarjan, Dominator Tree Certification and Divergent Spanning Trees. ACM Trans. Algorithms $12(1): 11:1$--$11:42 (2016)$ \bibitem{GeorgiadisandKosinas20} Loukas Georgiadis, Evangelos Kosinas,
Linear-Time Algorithms for Computing Twinless Strong Articulation Points and Related Problems. ISAAC $2020$ $38:1$--$38:16$ \bibitem{Gabow2000} Harold N. Gabow: Path-based depth-first search for strong and biconnected components. Inf. Process. Lett. 74(3-4): 107-114 (2000)
\bibitem{ILS12} G.F. Italiano, L. Laura, F. Santaroni,
Finding strong bridges and strong articulation points in linear time, Theoretical Computer Science $447$ ($2012$) $74$--$84$.
\bibitem{Italiano2010} G. F. Italiano, L. Laura, F. Santaroni, Finding Strong Bridges and Strong Articulation Points in Linear Time. COCOA $(1) 2010: 157$--$169$
\bibitem{Jaberi2021}Raed Jaberi, $2$-edge-twinless blocks,
Bulletin des Sciences Mathématiques, Volume $168$,
$102969$, ISSN $0007$--$4497$, $(2021)$, https://doi.org/10.1016/j.bulsci.2021.102969.
\bibitem{Jaberi21} Raed Jaberi,
Computing 2-twinless blocks, Discrete Mathematics Letters, $29$--$33$, Volume $5 (2021)$, DOI: 10.47443/dml.2020.0037
\bibitem{Jaberi47443} Raed Jaberi, Minimum 2-Vertex Strongly Biconnected Spanning Directed Subgraph Problem Discrete Mathematics Letters, $40$--$43$, Volume $7(2021)$ DOI: 10.47443/dml.2021.0024
\bibitem{Jaberi2019} Raed Jaberi,
Twinless articulation points and some related problems. CoRR abs/1912.11799 $(2019)$
\bibitem {Jaberi01897} Raed Jaberi, b-articulation points and b-bridges in strongly biconnected directed graphs. CoRR abs/2007.01897 $(2020)$
\bibitem{Jaberi09793} Raed Jaberi, 2-blocks in strongly biconnected directed graphs. CoRR abs/2007.09793 $(2020)$ \bibitem{Jaberi03788} Raed Jaberi, Minimum $2$-vertex-twinless connected spanning subgraph problem. CoRR abs/2001.03788 (2020) \bibitem{Jaberi2022} Raed Jaberi, Minimum 2-edge strongly biconnected spanning directed subgraph problem, CoRR abs/2207.03401 $(2022)$ \bibitem {KRY94} S. Khuller, B. Raghavachari, N.E. Young, Approximating the Minimum Equivalent Digraph. SODA $(1994)$ $177$--$186$. \bibitem {LT79} T. Lengauer, R.E. Tarjan, A fast algorithm for finding dominators in a flowgraph. ACM Trans. Program. Lang. Syst. $1(1)$ ($1979$) $121$--$141$. \bibitem{Mehlhorn2017} Kurt Mehlhorn, Stefan Näher, Peter Sanders: Engineering DFS-Based Graph Algorithms. CoRR abs/1703.10023 (2017)
\bibitem{Mader85} W. Mader,
Minimal $n$-fach zusammenhängende Digraphen. J. Comb. Theory, Ser. B $38(2): 102$--$117 (1985)$
\bibitem{Mader71} W. Mader, Minimale n-fach kantenzusammenhängende Graphen. Math. Ann., $191:21$
--$28, 1971$
\bibitem{Mader72} W. Mader, Ecken vom Grad n in minimalen n-fach zusammenhängenden Graphen. Arch. Math. (Basel), $23:219$--$224, 1972$
\bibitem{Nuutila1994} Esko Nuutila, Eljas Soisalon-Soininen: On Finding the Strongly Connected Components in a Directed Graph, Inf. Process. Lett. $49(1): 9$--$14 (1994)$
\bibitem{Pearce2016} David J. Pearce: A space-efficient algorithm for finding strongly connected components, Inf. Process. Lett., $116(1)47$--$52(2016)$
\bibitem{TAARJAN72}
R. E. Tarjan, Depth First Search and Linear Graph Algorithms, SIAM J. Comput.,$1(2) (1972),146$--$160$ \bibitem {Raghavan06} S. Raghavan, Twinless Strongly Connected Components, Perspectives in Operations Research, ($2006$) $285$--$304$.
\bibitem{Schmidt2013}
Jens M. Schmidt:
A simple test on $2$-vertex- and $2$-edge-connectivity. Inf. Process. Lett. $113(7): 241$-$244 (2013)$ \bibitem{Sharir1981} M. Sharir. A strong-connectivity algorithm and its applications in data flow analysis. Computers and Mathematics with Applications, $7(1):67$--$72, (1981)$ \bibitem {VV00} S. Vempala, A. Vetta, Factor $4/3$ approximations for minimum $2$-connected subgraphs, APPROX. $(2000)$ $262$--$273$. \bibitem{Vetta2001} Adrian Vetta: Approximating the minimum strongly connected subgraph via a matching lower bound. SODA $417$--$426(2001)$
\bibitem{WG2010} Z. Wu, S. Grumbach, Feasibility of motion planning on acyclic and strongly connected directed graphs. Discret. Appl. Math. $158(9): 1017$--$1028 (2010)$
\bibitem {ZNI03} L. Zhao, H. Nagamochi, T. Ibaraki, A linear time $5/3$-approximation for the minimum strongly-connected spanning subgraph problem, Inf. Process. Lett. $86$ ($2003$) $63$--$70$. \end{thebibliography}
\end{document} | arXiv |
\begin{document}
\begin{abstract} The long-standing Alekseevskii conjecture states that a connected homogeneous Einstein space $G/K$ of negative scalar curvature must be diffeomorphic to $\RR^n$. This was known to be true only in dimensions up to $5$, and in dimension $6$ for non-semisimple $G$. In this work we prove that this is also the case in dimensions up to $10$ when $G$ is not semisimple. For arbitrary $G$, besides $5$ possible exceptions, we show that the conjecture holds up to dimension $8$. \end{abstract}
\maketitle
\section{Introduction}
A Riemannian manifold $(M^n, g)$ is called Einstein if its Ricci tensor satisfies $\Ricci(g) = c\, g$, for some $c\in \RR$. This is a very subtle condition, since it is too strong to allow general existence results, and at the same time too weak for obtaining obstructions in dimensions above $4$. It is therefore natural to consider the Einstein equation for a special class of metrics such as K\"ahler, Sasakian, with special holonomy, or with some symmetry assumption, among others (see \cite{LeBWang, cruzchica, Sparkssurvey, Wang2012} for further details and examples).
We study this equation on homogeneous manifolds. The classification of homogeneous Einstein spaces is naturally divided into cases according to the sign of the scalar curvature. Ricci-flat homogeneous manifolds are flat by \cite{AlkKml}. If the scalar curvature is positive, the manifold must be compact by Bonnet-Myers' theorem, while a theorem of Bochner \cite{Bochner1948} implies that if it is negative, the manifold is non-compact. In the latter case, the following fundamental problem remains unsolved
\begin{AC}\cite[7.57]{Bss} Any connected homogeneous Einstein space of negative scalar curvature is diffeomorphic to a Euclidean space. \end{AC}
The purpose of the present article is to investigate this conjecture in low-dimensional spaces. Recall that in dimensions $2$ and $3$, Einstein metrics have constant sectional curvature. Simply-connected homogeneous Einstein $4$-manifolds were classified by G.\ Jensen in his thesis \cite{Jns}, and they are all isometric to symmetric spaces. In dimension $5$, non-compact homogeneous Einstein spaces $G/K$ were studied in \cite{Nkn1}, where it was shown that if $G\neq \Sl_2(\CC)$ then they are isometric to simply-connected Einstein solvmanifolds, and in particular diffeomorphic to $\RR^5$. In the recent work \cite{semialglow} the authors proved that the conjecture holds in dimension $6$, provided there exists a non-semisimple transitive group of isometries (a shorter proof of this fact was recently obtained in \cite{JblPtr}). Our first main result is the following
\begin{teointro}\label{main6} Let $(M^6,g)$ be a $6$-dimensional connected homogeneous Einstein space of negative scalar curvature, on which neither $\Sl_2(\CC)$ nor $\widetilde{\Sl_2(\RR)}\times \widetilde{\Sl_2(\RR)}$ acts transitively by isometries. Then, $M^6$ is diffeomorphic to $\RR^6$. \end{teointro}
Remarkably, the question of whether the $6$-dimensional simple Lie groups $\Sl_2(\CC)$ and $\widetilde{\Sl_2(\RR)}\times \widetilde{\Sl_2(\RR)}$ admit a left-invariant Einstein metric is still open. This is however not surprising if one recalls that the total number of homogeneous Einstein metrics on its compact counterpart $S^3 \times S^3$ is still unknown, even though the compact case has been much more investigated in the literature.
Our second main result confirms the conjecture in dimension $7$.
\begin{teointro}\label{main7} Any $7$-dimensional connected homogeneous Einstein space of negative scalar curvature is diffeomorphic to $\RR^7$. \end{teointro}
Besides the case of left-invariant metrics on two simple Lie groups and one very special homogeneous space, we show that the conjecture also holds in dimension $8$.
\begin{teointro}\label{main8} Let $(M^8,g)$ be an $8$-dimensional connected homogeneous Einstein space of negative scalar curvature which is de Rham irreducible. Assume that $(M^8,g)$ is not an invariant metric on the simply connected homogeneous space $\left(\Sl_2(\RR)\times \Sl_2(\CC)\right)/\Delta\U(1)$, and that neither $\widetilde{\Sl_3(\RR)}$ nor $\widetilde{\SU(2,1)}$ acts transitively by isometries. Then, $M^8$ is diffeomorphic to $\RR^8$.
\end{teointro}
It is important to remark that in Theorems \ref{main6}, \ref{main7} and \ref{main8} we actually obtain a stronger conclusion, namely that the spaces admit a simply-transitive solvable group of isometries (i.e.~ they are isometric to a \emph{solvmanifold}). We mention here that there is a stronger version of the conjecture, which is obtained by replacing the conclusion ``diffeomorphic to a Euclidean space'' by ``isometric to a simply-connected solvmanifold'' (this is commonly referred to as the \emph{strong Alekseevskii conjecture} in the literature, see \cite{JblPtr}). Both statements turn out to be equivalent when the isometry group is linear, and in fact at the present time all known examples of homogeneous Einstein spaces with negative scalar curvature are isometric to simply-connected solvmanifolds.
Finally, we focus on the case where the presentation group is not semisimple. Our main result in this direction is the following
\begin{teointro}\label{mainnonuni} Let $(M^n,g)$ be a simply-connected non-compact homogeneous Einstein space of dimension less than or equal to $10$, which is de Rham irreducible. If $(M^n,g)$ admits a non-semisimple transitive group of isometries, then $M^n$ is diffeomorphic to $\RR^n$. \end{teointro}
Using a close link relating non-compact homogeneous Einstein spaces and expanding homogeneous Ricci solitons (cf.~\cite{HePtrWyl,alek} and \cite{Jbl13b}), Theorem \ref{mainnonuni} immediately implies the following result.
\begin{corollary} Let $(M^n,g)$ be a simply-connected expanding homogeneous Ricci soliton which is not Einstein, of dimension less than or equal to $9$, and which is de Rham irreducible. Then, $M^n$ is diffeomorphic to $\RR^n$. \end{corollary}
With regards to other previous known results on low-dimensional homogeneous Einstein spaces, we mention that the classification of simply-connected compact homogeneous Einstein manifolds was obtained in \cite{AleDotFer} in dimension $5$, and in \cite{Nkn04} in dimension $7$. Partial results in dimension $6$ may be found in \cite{NknRod03}. In \cite{BhmKrr} it was proved that all simply-connected compact homogeneous spaces of dimension less than $12$ admit a homogeneous Einstein metric. In the non-compact case, the classification of Einstein solvmanifolds in low dimensions was studied in \cite{finding, Wll03, NikitenkoNikonorov, FC13}.
The starting point for the proof of our main results are the structural results for non-compact homogeneous Einstein spaces given in \cite{alek}, and especially their more recent refinements proved in \cite{JblPtr}. Roughly speaking, these results state that the simply-connected cover of such a space admits a very special presentation of the form $G/K$, where $G = \left( G_1 A\right)\ltimes N$ is a semi-direct product of a nilpotent normal Lie subgroup $N$ and a reductive Lie subgroup $U = G_1 A$, with center $A$ and whose semisimple part $G_1 = [U,U]$ has no compact simple factors and contains the isotropy $K$. Moreover, the orbits of $U$ and $N$ are orthogonal at $eK$, the induced metric on $N$ is a homogeneous Ricci soliton, and the induced metric on $U/K$ satisfies an Einstein-like condition in which the action of $U$ on $N$ comes into play (see \eqref{eqRicU/K} below). In the present article we further improve those results by showing that the orbits of $A$ and $G_1$ are also orthogonal at $eK$ (Theorem \ref{thm_lemadimn}). This allows us to reduce the problem to solving the generalized Einstein equation \eqref{eqRicU/K} on $G_1/K$, which turns out to be a homogeneous space of dimension at most $7$ with semisimple transitive group. Moreover, as an application of our new structure refinements we present a short proof of a result of Jablonski \cite{Jbl13b} which states that homogeneous Ricci solitons are algebraic (Corollary~ \ref{cor_algebraic}).
The reduction to the simply-connected case is possible in dimensions $8$ and lower because we show that those spaces are isometric to solvmanifolds, thus allowing us to apply the results in \cite{Jab15}.
In order to study the Einstein equation (and its generalized version) in the semisimple case, we give in Table \ref{tabla} a complete classification of non-compact homogeneous spaces with a semisimple transitive group without compact simple factors, in dimensions up to $8$. The classification is based on that of the compact case, mainly given in \cite{BhmKrr}, and a duality procedure \cite{Nkn1}. It includes some infinite families, such as the non-compact analogs of the Aloff-Wallach spaces, and some other examples in dimension $8$. To solve the Einstein equation for homogeneous metrics on these spaces we proceed case by case, studying the isotropy representations, and in many cases the results from \cite{Nkn2} can be applied to conclude that there is no solution. However, in some cases --mostly in higher dimensions-- this is not enough, and a very detailed analysis of the Ricci curvature is carried out. As a by-product of this analysis, a general non-existence result for some cases where $\Sl_2(\RR)$ is one of the simple factors of the transitive group is given in Proposition \ref{Propsl2RxG1}.
One of the reasons why we are not able to extend Theorem \ref{mainnonuni} to dimensions $11$ and higher is that already in dimension $11$, examples such as $( \Sl_2(\CC) \cdot \RR )\ltimes \RR^4$ appear, with $N = \RR^4$ and $\Sl_2(\CC)$ acting non-trivially on it. The homogeneous Einstein equation for such a space reduces to an equation for left-invariant metrics on $\Sl_2(\CC)$ which is even more general than the Einstein equation.
The article is organized as follows. In Section \ref{prelimstruct} we state the structure theorems for non-compact homogeneous Einstein spaces, since they will be repeatedly used along the paper, and prove the new refinements metioned above. In Section \ref{semisimple} we prove Theorem \ref{thmsemisimple}, which deals with the semisimple case, and in order to do that we give the classification of non-compact semisimple homogeneous spaces up to dimension $8$. This, together with previously known results, already implies Theorem \ref{main6}. In Section \ref{sectionnonuni} we prove Theorem \ref{mainnonuni}, and then in Section \ref{strong} we focus on the strong Alekseevskii conjecture and complete the proofs of Theorems \ref{main7} and \ref{main8}.
\vs \noindent {\it Acknowledgements.} It is our pleasure to thank Jorge Lauret for fruitful discussions, and Christoph B\"ohm for providing useful comments on a draft version of this article.
Part of this research was carried out while the first author was a visitor at McMaster University. She is very grateful to the Department of Mathematics, the Geometry and Topology group and especially to McKenzie Wang for his kindness and hospitality.
\section{Structure of non-compact homogeneous Einstein spaces}\label{prelimstruct}
In this section we review the most important known facts about the algebraic structure of non-compact homogeneous Einstein spaces, since they will be crucial in the proof of our main results. Here and throughout the rest of the article, all manifolds under consideration are connected and all homogeneous spaces are almost-effective, unless otherwise stated.
\begin{theorem}[\cite{alek,JblPtr}]\label{structure} Let $(M,g)$ be a simply-connected homogeneous Einstein space with negative scalar curvature. Then, there exists a transitive Lie group of isometries $G$ whose isotropy at some point $p\in M$ is $K$, with the following properties: \begin{itemize}
\item[(i)] $G = \left(G_1 A\right) \ltimes N$, where $N$ is a nilpotent normal Lie subgroup, $U = G_1 A$ is a reductive Lie group with center $A = Z(U)$, and $G_1 = [U,U]$ is semisimple without any compact simple factors and contains the isotropy $K$.
\item[(ii)] The orbits of $U$ and $N$ are orthogonal at $p$.
\item[(iii)] The induced left-invariant metric $g_N$ on $N$ is a Ricci soliton (i.e.~ $(N, g_N)$ is a \emph{nilsoliton}).
\item[(iv)] The Ricci curvature of the induced $U$-invariant metric $g_{U/K}$ on $U/K$ is given by
\begin{equation}\label{eqRicU/K}
\ricci_{U/K}(Y,Y) = c \cdot g_{U/K}(Y,Y) + \tr \left(S\left(\theta(Y)\right)^2\right),
\end{equation}
for some $c<0$, where $\theta : \ug \to \Der(\ngo)$ is the corresponding infinitesimal action ($\ug = \Lie(U), \ngo = \Lie(N)$), $S(A) = \unm\left(A + A^t\right)$, and the transpose is taken relative to the nilsoliton inner product on $\ngo$.
\item[(v)] The infinitesimal action $\theta$ and the nilsoliton metric satisfy the following compatibility condition:
\begin{equation}\label{eqmmtheta}
\sum_i [\theta(Y_i), \theta(Y_i)^t] = 0,
\end{equation}
where the sum is taken over an orthonormal basis for $\ug$\footnote{See Remark \ref{remarks}, \eqref{remarkinfinitesimal} below.}. Moreover, $\theta(Y) = \theta(Y)^t$ for every $Y\in \zg(\ug)$. \end{itemize} Conversely, if a simply-connected homogeneous manifold admits a transitive group of isometries $G$ satisfying $(i)-(v)$, then it is Einstein, with negative scalar curvature. \end{theorem}
\begin{remark}\label{remarks} \begin{enumerate}[(a)]
\item\label{remarkinfinitesimal} Conditions (i) and (ii) may also be interpreted at the infinitesimal level, as follows:
Let $\ggo, \ug, \ngo, \kg$ be the Lie algebras of the groups $G, U, N, K$, respectively. We have that $\ggo = \ug \ltimes_\theta \ngo$, with $\ug$ a reductive subalgebra and $\ngo$ the nilradical of $\ggo$ (the maximal nilpotent ideal). Consider the reductive decomposition $\ggo = \kg \oplus \pg$ for $G/K$, where $\pg$ is the orthogonal complement of $\kg$ relative to the Killing form of $\ggo$. This induces a reductive decomposition $\ug = \kg \oplus \hg$ for the homogeneous space $U/K$, by letting $\hg := \pg \cap \ug$. The $G$-invariant metric $g$ on $G/K$ is thus identified with an $\Ad(K)$-invariant inner product $\ip$ on~ $\pg$, and one has that
\[
\langle \hg, \ngo \rangle = 0.
\]
For technical reasons, it is sometimes convenient to extend this inner product to an inner product on $\ggo$, which we will also denote $\ip$, by letting $\kg \perp\pg$ and choosing on $\kg$ some $\Ad(K)$-invariant inner product. By doing so, we clearly obtain $\langle \ug, \ngo \rangle = 0$. In fact, in condition (v), by an orthonormal basis of $\ug$ we mean that it is orthonormal with respect to the inner product extended as explained above.
\item\label{remarktheta} $\theta : \ug \to \Der(\ngo)$ is nothing but the adjoint representation of $\ggo$ co-restricted to act on the nilradical, that is,
\[
\theta(Y) X = [Y,X] \in \ngo, \qquad X\in \ngo, \,\, Y\in \ug.
\]
It was noticed by J.~Lauret that condition \eqref{eqmmtheta} is equivalent to $\theta$ being a zero of the moment map associated with the natural $\Gl(\ngo)$-action on the vector space $\End(\ug,\End(\ngo))$ (see \cite[Appendix]{semialglow} and \cite[$\S 2.1$]{JblPtr} for more details on this fact).
\item The Einstein constant of $(G/K,g)$ and the cosmological constant of the nilsoliton $(N,g_N)$ both coincide with the scalar $c<0$ in condition (iv).
\item According to the construction procedure for expanding algebraic solitons described in \cite[\S 5]{alek}, it is easy to see that given any non-compact Einstein homogeneous space $G/K$, we can always build another one with the same $U/K$ but with abelian nilradical.
\item The simply-connected hypothesis is not necessary for obtaining the results at the infinitesimal level. However, it turns out to be necessary for the converse assertion to hold. In particular, one question which still remains unanswered is whether any homogeneous Einstein space with negative scalar curvature is simply connected. This is known to be true when the universal cover is a solvmanifold, by the results in \cite{AC99, Jab15}.
\item\label{rmkDotti} If $G$ is a unimodular Lie group, then by \cite[Theorem 2]{Dtt88} it must in fact be semisimple, and hence it equals $G_1$. In this case, the only information that Theorem \ref{structure} provides is that $G_1$ has no compact simple factors.
\item On the other hand, if $(M,g)$ admits a non-semisimple transitive group of isometries, it follows from \cite{alek, JblPtr} that the group $G$ in Theorem \ref{structure} may be chosen to be non-unimodular. In this case, the so called ``mean curvature vector'' $H$, implicitly defined by
\[
\langle H, X \rangle = \tr \left( \ad X\right), \qquad \forall \, X\in\ggo,
\]
is non-zero.
\end{enumerate} \end{remark} By using that under the hypothesis of Theorem \ref{structure}, $G/K$ is diffeomorphic to the product manifold $G_1/K \times AN$, with $S = AN$ a simply-connected solvable Lie group, one obtains the following \begin{corollary}\label{reductionG1/K} Let $(M,g)$ be a simply-connected homogeneous Einstein space with negative scalar curvature, and let $G/K$ be the presentation given in Theorem \ref{structure}. Then, $M$ is diffeomorphic to a Euclidean space if and only if $G_1/K$ is so. \end{corollary}
It is important to notice that Theorem \ref{structure} does not state that the orbits of $Z(U)$ and $G_1$ are orthogonal at $p$. In other words, it is not known whether $\zg(\ug) \perp \ggo_1$ (where $\ggo_1= \Lie(G_1) = [\ug,\ug]$). This would be the most natural result to expect, since it would imply that there is a Levi decomposition $G = G_1 \ltimes S$ which is adapted to the geometry of $(M^n,g)$, in the sense that the orbits of $G_1$ and $S$ at $p$ are orthogonal. In what follows we prove that in fact one always has this nicer structure.
\begin{theorem}\label{thm_lemadimn} Let $(M,g)$ be a simply-connected homogeneous Einstein space of negative scalar curvature, and consider for it the presentation $G/K$ given in Theorem \ref{structure}. Then, $\zg(\ug)$ is orthogonal to $\ggo_1$. \end{theorem}
\begin{remark}\label{rmkn1}
If furthermore one has that $\theta|_{\ggo_1} = 0$, then $G/K$ is isometric to a Riemannian product $G_1/K \times S$ of Einstein homogeneous spaces of negative scalar curvature. Notice that condition $\theta|_{\ggo_1} = 0$ is trivially satisfied when $\dim \ngo = 1$. \end{remark}
\begin{proof} Following the notation from Remark \ref{remarks}, \eqref{remarkinfinitesimal}, equation \eqref{eqRicU/K} may be rewritten as an equation for endomorphisms of $\hg \simeq T_{eK} U/K$ as \begin{equation}\label{eqn_RicUK}
\Ricci_{U/K} = c \cdot I + C_\theta. \end{equation} Here, $\Ricci_{U/K} \in \End(\hg)$ denotes the Ricci operator of the homogeneous space $(U/K, g_{U/K})$, and $C_\theta \in \End(\hg)$ is the symmetric endomorphism given by \[ \langle C_\theta X, Y \rangle = \tr S(\theta(X))S(\theta(Y)), \qquad X,Y\in \hg. \] Since $\theta$ is defined on $\ug$ and not just on $\hg$, we may of course extend $C_\theta$ to a symmetric endomorphism of $\ug$, where $C_\theta (\kg) = 0$ (recall that the action of the isotropy is by skew-symmetric operators).
We have $\theta: \ug \to \End(\ngo)$, and by Theorem \ref{structure} $S(\theta(\zg(\ug)))$ is a family of pairwise commuting, symmetric operators in $\End(\ngo)$, which commute also with all of $\theta(\ug)$ (recall that $\theta$ is a Lie algebra representation). We may thus consider an orthogonal decomposition of $\ngo$ into common eigenspaces for the family $S(\theta(\zg(\ug)))$ (i.e.~ a weight-space decomposition): \begin{equation}\label{rootdec} \ngo = \ngo_1 \oplus \ldots \oplus \ngo_l, \end{equation}
with $\alpha_1, \ldots, \alpha_l \in \zg(\ug)^*$ the corresponding weights. The restricted representation $\theta_{\ggo_1} = \theta|_{\ggo_1}: \ggo_1 \to \End(\ngo)$ must preserve this weight-space decomposition. For each $k = 1,\ldots,l$ we haveß a \emph{co-restricted} representation of the semisimple Lie algbera $\ggo_1$, given by \[
\theta_{\ggo_1}^k := \pi_k \circ \theta |_{\ggo_1} : \ggo_1 \to \End(\ngo_k), \] where $\pi_k:\ngo \to \ngo_k$ is the orthogonal projection. Observe that, in particular, $\theta_{\ggo_1}^k(Y)$ is traceless for each $Y\in \ggo_1$ and $k=1,\ldots,l$.
Now we claim that for $Y\in \ggo_1$, $X\in \zg(\ug)$ one has that $\la C_\theta X, Y\ra = 0$. Indeed, using the orthogonality of the decomposition \eqref{rootdec}, and the fact that it is preserved by $\theta(\ug)$, we obtain \begin{align*}
\left\langle C_\theta \, Y, X \right\rangle &= \sum_{k=1}^l \tr \left( S\left(\theta_{\ggo_1}^k (Y)\right) \left(\alpha_k(X) \cdot I\right)\right) = \sum_{k=1}^l \alpha_k(X) \, \tr \theta_{\ggo_1}^k(Y) = 0. \end{align*}
Consider in $U/K$ the reductive decomposition $\ug = \kg \oplus \hg$. We may also assume that $\ggo_1 = \kg \oplus \hg_1$ is a reductive decomposition for $G_1/K$, where $\hg_1 \subseteq \hg$. Let $\qg$ be the orthogonal complement of $\hg_1$ in $\hg$, and let us show that $\qg = \zg(\ug)$. To that end, take $Y \in \qg$ and write it as $Y = Y_1 + Y_\zg$, where $Y_1 \in \hg_1$, $Y_\zg \in \zg(\ug)$. We now look at the Ricci curvature in the directions $Y_1$, $Y_\zg$. First, by~ \eqref{eqn_RicUK} and the above claim we obtain \[
\Ricci_{U/K}(Y_1, Y_\zg) = c \, \langle Y_1, Y_\zg\rangle = c\, \langle Y_1, Y-Y_1\rangle = - c\, \| Y_1\|^2 \geq 0, \] since $c<0$. On the other hand, we use that $Y_\zg \in \zg(\ug)$, $Y\perp [\ug,\ug]$, and the explicit formula for the Ricci curvature in the unimodular case (see \cite[7.38]{Bss}), to get \begin{align*}
\Ricci_{U/K}(Y_1, Y_\zg) =& -\unm\sum_{i,j}\langle [Y_1, X_i]_{\hg}, X_j \rangle \langle [Y_\zg, X_i]_{\hg}, X_j \rangle \\
& + \unc \sum_{i,j} \langle [X_i,X_j]_{\hg}, Y_1\rangle \langle [X_i,X_j]_{\hg}, Y_\zg\rangle - \unm \tr \ad_\ug Y_1 \ad_\ug Y_\zg \\
= &\, \unc \sum_{i,j} \langle [X_i,X_j]_{\hg}, Y_1\rangle \langle [X_i,X_j]_{\hg}, Y - Y_1\rangle \\
= & -\unc \sum_{i,j} \langle [X_i,X_j]_{\hg}, Y_1\rangle^2 \leq 0, \end{align*} where $\{ X_i\}$ is an orthonormal basis for $\hg.$ Hence, we must have equality, and $Y_1 = 0$. Therefore, $\qg = \zg(\ug)$, and the proof is finished.
\end{proof}
\begin{remark}\label{rmk_centerorthogonal} The previous theorem holds more generally for expanding homogeneous Ricci solitons. More precisely, if $(M^n, g)$ is an expanding (i.e. $c<0$) homogeneous Ricci soliton, and $G$ is the full isometry group, then by \cite{Jbl} the soliton is \emph{semi-algebraic}. Therefore by \cite{alek} the homogeneous space $G/K$ satisfies all the nice properties stated in Theorem \ref{structure}, but possibly without the additional conditions proven in \cite{JblPtr} for Einstein spaces (namely, $G_1$ might have compact simple factors, and the action of $\zg(\ug)$ on $\ngo$ might not be by symmetric endomorphisms). Nevertheless, Lemma 3.5 from \cite{JblPtr} still assures that by the compatibility condition \eqref{eqmmtheta} one has that the family $\theta(\zg(\ug)) \subset \End(\ngo)$ consists of normal operators, whose transposes commute with all of $\theta(\ug)$. Thus, one can also consider the decomposition \eqref{rootdec} as in the proof of Theorem~ \ref{thm_lemadimn}, and proceed in exactly the same way to conclude that $\zg(\ug) \perp \ggo_1$. \end{remark}
As a quick application we get an alternative proof of the following result of Jablonski \cite{Jbl13b}.
\begin{corollary}\label{cor_algebraic} Homogeneous Ricci solitons are algebraic. \end{corollary}
\begin{proof} As is well-known, the only non-trivial examples (that is, not locally isometric to the product of an Einstein homogeneous space and a flat factor $\RR^k$) occur in the expanding case (see the discussion in \cite[\S 2]{solvsolitons} and the references therein for more details). Let $(M,g)$ be an expanding homogeneous Ricci soliton. For the presentation $G/K$, where $G$ is the full isometry group, we have by Theorem \ref{thm_lemadimn} and Remark \ref{rmk_centerorthogonal} that $\zg(\ug) \perp \ggo_1$. Now recall that the mean curvature vector $H \in \hg \subset \ug$ is always orthogonal to $\ggo_1 = [\ug,\ug]$, since any representation of a semi-simple Lie algebra consists of traceless endomorphisms. Thus, $H \in \zg(\ug)$, and in particular \[
S(\ad H|_\hg) = 0. \] By applying Proposition 4.14 from \cite{alek} we conclude that the soliton is indeed algebraic. \end{proof}
Another application of our new structural results is the reduction of the classification problem (in the non-unimodular case) to the so called ``rank one'' case (cf.~ \cite[Theorem D]{Heb}).
\begin{corollary}\label{cor_rankone} Let $(M^n, g)$, $G/K$ be as in Theorem \ref{thm_lemadimn}, with $G$ non-unimodular. Consider $U_0$, $G_0$ the connected Lie subgroups of $U$, $G$ with Lie algebras $\ug_0 := [\ug,\ug]\oplus \RR H \subset \ug$, $\ggo_0 := \ug_0 \oplus \ngo$, respectively. Then, there is a diffeomorphism \[
M^n \simeq \RR^a \times G_0 / K, \qquad a = \dim Z(U) - 1, \] and the induced $G_0$-invariant metric on $G_0/K$ is Einstein with the same Einstein constant $c<0$ as $g$.
\end{corollary}
\begin{proof} Recall the following formula for the Ricci curvature of a homogeneous space, whose proof follows immediately from the proof of Proposition 6.1 in \cite{alek}. \begin{lemma} Let $(G/K,g)$ be a Riemannian homogeneous space with reductive decomposition $\ggo = \kg \oplus \pg$, and assume there exists $X\in \pg$ such that $[H,X] = 0$, and the subspace $\tilde\ggo := \{X\}^\perp$ is a codimension-one ideal of $\ggo$ that contains $H$ and $\kg$. Let $\widetilde G$ be the connected Lie subgroup of $G$ with Lie algebra $\tilde\ggo$, and consider the induced metric on the orbit $\widetilde G \cdot (eK) \simeq \widetilde G / K$. Then, the corresponding Ricci operators satisfy \[
\ricci_{G/K} |_{ \, \, \tilde \pg} = \ricci_{\widetilde G / K} + \unm \left[A,A^t\right], \]
where $\tilde \pg = \pg \cap \tilde\ggo$ and $A := \ad X|_{\tilde \pg} \in \End(\pg)$. \end{lemma}
Theorem \ref{thm_lemadimn} implies that $H\in \zg(\ug)$, and that any $X \in \zg(\ug)$ with $X \perp H$ satisfies the conditions of the above lemma. Moreover, the corresponding endomorphism $A$ is symmetric by Theorem~ \ref{structure},~ (v), thus the term $\unm [A,A^t]$ in the formula vanishes. By applying the lemma to any such $X$ we obtain a codimension-one submanifold $\tilde G / K$ in $G/K$ which with the induced metric is Einstein, with the same Einstein constant as $G/K$. Since the spaces are simply-connected, as differentiable manifolds we have that $G/K \simeq \RR \times \tilde G/ K$. After applying this procedure $a$ times, where $a = \dim Z(U)-1$, the corollary follows.
\end{proof}
To conclude this section we prove the following simple but useful formula for the Ricci curvature of a homogeneous space, which is in some way a generalization of \cite[Lemma 2.3]{Mln}.
\begin{lemma}\label{lem_formulaRicci} Let $(U/K,g)$ be a Riemannian homogeneous space with $U$ a unimodular Lie group, and consider a reductive decomposition $\ug = \kg \oplus \mg$. If $X, Y\in \mg$ are such that $[\kg,X]= [\kg,Y] = 0$, then \[
\Ricci(X,Y) = \unc \sum_{i,j} \langle [X_i,X_j]_\mg, X\rangle \langle [X_i,X_j]_\mg, Y\rangle - \tr S(\ad_\mg X) S(\ad_\mg Y), \] where $\{ X_i\}$ is any orthonormal basis for $\mg$ (here, $\ad_\mg X \in \End(\mg)$ stands for the restriction of $\ad X$ to $\mg$, projected onto $\mg$). Moreover, if $Y$ is orthogonal to the commutator ideal $[\ug,\ug]$ (i.e.~ to its projection onto $\mg$), then \[
\Ricci(X,Y) = - \tr S(\ad_\mg X) S(\ad_\mg Y), \qquad \forall \, X\in \mg \mbox{ such that } [\kg,X]=0. \] \end{lemma}
\begin{proof} From the formula \cite[7.38]{Bss} for the Ricci curvature of a homogeneous space, and using that $\ug$ is unimodular, we see that \begin{align*}
\Ricci(X,Y) =& -\unm \sum_{i,j} \langle [X,X_i]_\mg, X_j \rangle \langle [Y,X_i]_\mg, X_j \rangle \\
& + \unc \sum_{i,j} \langle [X_i,X_j]_\mg, X\rangle \langle [X_i,X_j]_\mg, Y\rangle - \unm B(X,Y) \\
=& -\unm \tr \left(\ad_\mg X\right) \left(\ad_\mg Y \right)^t - \unm \tr (\ad X)(\ad Y) \\
& + \unc \sum_{i,j} \langle [X_i,X_j]_\mg, X\rangle \langle [X_i,X_j]_\mg, Y\rangle, \end{align*} where $\{ X_i\}$ is an orthonormal basis for $\mg$. Notice that conditions $[\kg,X] = 0$ and $[\kg,Y] =~ 0$ imply that $\tr(\ad X)(\ad Y) = \tr(\ad_\mg X)(\ad_\mg Y)$. Then, the first formula follows. If moreover $Y\perp [\ug,\ug]_\mg$, then it is easy to see that $[\kg,Y]=0$, so the first formula applies, and the sum term in it disappears. \end{proof}
\section{Semisimple transitive group}\label{semisimple}
The main purpose of this section is to prove the following \begin{theorem}\label{thmsemisimple} Let $G$ be a semisimple Lie group and consider a homogeneous Einstein space $\left(G/H,g\right)$ which is de Rham irreducible. Assume that $\dim G/H \leq 8$, $\dim H \geq 1$ and that $G/H \neq$ $(\Sl_2(\RR) \times \Sl_2(\CC))\slash \Delta \U(1)$. Then, $(G/H,g)$ is an irreducible symmetric space of the non-compact type. \end{theorem}
The proof will follow from a case-by-case analysis. We warn the reader that, in contrast with the rest of the article, throughout this section the group $G$ will always be a semisimple Lie group.
\begin{definition}\label{defsshomogspace} We call a homogeneous space $G/H$ \emph{semisimple of the non-compact type} if $G$ is a semisimple Lie group without compact simple factors. \end{definition}
In view of Theorem \ref{structure}, we are reduced to studying the cases where $G/H$ is semisimple of the non-compact type. Moreover, we may restrict ourselves to the simply-connected case. Indeed, the universal cover of an Einstein manifold is still Einstein, and it is a classical result that symmetric spaces of the non-compact type do not admit non-trivial homogeneous quotients~ \cite{Car27}.
Following \cite{Al11, Nkn1}, we use the duality between compact and non-compact symmetric spaces to obtain the classification of semisimple homogeneous spaces of the non-compact type from the classification of compact homogeneous spaces in low dimensions (\cite{BhmKrr}), as follows:
Given $G/H$ a semisimple homogeneous space of the non-compact type, let $\ggo = \ggo_1 \oplus \ldots \oplus \ggo_s$ be the decomposition of $\ggo$ into simple ideals --which are all of the non-compact type-- let $\kg\subseteq \ggo$ be a maximal compactly embedded subalgebra such that $\hg \subseteq \kg$, and for each $i=1,\ldots,s$ let $\kg_i = \ggo_i \cap \kg$, which is a maximal compactly embedded subalgebra of $\ggo_i$. The pairs $(\ggo_i, \kg_i)$ are symmetric pairs of the non-compact type (at the Lie algebra level), and its corresponding dual symmetric pairs $(\hat\ggo_i, \hat\kg_i)$ are of the compact type. If $\hat \ggo := \hat \ggo_1 \oplus \ldots\oplus \hat\ggo_s$, $\hat \kg := \hat\kg_1 \oplus \ldots \oplus \hat\kg_s$, then $\hat \kg$ and $\kg$ are isomorphic Lie algebras, and via this isomorphism we can consider the subalgebra $\hat \hg \subseteq \hat \kg$ corresponding to $\hg \subseteq \kg$. The effective homogeneous space $\hat G/ \hat H$ associated with $\hat \ggo, \hat\hg$ is compact.
Therefore, in order to obtain all possible spaces $G/H$ as above one can argue as follows: \begin{itemize}
\item Classify all compact homogeneous spaces in ``canonical presentation'' (in the sense of \cite{BhmKrr}).
\item For each compact homogeneous space $(\hat G/ \hat H)$ in the previous classification, consider all possible compact symmetric pairs $(\hat \ggo,\hat\kg)$ with $\hat \hg \subseteq \hat \kg$, where $\Lie(\hat G) = \hat\ggo$, $\Lie(\hat H) = \hat \hg$ (there may be none at all).
\item For each such pair, let $(\ggo,\kg)$ be its dual, obtained by dualizing each simple factor to its non-compact counterpart. The isomorphism $\kg \simeq \hat \kg$ defines a subalgebra $\hg\subseteq \kg$ isomorphic to $\hat \hg$, and from $\ggo, \hg$ one obtains a non-compact homogeneous space $G/H$ as desired. \end{itemize}
We note that if a non-compact $G/H$ is obtained from a compact $\hat G/ \hat H$, then the Lie groups $H$ and $\hat H$ are isomorphic, and moreover the isotropy representations are equivalent. \begin{remark} To obtain the classification of all non-compact homogeneous spaces with semisimple transitive group (i.e.~ taking into account that $G$ may have compact simple factors), the duality procedure works in the very same way. One only needs to dualize the symmetric pairs which are of the non-compact type. \end{remark}
We give in Table \ref{tabla} the classification of simply-connected, semisimple homogeneous spaces of the non-compact type (cf.~ Definition \ref{defsshomogspace}), together with their corresponding compact duals, the compact symmetric space used in each case for the dualization procedure, and the decomposition of the isotropy representation into irreducible summands. Notice also that, for notational purposes, some of the non-compact spaces in the table are not simply connected, but still they are to be read as their universal covers. Symmetric spaces are not included, since a list of all irreducible symmetric spaces can be found for instance in \cite[p.~200]{Bss}. We also do not include cases which are products of lower-dimensional homogeneous spaces, unless the space admits non-product invariant metrics (see Proposition \ref{prodRiem} below). Our notation follows that of \cite{Bss}, with the only exception of $\SU(1,1)$, which we call $\Sl_2(\RR)$.
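As a concrete illustration of the duality procedure (our example, to be compared with the $5$-dimensional entries of Table \ref{tabla}): starting from the compact homogeneous space $\hat G/\hat H = \SU(3)/\SU(2)$ and the compact symmetric space $\hat G/\hat K = \SU(3)/\U(2)$, the dual symmetric pair is the one of the non-compact symmetric space $\SU(2,1)/\U(2)$; transporting $\hat \hg = \mathfrak{su}(2) \subseteq \hat \kg$ along the isomorphism $\kg \simeq \hat\kg$ yields the subgroup $\SU(2) \subseteq \U(2) \subseteq \SU(2,1)$, and hence the $5$-dimensional non-compact space $G/H = \SU(2,1)/\SU(2)$ appearing in the table.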
\afterpage{
\thispagestyle{empty}
\begin{landscape}
\scriptsize
\centering
\begin{tabular}{|>{\hspace{-3pt}}c<{\hspace{-3pt}}|c|c|c|c|>{\hspace{-3pt}}c<{\hspace{-3pt}}|}
\hline
$\dim$ & $\hat G/ \hat H$ (compact) & $\hat G/ \hat K$ (symmetric) & $G/H$ (noncompact) & Isotropy representation & Note \\
\hline
\multirow{1}{*}{$3$} & $\SU(2)/\{\hbox{id}\}$ & $\SU(2)/\U(1)$ & $\Sl_2(\RR)/\{\hbox{id}\}$ & Lie group & \\ \cline{2-6}
\hline
\multirow{4}{*}{$5$} & \multirow{3}{*}{$(\SU(2)\times\SU(2))/\Delta_{p,q}\U(1)$} & $(\SU(2){\times} \SU(2))/\Delta \SU(2)$ & $\Sl_2(\CC)/\U(1)$ & $\qg_1^{(2)}\oplus \pg_0^{(1)}\oplus \pg_1^{(2)}, \, \qg_1 \simeq \pg_1$ & \\ \cline{3-6}
& & \multirow{2}{*}{$\SU(2)/\U(1) {\times} \SU(2)/\U(1)$} & \multirow{2}{*}{$(\Sl_2(\RR)\times \Sl_2(\RR))/\Delta_{p,q} \SO(2)$} & $\qg_0^{(1)}\oplus\pg_1^{(2)} \oplus \pg_2^{(2)},$ & \multirow{2}{*}{\ref{S2xS3}} \\
& & & & $\pg_1\simeq \pg_2 \iff p=q $ & \\ \cline{2-6}
& $\SU(3)/\SU(2)$ & $\SU(3)/\U(2)$ & $\SU(2,1)/\SU(2)$ & $\qg_0^{(1)}\oplus\pg_1^{(4)}$ & \\ \cline{2-6}
\hline
\multirow{5}{*}{$6$} & \multirow{2}{*}{$(\SU(2) \times \SU(2))/\{\hbox{id}\}$} & $(\SU(2)\times \SU(2))/\Delta \SU(2)$ & $\Sl_2(\CC)/\{\hbox{id}\}$ & Lie group & \\ \cline{3-6}
& & $\left(\SU(2)/\U(1)\right)^2$ & $(\Sl_2(\RR)\times \Sl_2(\RR))/\{\hbox{id}\}$ & Lie group & \\ \cline{2-6}
& $\Spe(2)/\Spe(1) \U(1)$ & $\Spe(2)/\Spe(1) \Spe(1)$ & $\Spe(1,1)/\Spe(1) \U(1)$ & $\qg_1^{(2)} \oplus \pg_1^{(4)}$ & \\ \cline{2-6}
& $G_2/\SU(3)$ & - & - & Irreducible & \ref{isotropyirred} \\ \cline{2-6}
& $\SU(3)/T^2$ & $\SU(3)/\U(2)$ & $\SU(2,1)/T^2$ & $\qg_1^{(2)} \oplus \pg_1^{(2)} \oplus \pg_2^{(2)}$ & \\ \cline{2-6}
\hline
\multirow{14}{*}{$7$} & $\Spein(7)/G_2$ & - & - & Irreducible & \ref{isotropyirred} \\ \cline{2-6}
& $\Spe(2)/\SU(2)$ & - & - & Irreducible & \ref{isotropyirred} \\ \cline{2-6}
& $\SU(4)/\SU(3)$ & $\SU(4)/\U(3)$ & $\SU(3,1)/\SU(3)$ & $\qg_0^{(1)}\oplus\pg_1^{(6)}$ & \\ \cline{2-6}
& $\Spe(2)/\Spe(1)$ & $\Spe(2)/\Spe(1)\Spe(1)$ & $\Spe(1,1)/\Spe(1)$ & $\qg_0^{(3)}\oplus\pg_1^{(4)}$ & \\ \cline{2-6}
& \multirow{2}{*}{$\SO(5)/\SO(3)$} & $\SO(5)/\SO(4)$ & $\SO(4,1)/\SO(3)$ & $\qg_1^{(3)}\oplus \pg_0^{(1)}\oplus \pg_1^{(3)}$, $\qg_1 \simeq \pg_1$ & \\ \cline{3-6}
& & $\SO(5)/\SO(3)\SO(2)$ & $\SO(3,2)/\SO(3)$ & $\qg_0^{(1)}\oplus\pg_1^{(3)}\oplus\pg_2^{(3)}$, $\pg_1 \simeq \pg_2$ & \\ \cline{2-6}
& \multirow{4}{*}{$\SU(3)/\Delta_{p,q}\U(1)$} & \multirow{3}{*}{$\SU(3)/\U(2)$} & \multirow{3}{*}{$\SU(2,1)/\Delta_{p,q}\U(1)$} & $\qg_0^{(1)} \oplus \qg_1^{(2)} \oplus \pg_1^{(2)} \oplus \pg_2^{(2)}$, & \\
& & & & $\pg_1\simeq \pg_2 \iff p=q=1,$ & \ref{Aloff-Wallach} \\
& & & & $\qg_1 \simeq \pg_1 \iff p=0, q=1 $ & \\ \cline{3-6}
& & $\SU(3)/\SO(3)$ & $\Sl_3(\RR)/\SO(2)$ & $\qg_1^{(2)}\oplus\pg_0^{(1)}\oplus\pg_1^{(2)}\oplus\pg_2^{(2)}, \, \qg_1\simeq\pg_1$ & \\ \cline{2-6}
&$(\SU(3){\times}\SU(2))/\Delta_{p,q}\U(1)(\SU(2){\times}\{\hbox{id}\})$ & $\SU(3)/\U(2) {\times} \SU(2)/\U(1)$ & $(\SU(2,1){\times}\Sl_2(\RR))/\Delta_{p,q}\U(1)(\SU(2){\times}\{\hbox{id}\})$ & $\qg_0^{(1)} \oplus \pg_1^{(4)} \oplus \pg_2^{(2)} $ & \ref{S2xS3} \\ \cline{2-6}
& $(\SU(2)\times \SU(2) \times \SU(2))/\Delta_{a,b,c}T^2$ & $\left(\SU(2)/\U(1)\right)^3$ & $(\Sl_2(\RR)\times \Sl_2(\RR) \times \Sl_2(\RR))/\Delta_{a,b,c}T^2$ & $\qg_0^{(1)} \oplus \pg_1^{(2)}\oplus\pg_2^{(2)}\oplus\pg_3^{(2)}$ & \\
\hline
\multirow{12}{*}{$8$} & \multirow{2}{*}{$\SU(3)/\{\hbox{id}\}$} & $\SU(3)/\U(2)$ & $\SU(2,1)/\{\hbox{id} \}$ & Lie group & \\ \cline{3-6}
& & $\SU(3)/\SO(3)$ & $\Sl_3(\RR)/\{\hbox{id} \}$ & Lie group & \\ \cline{2-6}
& \multirow{2}{*}{$\Spe(2)/T^2$} & $\Spe(2)/\U(2)$ & $\Spe(2,\RR)/T^2$ & $\qg_1^{(2)}\oplus \pg_1^{(2)}\oplus \pg_2^{(2)}\oplus \pg_3^{(2)}$ & \\ \cline{3-6}
& & $\Spe(2)/\Spe(1)\Spe(1)$ & $\Spe(1,1)/T^2$ & $\qg_1^{(2)}\oplus \qg_2^{(2)}\oplus \pg_1^{(2)}\oplus \pg_2^{(2)}$ & \\ \cline{2-6}
& \multirow{4}{*}{$(\SU(2)\times\SU(2)\times\SU(2))/\Delta_{a_1,a_2,a_3}\U(1)$} & \multirow{2}{*}{$\SU(2)/\U(1){\times}\left(\SU(2)^2/\Delta\SU(2) \right)$}& \multirow{2}{*}{$(\Sl_2(\RR)\times\Sl_2(\CC))/\Delta_{p,q} \U(1)$} & $\qg_0^{(1)}\oplus \qg_1^{(2)}\oplus \pg_0^{(1)} \oplus \pg_1^{(2)} \oplus \pg_2^{(2)}, $ & \multirow{4}{*}{\ref{a1a2a3}} \\
& & & & $ \,\qg_1\simeq\pg_1, \pg_1\simeq \pg_2 \iff p=q$ & \\ \cline{3-5}
& & \multirow{2}{*}{$(\SU(2)/\U(1))^3$} & \multirow{2}{*}{$(\Sl_2(\RR)\times\Sl_2(\RR)\times\Sl_2(\RR))/\Delta_{a_1,a_2,a_3}\U(1)$} & $\qg_0^{(2)} \oplus \pg_1^{(2)} \oplus \pg_2^{(2)} \oplus \pg_3^{(2)},$ & \\
& & & & $\pg_i\simeq \pg_j \iff a_i=a_j$ & \\ \cline{2-6}
& \multirow{3}{*}{$\SU(2)\times(\SU(2)\times \SU(2))/\Delta_{p,q}\U(1)$} & $\SU(2)/\U(1){\times}\left(\SU(2)^2/\Delta\SU(2) \right)$ & $\Sl_2(\RR) \times \Sl_2(\CC)/ \U(1)$ & $\qg_0^{(1)}\oplus \qg_1^{(2)} \oplus \pg_0^{(3)} \oplus \pg_1^{(2)},\, \qg_1 \simeq \pg_1$ & \\ \cline{3-6}
& & \multirow{2}{*}{$(\SU(2)/\U(1))^3$} & \multirow{2}{*}{$\Sl_2(\RR)\times(\Sl_2(\RR)\times\Sl_2(\RR))/\Delta_{p,q}\U(1)$} & $\qg_0^{(2)} \oplus \pg_0^{(2)} \oplus \pg_1^{(2)}\oplus \pg_2^{(2)}$ & \multirow{2}{*}{\ref{S2xS3}} \\
& & & & $\pg_1\simeq \pg_2 \iff p = q$ & \\ \cline{2-6}
& $\SU(2)\times (\SU(3)/\SU(2))$ & $\SU(2)/\U(1) \times \SU(3)/\U(2)$ & $\Sl_2(\RR) \times \SU(2,1)/\SU(2)$ & $\qg_0^{(2)}\oplus \pg_0^{(2)} \oplus \pg_1^{(4)} $ & \\ \cline{2-6}
\hline
\end{tabular}
\captionof{table}{Non-symmetric, non-product, non-compact homogeneous spaces with semisimple transitive group without compact simple factors, and their corresponding compact duals, in dimensions less than or equal to $8$.}
\label{tabla}
\end{landscape}
}
Regarding the list of compact homogeneous spaces in canonical presentation, we refer the reader to \cite{BhmKrr}. All the embeddings of the isotropy subgroup are clear once the corresponding compact symmetric space used for the dualization is taken into account. The precise meaning of the parameters corresponding to abelian subgroups in the isotropy may be found in \cite[\S~1]{Nkn04}.
The information on the isotropy representation is to be understood as follows: for a space $G/H$, consider $\hg \subseteq \kg \subseteq \ggo$ as above, where $\kg$ is a maximal compactly embedded subalgebra with corresponding connected subgroup $K$. Take the corresponding Cartan decomposition $\ggo = \kg \oplus \pg$ (cf.~\cite[pp.~182]{Helgason}), and let $\qg$ be an $\Ad(K)$-invariant complement for $\hg$ in $\kg$. Setting $\mg = \qg \oplus \pg$, we obtain a reductive decomposition $\ggo = \hg \oplus \mg$ for the homogeneous space $G/H$. Whenever we write $\sum_i \qg_i^{(a_i)} \oplus \sum_j \pg_j^{(b_j)}$ we mean that \[
\qg = \sum_i \qg_i^{(a_i)}, \qquad \pg = \sum_j \pg_j^{(b_j)}, \qquad \dim \qg_i^{(a_i)} = a_i, \quad \dim \pg_j^{(b_j)} = b_j, \] and for $i,j \neq 0$, each summand $\qg_i^{(a_i)}$, $\pg_j^{(b_j)}$ is an irreducible $\Ad(H)$-module, where any two such modules are inequivalent unless otherwise stated. The $0$ sub-index stands for trivial modules (i.e.~ $[\hg,\qg_0^{(a_0)}] = [\hg,\pg_0^{(b_0)}] = 0$).
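For instance, the entry $\qg_0^{(1)}\oplus\pg_1^{(4)}$ for the space $\SU(2,1)/\SU(2)$ in Table \ref{tabla} means that $\qg$ is a one-dimensional trivial $\Ad(\SU(2))$-module while $\pg$ is a single irreducible module of dimension $4$, so that $\dim \mg = 5 = \dim \SU(2,1)/\SU(2)$.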
\noindent \emph{Notes on Table \ref{tabla}:}
\begin{enumerate}[(a)]
\item\label{Aloff-Wallach} $p,q\in \ZZ$, $0\leq p \leq q $, $\operatorname{gcd}(p,q) = 1$. See \cite{Wng82}.
\item\label{S2xS3} $p,q\in \ZZ-\{0\}$, $p\leq q$, $\operatorname{gcd}(p,q) = 1$.
\item\label{isotropyirred} These compact spaces are isotropy irreducible but non-symmetric (see \cite[pp.~ 203]{Bss}). Clearly, they do not admit any non-compact counterpart.
\item\label{a1a2a3} $a_1,a_2,a_3 \in \ZZ-\{0\}$, $a_1\leq a_2 \leq a_3$, $\operatorname{gcd}(a_1,a_2,a_3) = 1$ (the order may be assumed up to equivariant diffeomorphism, by using outer automorphisms given by the Weyl group; the parameters are all nonzero since otherwise the space splits as a product, and these are considered as separate cases). The space $(\Sl_2(\RR) \times \Sl_2(\CC))\slash \Delta_{p,q} \U(1)$ is obtained only when $a_2 = a_3$. For convenience, we have renamed the parameters as $p=a_1$, $q = a_2 = a_3$. \end{enumerate}
Recall that by \cite[Theorem 1]{Nkn2}, if a $G$-invariant metric makes the chosen Cartan decomposition orthogonal (i.e.~it is such that $\langle \qg, \pg \rangle = 0$), then $(G/H,g)$ is not Einstein. In particular, if we consider a decomposition of $\mg$ into irreducible $\Ad(H)$-modules given by $\qg = \qg_1 \oplus \ldots \oplus \qg_u$, $\pg = \pg_1 \oplus \ldots \oplus \pg_v$ (recall that $\qg$ and $\pg$ are $\Ad(H)$-invariant), and none of the $\qg_i$ is equivalent to any of the $\pg_j$, then every $G$-invariant metric on $G/H$ satisfies $\langle \qg,\pg\rangle=0$, and thus none of them is Einstein \cite[Corollary]{Nkn2}. It may be the case that a single Cartan decomposition is not orthogonal with respect to \emph{every} $G$-invariant metric, and still every metric makes \emph{some} Cartan decomposition orthogonal (recall that a Cartan decomposition is only unique up to the action of inner automorphisms). In \cite[Theorem~ 2]{Nkn2}, necessary and sufficient conditions are given for this to happen.
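For instance, for $\SU(2,1)/T^2$ the isotropy modules $\qg_1^{(2)}$, $\pg_1^{(2)}$, $\pg_2^{(2)}$ in Table \ref{tabla} are pairwise inequivalent, so every $\SU(2,1)$-invariant metric satisfies $\langle \qg,\pg\rangle = 0$ and none of them is Einstein.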
The following result is well-known, but we include a proof of it for the sake of completeness.
\begin{proposition}\label{prodRiem} Let $G_1\slash H_1$, $G_2\slash H_2$ be two homogeneous spaces such that the isotropy representation of $G_1/H_1$ acts non-trivially on every invariant subspace. Then, any $\left(G_1 \times G_2\right)$-invariant metric on $\left(G_1 \times G_2\right)\slash \left(H_1 \times H_2\right)$ is a Riemannian product of invariant metrics on each factor. \end{proposition} \begin{proof}
If $\ggo_1=\hg_1\oplus\mg_1$ and $\ggo_2=\hg_2\oplus\mg_2$ are reductive decompositions of $G_1/H_1$ and $G_2/H_2$ respectively, then $\ggo_1\oplus\ggo_2=(\hg_1\oplus\hg_2)\oplus(\mg_1\oplus\mg_2)$ is a reductive decomposition of $G_1 \times G_2/H_1 \times H_2.$ Let $\pg_i\subseteq \mg_i$ be $\ad(\hg_i)$-irreducible subspaces, $i=1,2$. We know that $\ad(\hg_1)|_{\pg_1}$ is non-trivial. If there was an intertwining operator $T:\pg_1\rightarrow\pg_2$, i.e, \[
T\circ\ad(Z)|_{\pg_1}=\ad(Z)|_{\pg_2}\circ T, \quad \mbox{for all } Z\in\hg_1\oplus\hg_2, \]
we could take $Z=(Z_1,0) \in \hg_1\oplus\hg_2$ and would have that $T\circ\ad(Z_1)|_{\pg_1}=0,$ for all $Z_1 \in \hg_1,$ so $\ad(Z_1)|_{\pg_1}=0$, for all $Z_1 \in \hg_1,$ which is a contradiction. \end{proof}
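For example, since the isotropy representation of the hyperbolic plane $\Sl_2(\RR)/\SO(2)$ is irreducible and non-trivial, any $\left(\Sl_2(\RR)\times G_2\right)$-invariant metric on $\Sl_2(\RR)/\SO(2) \times G_2/H_2$ is a Riemannian product of invariant metrics on the two factors, for any homogeneous space $G_2/H_2$.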
We are now in a position to start the case-by-case analysis.
\subsection{$\dim G/H \leq 7$}$ $
After having computed all the isotropy representations, we see that in most of the spaces of dimension up to $7$ in Table \ref{tabla}, the Cartan decomposition we have chosen is such that $\qg$ and $\pg$ share no equivalent modules, and thus these spaces admit no $G$-invariant Einstein metric. The exceptions are the following: \[
\Sl_2(\CC)/\U(1), \quad \SO(4,1)/\SO(3), \quad \SU(2,1)/\Delta_{p,q}\U(1), \quad \Sl_3(\RR)/\SO(2). \] Non-existence of homogeneous Einstein metrics on $\Sl_3(\RR)/\SO(2)$ was established in \cite[~Example 4]{Nkn2} by finding, for every $G$-invariant metric, a suitable Cartan decomposition which is orthogonal. By applying the same methods and a straightforward computation, it can be shown that the space $\SO(4,1)/\SO(3)$ also satisfies the hypotheses of \cite[Theorem 2]{Nkn2}, and hence it admits no homogeneous Einstein metric.
\subsubsection{$\Sl_2(\CC)/\U(1)$}\label{sectionsl2C}
Unfortunately, this space admits invariant metrics for which there is no orthogonal Cartan decomposition.
Consider the following ordered basis $\mathcal{B}$ for $\slg_2(\CC)$ \begin{align}\label{matricessl2C}
Z &= \twomatrix{i}{0}{0}{-i}, \quad Y_0 = \twomatrix{1}{0}{0}{-1}, \quad Y_1 = \twomatrix{0}{1}{1}{0},\\
Y_2 &= \twomatrix{0}{i}{-i}{0}, \quad X_1 = \twomatrix{0}{1}{-1}{0}, \quad X_2 = \twomatrix{0}{i}{i}{0}.\nonumber \end{align}
The isotropy subalgebra is given by $\hg = \RR Z$, $\mg = \operatorname{span}_\RR \{ Y_0, Y_1, Y_2, X_1, X_2\}$ is a reductive complement, and it decomposes into irreducible modules as $\mg = \pg_0 \oplus \pg_1 \oplus \qg_1$, where $\pg_0 = \RR Y_0$, $\pg_1 = \RR Y_1 \oplus \RR Y_2$, $\qg_1 = \RR X_1 \oplus \RR X_2$. Also, if $\kg = \hg \oplus \qg_1 \simeq \sug(2)$, $\pg = \pg_0 \oplus \pg_1$, then $\slg_2(\CC) = \kg \oplus \pg$ is a Cartan decomposition. Let us fix an inner product $\ip_B$ on $\slg_2(\CC)$ that makes $\mathcal{B}$ orthonormal (this inner product is, up to a scalar multiple, the one given by the Killing form of $\slg_2(\CC)$, after reversing its sign on the subalgebra~ $\kg$). Finally, let $\ip_0 = \ip_B \big|_{\mg \times \mg}$, which is of course $\Ad(\U(1))$-invariant.
\begin{lemma}\label{lemmasl2C} Up to isometry, $\Sl_2(\CC)$-invariant metrics on $\Sl_2(\CC)\slash\U(1)$ can be parameterized by $\Ad(\U(1))$-invariant inner products on $\mg$ of the form \[
\langle \cdot, \cdot \rangle_h = \langle h\, \cdot\, , h\, \cdot \rangle_0, \] where $h\in \Gl_5(\RR)$ is given by \[
h = \left[\begin{array}{ccccc} e &0 &0 &0 & 0\\ 0& a &0 &0 &0 \\0 & 0& a & 0&0 \\0 &0 & -d& b &0 \\ 0&d &0 &0 & b \end{array}\right], \qquad a,b,d,e \in \RR, \quad a,b,e\neq 0. \] Moreover, for each such metric, the Ricci curvature satisfies \[
\Ricci(h^{-1 }Y_1, h^{-1} X_2) = 4 \, d \cdot \left( (a^2 - e^2)^2 + a^2(b^2 + d^2) \right)a^{-3} b^{-2} e^{-2}. \] \end{lemma}
\begin{proof} Since the modules $\pg_1$ and $\qg_1$ are the only equivalent modules, and they are of complex type, it is clear that the metrics are parameterized by inner products $\ip_h$ on $\mg$, where $h$ is as in the statement, but with a $2\times 2$ block of the form $\minimatrix{c}{-d}{d}{c}$ mapping $\pg_1$ to $\qg_1$. Using that $\ip_h$ and $\ip_{h \cdot T}$ give rise to isometric metrics for any $T = \Ad\left(\exp{t Y_0}\right) \in \Aut(\slg_2(\CC))$, it is easy to find $t$ so that the matrix $h \cdot T$ has the desired form.
The formula for the Ricci curvature follows from a routine (though somewhat lengthy) computation. \end{proof}
The importance of the previous formula for the Ricci curvature is that this off-diagonal entry vanishes if and only if $d=0$ (recall also that $\langle h^{-1} Y_1, h^{-1} X_2 \rangle_h = 0$). But if $d=0$ then the Cartan decomposition is orthogonal, and the metric is non-Einstein.
\subsubsection{$\SU(2,1)/\Delta_{p,q}\U(1)$} These spaces are the non-compact analogues of the well-known Aloff--Wallach spaces \cite{AW75}. As long as $p\neq 0$, the Cartan decomposition is orthogonal with respect to any $\SU(2,1)$-invariant metric, hence none of them is Einstein by \cite{Nkn2}. However, the space corresponding to $p=0$, $q=1$ admits $\SU(2,1)$-invariant metrics which make no Cartan decomposition orthogonal. Let us have a closer look at the Lie algebra $\sug(2,1)$: an $\Ad(\Delta_{0,1}\U(1))$-invariant decomposition is given by $\sug(2,1) = \hg_{0,1} \oplus \qg_0 \oplus \qg_1 \oplus \pg_1 \oplus \pg_2$, where \begin{align*}
\hg_{0,1} &= \RR \threematrix{0}{0}{0}{0}{i}{0}{0}{0}{-i}, \quad \qg_0 = \RR \threematrix{2i}{0}{0}{0}{-i}{0}{0}{0}{-i},
\quad \qg_1 = \left\{ \threematrix{0}{z}{0}{-\bar{z}}{0}{0}{0}{0}{0} : \, z\in \CC \right\}, \\
& \pg_1 = \left\{ \threematrix{0}{0}{z_1}{0}{0}{0}{\bar{z_1}}{0}{0} : \, z_1\in \CC \right\}, \quad
\pg_2 = \left\{ \threematrix{0}{0}{0}{0}{0}{z_2}{0}{\bar{z_2}}{0} : \, z_2\in \CC \right\}, \end{align*} and the modules $\qg_1$ and $\pg_1$ are equivalent. Any invariant metric would then make the subspaces $\qg_0$, $\pg_2$ and $\qg_1\oplus \pg_1$ orthogonal. But observe that $\ad(\qg_0)$ acts trivially on $\qg_0$ and $\pg_2$, and it acts precisely as the isotropy $\hg_{0,1}$ on $\qg_1\oplus \pg_1$. This immediately implies that for any invariant metric, $\ad(\qg_0)$ consists of skew-symmetric endomorphisms, and hence by Lemma \ref{lem_formulaRicci} the Ricci curvature is non-negative in these directions. Therefore, $\SU(2,1)\slash \Delta_{0,1}\U(1)$ admits no invariant metrics of negative Ricci curvature.
\subsection{$\dim G/H = 8$} $ $
The first two spaces of dimension $8$ in Table \ref{tabla} are Lie groups and will be omitted. The next two cases correspond to homogeneous spaces $G/H$ where $\rank G = \rank H$. As is well known, this implies that the isotropy representation decomposes as a sum of pairwise inequivalent modules. Clearly, any Cartan decomposition will be orthogonal with respect to any $G$-invariant metric, and thus no invariant metric on these spaces can be Einstein by \cite{Nkn2}. For the infinite family of homogeneous spaces $\left(\Sl_2(\RR) \times \Sl_2(\RR) \times \Sl_2(\RR)\right) / \Delta_{a_1,a_2,a_3} \U(1)$ (sixth line in the table), the isotropy representation may have some equivalent modules in some special cases, but they are all contained in the subspace $\pg$ of the Cartan decomposition. This implies that for any $G$-invariant metric one still has $\qg\perp\pg$, and none of them can be Einstein.
Let us now consider the following spaces: \[ \Sl_2(\RR)\times \Sl_2(\CC)/\U(1),\quad \Sl_2(\RR) \times \left(\Sl_2(\RR)\times \Sl_2(\RR)\right)/\Delta_{p,q} \U(1), \quad \Sl_2(\RR) \times \SU(2,1)/\SU(2). \] They are all of the form $\Sl_2(\RR) \times G_1 / H$, for some semisimple Lie group $G_1$. Notice that all of them admit metrics which are not Cartan-orthogonal for \emph{any} Cartan decomposition, and hence \cite{Nkn2} cannot be applied. Another problem that arises when studying the Einstein equation in these spaces is that whenever the isotropy representation of the space $G_1/H$ has some trivial modules, then the space $G/H$ admits non-product $G$-invariant metrics (cf.~ Proposition \ref{prodRiem}). However, there is still \emph{some} control on such trivial modules. Namely, an easy computation with Lie brackets shows that for the spaces under consideration we have $[\mg_0,\mg_0] \subseteq \hg$, where $\mg_0$ represents the trivial module in $G_1/H$. By looking at the Ricci curvature of $G/H$ at $eH$ in directions tangent to the orbit of $G_1$, and in directions orthogonal to this orbit, we were able to show that if the Ricci curvature preserves this orthogonality, then this forces the metric to be a product (clearly, for an Einstein metric such orthogonality would automatically be preserved by the Ricci curvature). Since $\Sl_2(\RR)$ does not admit any left-invariant Einstein metric, this proves that $G/H$ does not either.
\begin{proposition}\label{Propsl2RxG1} Let $G/H = \Sl_2(\RR)\times \left( G_1/ H\right)$ be a homogeneous space with $G_1$ semisimple, and assume that $N_{G_1}(H) / H$ is abelian. Then, $G/H$ admits no $G$-invariant Einstein metric. \end{proposition}
\begin{proof} Assume that there exists a $G$-invariant Einstein metric $g$ on $G/H$. Let $\ggo_1 = \hg \oplus \mg$ be an $\Ad(H)$-invariant decomposition, and further decompose $\mg = \mg_0 \oplus \mg_1$, where $\hg \oplus \mg_0 = \Lie(N_{G_1}(H))$. Then $\mg_0 = \{X\in \mg : [\hg,X] = 0 \}$ is the trivial $\Ad(H)$-module, $\mg_1$ is the sum of all non-trivial $\Ad(H)$-modules of $\mg$, and our assumption implies that $[\mg_0,\mg_0]\subseteq \hg$. By setting $\pg_0 = \slg_2(\RR) \oplus \mg_0$, $\pg = \pg_0 \oplus \mg_1$, we have a reductive decomposition for $G/H$ given by $\ggo = \hg \oplus \pg$, and $\pg_0$ corresponds to the trivial module. In particular, $\pg_0 \perp \mg_1$. Setting $\lgo := \mg_0^\perp \subseteq \pg_0$, we obtain the orthogonal decomposition \[
\pg = \rlap{$\overbrace{\phantom{\lgo\overset{\perp}\oplus\mg_0}}^{\pg_0}$} \lgo \overset{\perp}\oplus \underbrace{\mg_0\overset{\perp}\oplus\mg_1}_\mg. \] Our assumption $[\mg_0,\mg_0]\subseteq \hg$ implies that the following bracketing relations are satisfied: \begin{align}
[\hg,\pg_0] &= 0, & [\hg,\mg_1] &\subseteq \mg_1, & [\lgo,\lgo] &\subseteq [\pg_0,\pg_0] \subseteq \hg\oplus\pg_0, \label{bracketrelations}\\
[\lgo,\mg_1]&\subseteq \hg\oplus\mg, & [\mg_0,\pg_0]&\subseteq \hg, & [\mg,\mg_1]&\subseteq \hg\oplus \mg. \nonumber \end{align} Since $[\hg,\pg_0]=0$ we may use Lemma \ref{lem_formulaRicci} to obtain \[
\langle \Ricci X, Y \rangle = \unc \sum_{r,s} \langle [U_r, U_s]_\pg, X\rangle \langle [U_r, U_s]_\pg, Y\rangle - \unm \tr S\left( \ad_\pg X\right) S\left( \ad_\pg Y\right), \] where $X,Y \in \pg_0$ and $\{U_r\}$ is any orthonormal basis for $\pg$. Assume from now on that $X\in \mg_0$, $Y\in \lgo$, and that $\{ U_r\}$ is the union of orthonormal basis for $\lgo, \mg_0$ and $\mg_1$. Noticing that by \eqref{bracketrelations} one has that $[\pg,\mg] \perp \lgo$ and that $\ad_\pg X$ only acts nontrivially on $\mg$, the above formula simplifies as \[
\langle \Ricci X, Y \rangle = \unc \sum_{U_r,U_s\in \lgo} \langle [U_r, U_s]_\pg, X\rangle \langle [U_r, U_s]_\pg, Y\rangle - \unm \tr S\left( \ad_\mg X\right) S\left( \ad_\mg Y\right). \] Choose an orthonormal basis $\{Y_i \}_{i=1}^3$ for $\lgo$, with $Y_i = A_i + B_i$, $A_i\in \slg_2(\RR)$, $B_i\in \mg_0$, and such that $\{A_i \}$ is a \emph{Milnor basis} for $\slg_2(\RR)$, with brackets \[
[A_2,A_3] = \alpha A_1, \qquad [A_3,A_1] = \beta A_2, \qquad [A_1, A_2] = \gamma A_3, \qquad\alpha,\beta,\gamma \neq 0. \] Also, choose an orthonormal basis $\{ X_j^0\}_{j=1}^d$ for $\mg_0$ so that $\tr S(\ad_\mg X_i^0) S(\ad_\mg X_j^0) = 0$ if $i\neq j$ (this is indeed possible since the map $(X,Y) \mapsto \tr S(\ad_\mg X)S(\ad_\mg Y)$ is a symmetric bilinear form on $\mg_0$). Then, a straightforward calculation shows that \[
\langle \Ricci X_j^0, Y_3 \rangle = -\langle B_3, X_j^0\rangle \left( \gamma^2 + \tr S(\ad_\mg X_j^0)^2 \right), \] and using the Einstein condition we conclude that $\langle B_3, X_j^0\rangle = 0$. Analogously, we obtain that $\langle B_i, X_j^0\rangle = 0,$ for all $i=1,2,3$, $j=1,\ldots,d$, thus $\langle \slg_2(\RR), \mg_0\rangle = 0$. Therefore, $\slg_2(\RR)\perp \mg$, and this implies that the metric is locally a Riemannian product. But this is a contradiction, since $\widetilde{\Sl_2(\RR)}$ does not admit any left-invariant Einstein metric. \end{proof}
Finally, we study the family $\left(\Sl_2(\RR) \times \Sl_2(\CC)\right)/\Delta_{p,q}\U(1)$. Unfortunately, we are not able to deal with the case where $p = q$. Notice though that this missing case represents just one homogeneous space from the above infinite family.
With respect to the inclusions $\hg_1:= \sog(2) \subseteq \slg_2(\RR), \hg_2 := \ug(1) \subseteq \sug(2) \subseteq \slg_2(\CC)$ we have that \[
\hg := \Delta_{p,q} \ug(1) \subseteq \hg_1 \oplus \hg_2 \subseteq \slg_2(\RR) \oplus \slg_2(\CC) =: \ggo. \] Given an $\Ad(H)$-invariant inner product on some reductive complement $\mg$, we extend it in the usual way to an $\Ad(H)$-invariant inner product $\ip$ on $\ggo$. By looking at the decomposition of the isotropy representation from Table \ref{tabla} in the case when $p\neq q$, \[
\mg = \qg_0^{(1)} \oplus \qg_1^{(2)} \oplus \pg_0^{(1)} \oplus \pg_1^{(2)} \oplus \pg_2^{(2)}, \qquad \qg_1 \simeq \pg_1 \not\simeq \pg_2, \] we see that the ideals $\slg_2(\RR), \slg_2(\CC)$ are orthogonal (notice that $\pg_2^{(2)}$, $\qg_1^{(2)} \oplus \pg_0^{(1)} \oplus \pg_1^{(2)}$ correspond to reductive complements for the homogeneous spaces $\Sl_2(\RR)/\SO(2)$, $\Sl_2(\CC)/\U(1)$, respectively, and $\hg \oplus \qg_0 = \hg_1 \oplus \hg_2$). This easily gives that $\ip$ is $\ad(\hg)$-invariant if and only if it is both $\ad(\hg_1)$- and $\ad(\hg_2)$-invariant. Thus, $\qg_0$ acts by skew-symmetric endomorphisms on $\ggo$, and by Lemma \ref{lem_formulaRicci} we have that \[
\Ricci(Y,Y) = \unc \sum_{i,j} \langle [X_i,X_j]_\mg, Y \rangle^2 \geq 0, \qquad Y\in \qg_0. \] Hence, arguing as for $\SU(2,1)/\Delta_{0,1}\U(1)$ above, when $p\neq q$ the space $\left(\Sl_2(\RR)\times \Sl_2(\CC)\right)/\Delta_{p,q}\U(1)$ admits no invariant metric of negative Ricci curvature, and in particular no invariant Einstein metric with negative scalar curvature.
\begin{remark} It is worth pointing out that when $p\neq q$, all homogeneous metrics in the above homogeneous space can be approximated by \emph{strictly locally homogeneous metrics} (cf.~ \cite{Tric92}), namely, metrics which are locally homogeneous but are not locally isometric to any globally homogeneous manifold. This is simply done by considering irrational slopes approximating the rational slope $p/q$ of the given space. It was proved in \cite{Spiro} (see also \cite{Bhm15}) that strictly locally homogeneous metrics do not have non-positive Ricci curvature.
On the other hand, if $p=q$ there exist homogeneous metrics that cannot be approximated in that way. \end{remark}
\section{Non-unimodular transitive group}\label{sectionnonuni}
In this section we study Einstein homogeneous spaces $G/K$ of negative scalar curvature with $G$ non-unimodular and $G/K$ as in Theorem \ref{structure}. Following the discussion of Section~ \ref{prelimstruct} we can assume that \begin{equation}\label{decom} \ggo=(\ggo_1 + \zg(\ug)) \ltimes \ngo, \end{equation} where $\ug=\ggo_1 + \zg(\ug)$ is a reductive Lie algebra, $\ggo_1$ is semisimple with no compact ideals, $\kg \subset \ggo_1$ and $\zg(\ug)=\RR H ,$ with $H$ the mean curvature vector (see Corollary \ref{cor_rankone}).
Before starting the proof of our main result in the non-unimodular case, we state two lemmas which yield information about semisimple homogeneous spaces in low dimensions. Their proofs follow immediately from Table \ref{tabla}, and the fact that irreducible symmetric spaces of the non-compact type are diffeomorphic to Euclidean spaces.
\begin{lemma}\label{diff5} Let $G_1/K$ be a simply-connected semisimple homogeneous space of the non-compact type\footnote{see Definition \ref{defsshomogspace}.} with $n = \dim G_1/K \leq 5.$ Then, either $G_1/K=\Sl_2(\CC)/\U(1)$ or $G_1/K \simeq \RR^n$. \end{lemma}
\begin{lemma}\label{diff6} Let $G_1/K$ be a $6$-dimensional simply-connected semisimple homogeneous space of the non-compact type such that $G_1$ contains $\widetilde{\Sl_2(\RR)}$ as a simple factor. Then, $G_1/K$ is diffeomorphic to~ $\RR^6.$ \end{lemma}
We now focus on the proof of Theorem \ref{mainnonuni}. Given a homogeneous Einstein space $(G/K,g)$ with negative scalar curvature and $G$ non-unimodular, we consider for it the decomposition given in \eqref{decom}. By virtue of Corollary \ref{reductionG1/K}, our goal will be to prove that $G_1/K$ is diffeomorphic to a Euclidean space. Observe that we may assume $\dim G_1/K \leq \dim G/K -3.$ Indeed, we always have $\dim \zg(\ug) =1$ and $\dim \ngo \geq \dim \zg(\ug)$ because the representation $\theta|_{\zg(\ug)} : \zg(\ug) \to \ngo$ is faithful (its kernel must be in the nilradical). If $\dim \ngo = 1$, we know by Theorem \ref{thm_lemadimn} and Remark~ \ref{rmkn1} that $(G/K, g)$ is a Riemannian product, and thus not de Rham irreducible. More generally, for this reason we can also assume that $\theta|_{\ggo_1} \neq 0.$ We now proceed with the proof, considering different cases according to the dimension of $G/K.$
\subsection{$\dim G/K \leq 8$}\label{dimG/Kleq8} $ $
We have that $\dim G_1/K \leq 5.$ By Lemma \ref{diff5}, either $G_1/K$ is diffeomorphic to $\RR^n$ for some $n \leq 5,$ or $G_1/K=\Sl_2(\CC)/\U(1).$ In the latter case, $\theta|_{\ggo_1}=0,$ since $\dim \ngo=2$ and there exists no nontrivial $2$-dimensional representation of the simple Lie algebra $\slg_2(\CC)$. Thus, this is a product case.
\subsection{$\dim G/K=9$} $ $
\subsubsection{$\dim G_1/K=6$}\label{G1/K=6} As $\dim \ngo=2$ we know that $\theta(\ggo_1)\subseteq \End(\RR^2)$ is semisimple, so $\ggo_1$ must have an ideal isomorphic to $\slg_2(\RR).$ We conclude that $G_1/K$ is diffeomorphic to $\RR^6$ by using Lemma \ref{diff6}.
\subsubsection{$\dim G_1/K \leq 5$} By Lemma \ref{diff5}, we only consider the case where $G_1/K=\Sl_2(\CC)/\U(1).$ We have that $\dim \ngo = 3$, and it is easy to see that $\theta|_{\ggo_1}=0$ since there is no subalgebra of $ \slg_3(\RR)$ isomorphic to $\slg_2(\CC)$. Hence this is also a product case.
\subsection{$\dim G/K=10$}
\subsubsection{$\dim G_1/K=7$} We have that $\dim \ngo=2$ and $\theta(\ggo_1)$ is semisimple, that is, $\ggo_1$ has an ideal isomorphic to $\slg_2(\RR)$. The list of all possible homogeneous spaces $G_1/K$ to consider is very long. However, if $G_1/K$ is a product of lower dimensional homogeneous spaces, then it must be a product of some irreducible symmetric spaces of non-compact type and some of the spaces in Table \ref{tabla}. Since its dimension is $7$, it easily follows that there is at most one factor which is non-symmetric, and Proposition \ref{prodRiem} implies that on $G_1/K$ the metric is a product of corresponding invariant metrics on each of the factors. Moreover, since the kernel of $\theta|_{\ggo_1}$ has codimension $3$, $\theta$ must necessarily vanish on some of these factors. This implies at once that the whole space $G/K$ splits as a Riemannian product.
By the preceding discussion, we may now assume that $G_1/K$ is non-product, i.e.~ it is one of the spaces listed in Table \ref{tabla}. Since $\ggo_1$ has an ideal isomorphic to $\slg_2(\RR)$ there are actually only two possibilities: the simply connected covers of $\Sl_2(\RR)^3/\Delta T^2_{a,b,c}$ and of $\SU(2,1) \times \Sl_2(\RR)/\Delta_{p,q}\U(1)(\SU(2)\times \{e\})$. But both of them are diffeomorphic to $\RR^7$, hence we are done.
\subsubsection{$\dim G_1/K=6$}
Here $\dim \ngo=3$ and $\theta|_{\ggo_1} \neq 0.$ Since $\theta|_{\ggo_1}$ maps $\ggo_1$ into $\slg_3(\RR)$, for it to be non-trivial it is necessary that $\ggo_1$ contains at least one simple ideal of dimension at most $8$. By Lemma \ref{diff6}, we may further assume that $G_1$ contains no $\widetilde{\Sl_2(\RR)}$ factor. Thus it is clear from Table \ref{tabla} that if $G_1/K$ were a product of lower-dimensional homogeneous spaces then each factor would be a symmetric space, and the result would follow.
On the other hand, if $G_1/K$ is non-product, it also follows from Table \ref{tabla} that the only possibilities are \begin{align*} \SU(2,1)/T_{max}, \quad \Sl_2(\CC). \end{align*}
But then we must have $\theta|_{\ggo_1} = 0$, because there exist no nontrivial $3$-dimensional representations of the simple Lie algebras $\sug(2,1)$ or $\slg_2(\CC)$.
\subsubsection{$\dim G_1/K \leq 5$}\label{section514sl2C} By using Lemma \ref{diff5} we have that either $G_1/K \simeq \RR^n,$ for some $n \leq 5,$ or $G_1/K=\Sl_2(\CC)/\U(1).$ We need to analyze the latter case.
We are reduced to showing that equation \eqref{eqRicU/K} has no solutions for $\Sl_2(\CC)$-invariant metrics on $\Sl_2(\CC)/\U(1)$, for any $\theta: \ggo_1 \to \End(\RR^4)$ that satisfies \eqref{eqmmtheta}. We now use the notation of Section \ref{sectionsl2C}. Let us assume that $\theta(\ggo_1) \neq 0$, since otherwise we are in a product case, and consider an arbitrary $\Ad(\U(1))$-invariant inner product on $\slg_2(\CC)$ written in the form $\ip_h$ given in Lemma \ref{lemmasl2C}, $h\in \Gl_5(\RR)$. An orthonormal basis for the reductive complement $\mg$ is given by $\mathcal{B}_h = \{h^{-1} Y_0, \ldots, h^{-1} X_2 \}$. Up to equivalence of representations, there are two $4$-dimensional real faithful representations of $\slg_2(\CC)$: the tautological representation, and its conjugate, and they are both irreducible. Let us consider the case where $\theta$ is equivalent to the tautological representation (the other case is completely analogous). There exists $h_2 \in \Gl(\ngo)$ such that with respect to the inner product $h_2 \cdot \ip_\ngo$ the matrices of $\theta(Z)$, \ldots, $\theta(X_2)$ have the forms \eqref{matricessl2C} (after identifying a complex number $a+b i$ with a $2\times 2$ real matrix $\minimatrix{a}{b}{-b}{a}$). This is equivalent to saying that the matrices of $(h_2^{-1} \cdot \theta)(Z), \ldots, (h_2^{-1} \cdot \theta)(X_2)$ have such a form with respect to the inner product $\ip_\ngo$. But now an easy computation shows that \[
\sum_{Y \in \, \mathcal{B}_h} \left[\left(h_2^{-1} \cdot \theta\right)(Y), \left(h_2^{-1} \cdot \theta\right)(Y)^t \right] = 0, \] that is, $h_2^{-1}\cdot\, \theta$ is also a zero of the moment map for the natural $\Gl(\ngo)$-action on $\operatorname{Hom}(\slg_2(\CC),\End(\ngo))$ (see Remark \ref{remarks}, \eqref{remarktheta}). From the rigidity imposed by Geometric Invariant Theory for such zeros \cite[Theorem 4.3]{RS90}, we can conclude that in fact $h_2\in \Or\left(\ngo, \ip_\ngo\right)$, and thus the matrices of $\theta(Z), \ldots, \theta(X_2)$ have the form \eqref{matricessl2C} with respect to $\ip_\ngo$ (see also the proof of \cite[Proposition A.1]{semialglow} for a more detailed application of this argument). We now plug this information into equation \eqref{eqRicU/K}, and since $\theta(X_2)$ is skew-symmetric, by looking at the Ricci curvature and using Lemma \ref{lemmasl2C} we obtain that $d=0$. In other words, $h$ is diagonal, and in particular the metric associated to $\ip_h$ leaves orthogonal the Cartan decomposition $\slg_2(\CC) = \kg \oplus \pg$, $\kg = \hg \oplus \qg_1$, $\pg = \pg_0 \oplus \pg_1$. Notice that the operator $C_\theta \in \End(\mg)$ given by \[
\left\langle C_\theta X, Y\right\rangle = \tr S \left( \theta(X)\right) S\left(\theta(Y)\right) \] is a positive multiple of the identity on $\pg$. Hence, by following the same arguments used in the proof of \cite[Theorem 1]{Nkn2} we can conclude that the equation \begin{equation}\label{eqRictheta}
\Ricci_{\ip} = c \, I + C_\theta \end{equation} cannot be satisfied for $\ip_h$. Therefore, there are no Einstein metrics in this case.
\begin{remark} The fact that the proof of \cite[Theorem 1]{Nkn2} could be adapted for the more general equation \eqref{eqRicU/K} as long as $G_1$ is simple was kindly communicated to us by Jorge Lauret \cite{Lauretpersonalcom}. \end{remark}
\section{Strong Alekseevskii's conjecture}\label{strong}
This section is devoted to studying the strong Alekseevskii conjecture and showing that it holds up to dimension $8$, with the possible exceptions of invariant metrics on non-compact semisimple Lie groups or on the space $\left(\Sl_2(\RR)\times \Sl_2(\CC)\right)/\Delta\U(1)$.
As in the previous section, we consider Einstein homogeneous spaces $G/K$ with $G$ chosen as in Theorem~ \ref{structure}, which are not Riemannian products. We also assume the simply-connected hypothesis, which is non-restrictive since it suffices to prove the strong Alekseevskii conjecture in the simply-connected case (see Remark \ref{remarks} (e) and \cite{AC99, Jab15}). Regarding the semisimple case, according to Theorem \ref{thmsemisimple} all semisimple homogeneous spaces in Table \ref{tabla} are either Lie groups, or the space $\left(\Sl_2(\RR)\times \Sl_2(\CC)\right)/\Delta\U(1)$, or they do not admit an Einstein metric. The remaining semisimple homogeneous spaces are symmetric spaces, and it is well-known that they are isometric to solvmanifolds. Therefore, by \cite{Dtt88}, in the following we will only study the cases where the transitive group is non-unimodular (see Remark \ref{remarks}, \eqref{rmkDotti}). We proceed case by case, according to the dimension of $G/K.$ Our goal will be to show that $G_1/K$ is isometric to a solvmanifold.
\subsection{$\dim G/K=6$}\label{strong6} $ $
These spaces were analyzed by Jablonski and Petersen in \cite[\S 4]{JblPtr}.
\subsection{$\dim G/K=7$} $ $
First we state the following lemma, which follows easily from Table \ref{tabla} and the well-known fact that irreducible symmetric spaces of the non-compact type are isometric to solvmanifolds.
\begin{lemma}\label{solv5} Let $(G_1/K, g)$ be a simply-connected semisimple homogeneous space of the non-compact type with a $G_1$-invariant metric $g$, and $\dim G_1/K \leq 5.$ Then, either $(G_1/K,g)$ is isometric to a solvmanifold, or $G_1/K$ is one of the following spaces \begin{gather*}
\widetilde{\Sl_2(\RR)}, \quad \Sl_2(\CC)/\U(1),\quad \left(\Sl_2(\RR)\times \Sl_2(\RR)\right)/\Delta_{p,q} \SO(2), \\
\SU(2,1)/\SU(2), \quad \Sl_2(\RR) \times \Sl_2(\RR)/ \SO(2). \end{gather*} \end{lemma}
By using Theorem \ref{thm_lemadimn}, Remark \ref{rmkn1} and Corollary \ref{cor_rankone}, we can assume that $\dim G_1/K \leq 4,$ $\dim\zg(\ug)=1$ and $\dim \ngo \geq 2.$ We divide into cases according to the dimension of $G_1/K.$
\subsubsection{$\dim G_1/K=4$} By Lemma \ref{solv5} we know that $G_1/K$ is a solvmanifold.
\subsubsection{$\dim G_1/K=3$}\label{G/K7G_1/K3} The only case in which $G_1/K$ is not a solvmanifold is when $G_1/K=\widetilde{\Sl_2(\RR)}.$
The existence of an example in this case would imply that there is a $6$-dimensional unimodular expanding algebraic soliton, by using that non-unimodular Einstein spaces are one-dimensional extensions of unimodular algebraic solitons (see \cite[\S 6]{alek}). Since for that soliton one would have $\ug=\slg_2(\RR)$, we arrive at a contradiction by using \cite[Appendix]{semialglow}.
\subsection{$\dim G/K=8$} $ $
We assume that $\dim G_1/K \leq 5$ and $\dim \ngo \geq 2.$
\subsubsection{$\dim G_1/K=5$} We have that $\dim \ngo=2$. Then, by using Lemma \ref{solv5}, we consider the following possibilities for $G_1/K$: \[ \Sl_2(\CC)/\U(1), \quad \SU(2,1)/\SU(2), \quad \left(\Sl_2(\RR)\times \Sl_2(\RR)\right)/\Delta_{p,q}\SO(2), \quad \Sl_2(\RR) \times \Sl_2(\RR)/ \SO(2). \]
In the first two cases, we must have $\theta|_{\ggo_1} = 0$ since there exist no nontrivial $2$-dimensional representations of the simple Lie algebras $\slg_2(\CC)$ and $\sug(2,1).$ By Theorem \ref{thm_lemadimn} and Remark \ref{rmkn1}, these are product cases. In the last case, $G_1/K = \Sl_2(\RR) \times \Sl_2(\RR)/\SO(2)$, we are also in a product case. Indeed, the metric restricted to $G_1/K$ is a product metric by Proposition \ref{prodRiem}, and in addition, since $\dim \ngo=2,$ $\theta$ must necessarily vanish on one of the simple factors. This implies at once that the whole space $G/K$ splits as a Riemannian product. We now deal with the remaining case $G_1/K =\left(\Sl_2(\RR)\times \Sl_2(\RR)\right)/\Delta_{p,q}\SO(2)$.
This case is similar in nature to the one in Section \ref{section514sl2C}: we are reduced to solving equation~ \eqref{eqRicU/K} for invariant metrics on $G_1/K$, for every possible representation $\theta: \ggo_1 = \slg_2(\RR)\oplus \slg_2(\RR) \to \End(\RR^2)$. Let $H = \minimatrix{0}{1}{-1}{0}$, $X = \minimatrix{1}{0}{0}{-1}$, $Y = \minimatrix{0}{1}{1}{0}$ be a basis for $\slg_2(\RR)$ (so that $H$ spans $\sog(2)$), and consider the ordered basis $\mathcal{B}$ for $\slg_2(\RR) \oplus \slg_2(\RR)$ given by \begin{align*}
Z = \left(p \,H, q\, H\right), \quad X_0 &= \left(q \,H, -p\, H\right), \quad Y_1 = \left( X, 0\right), \\
Y_2 = \left( Y,0\right), \quad X_1 &= \left( 0, X\right), \quad X_2 = \left( 0, Y\right). \end{align*} The isotropy subalgebra is $\hg_{p,q} = \RR Z$, $\mg = \operatorname{span}_\RR\{X_0, Y_1, Y_2, X_1, X_2 \}$ is a reductive complement, and the decomposition into irreducible submodules is given by $\mg = \qg_0 \oplus \pg_1 \oplus \pg_2$, where $\qg_0 = \RR X_0$, $\pg_1 = \RR Y_1 \oplus \RR Y_2$, $\pg_2 = \RR X_1 \oplus \RR X_2$, and $\pg_1 \simeq \pg_2$ if and only if $p=q$.
First notice that $\theta$ must have a kernel, which we may assume without loss of generality to be the first $\slg_2(\RR)$ factor. This implies that $\theta(H,0) = 0$, and since $\theta(Z)$ is skew-symmetric, it is clear that $\theta(X_0)$ must also be skew-symmetric. Thus, using equation \eqref{eqRicU/K} we obtain that \[
\Ricci_{G_1/K}(X_0,X_0) < 0. \] This is already enough to rule out the cases $p\neq q$, since in those cases we have $\pg_1 \not\simeq \pg_2$, which forces $X_0$ to act skew-symmetrically on $\ggo_1$, and by Lemma \ref{lem_formulaRicci} we get a contradiction.
Let us now consider the remaining case $p = q = 1$, which is considerably more difficult. The following is the analogue of Lemma \ref{lemmasl2C} for this situation, and can be proved in the very same way.
\begin{lemma} Up to isometry, $\Sl_2(\RR)\times \Sl_2(\RR)$-invariant metrics on $\left(\Sl_2(\RR)\times \Sl_2(\RR)\right) / \Delta_{1,1} \SO(2)$ can be parameterized by $\Ad(\SO(2))$-invariant inner products on $\mg$ of the form \[
\langle \cdot, \cdot \rangle_h = \langle h\, \cdot\, , h\, \cdot \rangle_0, \] where $h\in \Gl_5(\RR)$ is given by \[
h = \left[\begin{array}{ccccc} e &0 &0 &0 & 0\\ 0& a &0 &0 &0 \\0 & 0& a & 0&0 \\0 &d & 0& b &0 \\ 0&0 &d &0 & b \end{array}\right], \qquad a,b,d,e \in \RR, \quad a,b,e\neq 0. \] \end{lemma}
Reasoning as in Section \ref{section514sl2C}, we see that condition \eqref{eqmmtheta} implies that $\theta$ restricted to the second $\slg_2(\RR)$ factor is nothing but the tautological representation of this Lie algebra. Let us consider the operator $\Ricci^\theta \in \End(\mg)$ given by \[
\left\langle \Ricci^\theta \, X, Y\right\rangle_h = \Ricci_{G_1/K}(X,Y) - \tr S(\theta(X)) S(\theta(Y)). \] Equation \eqref{eqRicU/K} for a metric $\ip_h$ can be rephrased as \begin{equation}\label{eqRiccithetaop}
\Ricci^\theta = c I, \qquad c<0. \end{equation}
Since we now know $\theta$ and $\ip_h$ explicitly, we can actually compute $\Ricci^\theta$ in terms of $a,b,d,e$. Let us call $r_{i,j}^\theta$, $1\leq i,j \leq 5$, the entries of the matrix of $\Ricci^\theta$ with respect to the $\ip_h$-orthonormal ordered basis $\mathcal{B}_h = \left\{ h^{-1} Y \mid Y\in \mathcal{B}\cap\mg \right\}$. Then, assuming that $\det h = 1$, we have that \begin{align*}
r_{1,1}^\theta &= \unm \left( a^4 e^4 + \left(b^2 - d^2\right)^2 \right) + a^2 d^2 \left( e^2- 4b^2\right) \left( e^2 + 4 b^2\right), \\
r_{1,1}^\theta + 2\, r_{4,4}^\theta + 2\, a \cdot d^{-1} \, r_{2,4}^\theta & = \unm \left(a^2 - b^2 + d^2 \right)^2 e^4 + 4 a^4 b^2 \left( 4 b^2 - e^2 \right). \end{align*} Despite the ugliness of these formulas, we see that all the terms and factors on the right hand side are positive except for $e^2 - 4 b^2$, which appears with a different sign in both of them. For a solution of \eqref{eqRiccithetaop}, both expressions should be negative (they would equal $c$ and $3\, c$, respectively). Indeed, the first expression can only be negative if $e^2 < 4\, b^2$, whereas the second can only be negative if $e^2 > 4\, b^2$. It is now clear that such a solution does not exist.
\subsubsection{$\dim G_1/K=4$} By Lemma \ref{solv5} we know that $G_1/K$ is isometric to a solvmanifold.
\subsubsection{$\dim G_1/K=3$} Here, the only case to consider is $G_1/K=\widetilde{\Sl_2(\RR)}.$
Similarly to the case in Section~ \ref{G/K7G_1/K3}, we have a contradiction.
\end{document} | arXiv |
\begin{document}
\allowdisplaybreaks
\newcommand{\arXivNumber}{1911.00118}
\renewcommand{\thefootnote}{}
\renewcommand{\PaperNumber}{016}
\FirstPageHeading
\ShortArticleName{Intersections of Hypersurfaces and Ring of Conditions of a Spherical Homogeneous Space}
\ArticleName{Intersections of Hypersurfaces and Ring\\ of Conditions of a Spherical Homogeneous Space\footnote{This paper is a~contribution to the Special Issue on Algebra, Topology, and Dynamics in Interaction in honor of Dmitry Fuchs. The full collection is available at \href{https://www.emis.de/journals/SIGMA/Fuchs.html}{https://www.emis.de/journals/SIGMA/Fuchs.html}}}
\Author{Kiumars KAVEH~$^\dag$ and Askold G.~KHOVANSKII~$^{\ddag\S}$}
\AuthorNameForHeading{K.~Kaveh and A.G.~Khovanskii}
\Address{$^\dag$~Department of Mathematics, University of Pittsburgh, Pittsburgh, PA, USA} \EmailD{\href{mailto:[email protected]}{[email protected]}}
\Address{$^\ddag$~Department of Mathematics, University of Toronto, Toronto, Canada} \EmailD{\href{mailto:[email protected]}{[email protected]}}
\Address{$^\S$~Moscow Independent University, Moscow, Russia}
\ArticleDates{Received November 04, 2019, in final form March 14, 2020; Published online March 20, 2020}
\Abstract{We prove a version of the BKK theorem for the ring of conditions of a spherical homogeneous space~$G/H$. We also introduce the notion of ring of complete intersections, firstly for a spherical homogeneous space and secondly for an arbitrary variety. Similarly to the ring of conditions of the torus, the ring of complete intersections of~$G/H$ admits a~description in terms of volumes of polytopes.}
\Keywords{BKK theorem; spherical variety; Newton--Okounkov polytope; ring of conditions}
\Classification{14M27; 14M25; 14M10}
\renewcommand{\thefootnote}{\arabic{footnote}} \setcounter{footnote}{0}
\section{Introduction} Let $\Gamma_1,\ldots,\Gamma_n$ be a set of $n$ hypersurfaces in the $n$-dimensional complex torus $(\mathbb C^*)^n$ defined by equations $P_1=0,\ldots, P_n = 0$ where $P_1,\ldots,P_n$ are generic Laurent polynomials with given Newton polyhedra $\Delta_1,\ldots,\Delta_n$. The Bernstein--Koushnirenko--Khovanskii theorem (or BKK theorem, see \cite{Bernstein, Khovanskii-genus, Koushnirenko}) claims that the intersection number of the hypersurfaces $\Gamma_1,\ldots,\Gamma_n$ is equal to the mixed volume of $\Delta_1, \ldots, \Delta_n$ multiplied by $n!$. In particular when $\Delta_1=\cdots=\Delta_n=\Delta$ this theo\-rem can be regarded as giving a formula for the degree of an ample divisor class in a~projective toric variety.
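For instance, if $P_1$, $P_2$ are generic polynomials of degrees $d_1$, $d_2$ in two variables, then their Newton polygons are the dilations $d_1\Delta$, $d_2\Delta$ of the standard simplex $\Delta$, and $2!\, V(d_1\Delta, d_2\Delta) = d_1 d_2 \cdot 2!\,\operatorname{Vol}(\Delta) = d_1 d_2$, recovering the B\'ezout theorem.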
Spherical varieties are a generalization of toric varieties to actions of reductive groups. Let~$G$ be a~connected reductive algebraic group. A normal variety $X$ with an action of $G$ is called {\it spherical} if a Borel subgroup of $G$ has an open orbit. When $G$ is a torus we get the definition of a toric variety. On the other hand, spherical varieties are generalizations of flag varieties $G/P$ as well. Similarly to toric varieties, the geometry of spherical varieties and their orbit structure can be read off from combinatorial and convex geometric data of fans and convex polytopes.
The BKK theorem has been generalized to spherical varieties by B. Kazarnovskii, M. Brion and the authors of the present paper. In particular, this generalization provides a formula for the degree of an ample $G$-linearized line bundle $L$ on a projective spherical variety $X$ (see \cite{Brion, Kazarnovskii, KKh-reductive}). The formula is given as the integral of a certain explicit function over the moment polytope (also called the Kirwan polytope) of $(X, L)$. This formula can be equivalently expressed as volume of a larger polytope associated to $(X, L)$. It is usually referred to as a Newton--Okounkov polytope of $(X, L)$ (see Section \ref{sec-NO-polytope-sph-var}).
In \cite{KKh-Annals, LM}, the BKK theorem has been generalized to intersection numbers of generic members of linear systems on any irreducible algebraic variety. The Newton--Okounkov bodies play a~key role in this generalization. All generalizations mentioned above deal with the intersection numbers of hypersurfaces in an algebraic variety.
Following the original ideas of Schubert, in the early 1980s De Concini and Procesi deve\-lo\-ped an intersection theory for algebraic cycles (which are not necessarily hypersurfaces) in a~symmetric homogeneous space $X = G/H$ (see~\cite{DP}). Their intersection theory, named the {\it ring of conditions of~$X$}, can be automatically generalized to a spherical homogeneous space~$X$.
For an algebraic torus $(\C^*)^n$ and more generally a horospherical homogeneous space $X$, nice descriptions of the ring of conditions are known. Moreover, combinatorial descriptions of cohomology rings as well as rings of conditions of interesting subclasses of spherical varieties have been obtained via equivariant cohomology methods by several authors (see \cite{Bifet-DeConcini-Procesi,Littelmann-Procesi, Strickland}). In our opinion, it is unlikely that for an arbitrary spherical homogeneous space, one can find such nice and transparent descriptions for the ring of conditions in terms of combinatorial data.
The contributions of the present paper are the following: \begin{itemize}\itemsep=0pt \item[(1)] We give a version of the BKK theorem in the ring of conditions of a spherical homogeneous space $X$, namely a formula for the intersection numbers of hypersurfaces in the ring of conditions in terms of volumes of polytopes (Theorem~\ref{th-main}). This modified version is as transparent as the original BKK theorem. \item[(2)] Along the way, we introduce the notion of ring of complete intersections of a spherical homogeneous space. Unlike the ring of conditions, this ring admits a~nice description in terms of volumes of polytopes (see Theorem~\ref{th-ring-complete-intersec-vol-poly}, also cf.~\cite{Kaveh-note}). In the case when $X$ is a~torus~$(\C^*)^n$, the ring of complete intersections is isomorphic to the ring of conditions. \item[(3)] We also introduce the ring of complete intersections for any irreducible algebraic variety (not necessarily equipped with a group action) and give a description of this ring in terms of volumes of convex bodies (Section~\ref{sec-ring-complete-intersec-general}). \end{itemize}
We would like to point out that there is a big difference between the construction/definition of the ring of complete intersections for a spherical homogeneous space (as in~(2) above) and the construction/definition of the ring of complete intersections for an arbitrary variety (as in~(3) above). The first ring is defined similarly to the ring of conditions except that in the definition of a cycle one should be more careful and consider (strongly) transversal hypersurfaces. The second ring is much larger and has a very different nature. It is defined by a general algebra construction that associates a (Poincar\'e duality) algebra to a vector space equipped with an $n$-linear form (Section~\ref{sec-alg-construction}). Nevertheless, the ring of complete intersections of a spherical homogeneous space can be described using the same general algebra construction.
\section{Preliminaries on spherical varieties} In the rest of the paper we will use the following notation about reductive groups: \begin{itemize}\itemsep=0pt \item[--] $G$ denotes a connected complex reductive algebraic group. \item[--] $B$ a Borel subgroup of $G$ and $U$, $T$ the maximal unipotent subgroup and a maximal torus contained in $B$ respectively. \item[--] $G/H$ denotes a spherical homogeneous space, we let $\dim(G/H) = n$. \end{itemize}
We recall some basic background material about spherical varieties. A normal $G$-variety $X$ is called \emph{spherical} if a Borel subgroup (and hence any Borel subgroup) has a dense orbit. If~$X$ is spherical, it has a finite number of $G$-orbits as well as a finite number of $B$-orbits. Spherical varieties are a generalization of toric varieties for actions of reductive groups. Analogous to toric varieties, the geometry of spherical varieties can be read off from associated convex polytopes and convex cones. For an overview of the theory of spherical varieties, we refer the reader to~\cite{Perrin}.
It is a well-known fact that if ${\mathcal L}$ is a $G$-linearized line bundle on a spherical variety, then the space of sections $H^0(X, {\mathcal L})$ is a multiplicity free $G$-module. For a quasi-projective $G$-variety $X$, this is equivalent to~$X$ being spherical.
Some important examples of spherical varieties and spherical homogeneous spaces are the following: \begin{itemize}\itemsep=0pt \item[(1)] When $G$ is a torus, the spherical $G$-varieties are exactly toric varieties. \item[(2)] By the Bruhat decomposition the flag variety $G/B$ and the partial flag varieties $G/P$ are spherical $G$-varieties.
\item[(3)] Let $G \times G$ act on $G$ from left and right. Then the stabilizer of the identity is $G_{{\rm diag}} = \{(g, g) \,|\, g \in G\}$. Thus $G$ can be identified with the homogeneous space $(G \times G) / G_{{\rm diag}}$. Also by the Bruhat decomposition, this is a spherical $(G \times G)$-homogeneous space. \item[(4)] Let $\mathcal{Q}$ be the set of all smooth quadrics in $\p^n$. The group $G = {\rm PGL}(n+1,\C)$ acts transitively on $\mathcal{Q}$. The stabilizer of the quadric $x_0^2 + \cdots + x_n^2 = 0$ (in the homogeneous coordinates) is $H = {\rm PO}(n+1, \C)$ and hence $\mathcal{Q}$ can be identified with the homogeneous space ${\rm PGL}(n+1, \C) / {\rm PO}(n+1, \C)$. The subgroup ${\rm PO}(n+1,\C)$ is the fixed point set of the involution $g \mapsto (g^t)^{-1}$ of $G$ and hence $\mathcal{Q}$ is a symmetric homogeneous space. In particular, $\mathcal{Q}$ is spherical. The homogeneous space $\mathcal{Q}$ plays an important role in classical enumerative geometry (see~\cite{DP}). \end{itemize}
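For instance, the classical problem of Chasles, counting the $3264$ smooth conics in $\p^2$ tangent to five general conics, is naturally formulated in the ring of conditions (see Section~\ref{sec-ring-of-conditions} below) of the space $\mathcal{Q}$ from example~(4) with $n=2$ (see also~\cite{DP}).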
Throughout the rest of the paper we will fix a spherical homogeneous space $G/H$.
\section{Good compactification theorem} \label{sec-good-compactification} The ring of conditions of $X=G/H$ is a version of the Chow ring for a (usually not complete) spherical homogeneous space~$X$. The existence of a good compactification plays a crucial role in this intersection theory. One can define the ring of conditions more geometrically considering algebraic cycles as stratified analytic varieties and using cohomology rings instead of Chow rings. In this section we recall preliminaries on transversal intersections of stratified varieties and will state the theorem on existence of a good compactification.
Let $Y$ be an algebraic variety. A {\it stratification} of $Y$ is a decomposition of $Y$ into a disjoint union of smooth algebraic subvarieties (possibly of different dimensions). Each algebraic variety~$Y$ admits the following canonical stratification: Let $Y=Y_0\supset Y_1\supset\dots \supset \varnothing$ be the decreasing set of subvarieties where $Y_{i+1}$ is the set of singular points in $Y_i$. The {\it canonical stratification} of~$Y$ is the partition $Y=\bigcup_i Y_i^0$ where $Y_i^0=Y_i\setminus Y_{i+1}.$
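For instance, if $Y\subset \C^2$ is the nodal cubic $y^2 = x^2(x+1)$, then $Y_1$ consists of the single singular point (the origin), and the canonical stratification is $Y = \big(Y\setminus\{(0,0)\}\big)\cup\{(0,0)\}$.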
Two subvarieties $Y,Z$ of an ambient smooth variety $X$ are {\it transversal} if there are stratifications $Y=\bigcup_i Y_i^0$, $Z=\bigcup Z_j^0$ such that any pair $Y_i^0$, $Z_j^0$ of smooth subvarieties are transversal in~$X$. Similarly one can define transversality of several subvarieties.
Let $X = G/H$ be a homogeneous space and $Y$, $Z$ smooth subvarieties. The Kleiman transversality theorem (which is a version of the famous Thom transversality theorem) states that: \emph{for almost all $g\in G$ the varieties $Y$ and $g \cdot Z$ are transversal in $X$, i.e., the subset $G^0\subset G$ such that the subvarieties $Y$ and $gZ$ are transversal in $X$ contains a nonempty Zariski open in $G$}.
The next important result is due to De Concini and Procesi~\cite{DP}. They considered the case when $G/H$ is a symmetric homogeneous space but their proof, more or less without change, works for a spherical homogeneous space as well. \begin{Th}[existence of good compactification] \label{th-exist-good-comp} Let $X=G/H$ be a spherical homogeneous space. Let $Y \subset X$ be a subvariety. Then there exists a complete $G$-variety $M$ that contains $X$ as its open $G$-orbit and for each $G$-orbit $O\subset M$ intersecting $\overline{Y} \subset M$ we have $\codim(\overline{Y}\cap O) = \codim(O) + \codim(Y)$. \end{Th}
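For instance, if $X = (\C^*)^n$ and $Y$ is a hypersurface defined by a Laurent polynomial which is generic among those with a given full-dimensional Newton polytope $\Delta$, then any smooth projective toric variety whose fan refines the normal fan of $\Delta$ provides a good compactification in the sense of Theorem~\ref{th-exist-good-comp}.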
Let us say that two subvarieties $Y$, $Z$ of $X$ are {\it strongly transversal} in $X$ if there is a complete $G$-variety $M\supset X$ such that: (1) $M$ is a good compactification for $Y$ and $Z$ and (2) for each orbit $O\subset M$ the intersections $\overline Y\cap O$ and $\overline Z\cap O$ of their closures $\overline Y$ and $\overline Z$ with $O$ are transversal in $O$. Similarly, one defines strong transversality of several subvarieties.
Using Kleiman's theorem and the good compactification theorem one can show for any subvarieties $Y,Z\subset X$ there is a Zariski open subset $G_0\subset G$ such that, for any $g \in G_0$, the varieties~$Y$ and $g \cdot Z$ are strongly transversal in~$X$.
\section[Ring of conditions of $G/H$]{Ring of conditions of $\boldsymbol{G/H}$} \label{sec-ring-of-conditions}
In this section we consider a variant of intersection theory for a spherical homogeneous space $G/H$ (which is often a non-complete variety) called the {\it ring of conditions} $\mathcal{R}(G/H)$. Similar to the Chow ring, the elements of $\mathcal{R}(G/H)$ are formal linear combinations of subvarieties in $G/H$ but considered up to a different and stronger equivalence. The definition of ring of conditions goes back to De Concini and Procesi in their fundamental paper \cite{DP}. They introduced it as a~natural ring in which one can study many classical problems from enumerative geometry (this is related to Hilbert's Fifteenth problem). They also showed that~$\mathcal{R}(G/H)$ can be realized as a~direct limit of Chow rings of all smooth equivariant compactifications of~$G/H$.
Consider the set $\mathcal{C}$ of {\it algebraic cycles} in~$G/H$. That is, every element of $\mathcal{C}$ is a formal linear combination $V = \sum_i a_i V_i$ with $a_i \in \Z$ and $V_i$ irreducible subvarieties. Clearly, with the formal addition operation of cycles, $\mathcal{C}$ is an abelian group. If all the subvarieties $V_i$ in $V$ have the same dimension $k$ we say that~$V$ is a~$k$-cycle. For $0 \leq k \leq n$, the subgroup of $k$-cycles is denoted by~$\mathcal{C}_k$. For a cycle $V = \sum_i a_i V_i$ and $g \in G$ we define $g \cdot V$ to be $\sum_i a_i(g \cdot V_i)$. A $0$-cycle is just a~formal linear combination of points. If $P = \sum_i a_i P_i$ is a $0$-cycle where the $P_i$ are points, we let $|P| = \sum_i a_i$.
Let $Y$ and $Z$ be strongly transversal irreducible subvarieties, and let $Y \cap Z$ be a union of irreducible components $T$. We then define the {\it intersection product} $Y \cdot Z$ to be the cycle \[ Y \cdot Z= \sum_T T. \] By linearity the intersection product can be extended to algebraic cycles $Y = \sum_i a_i Y_i$ and $Z = \sum_j b_j Z_j$ such that for all~$i$,~$j$ varieties $Y_i$ and~$Z_j$ are strongly transversal to each other.
We now define an equivalence relation $\sim$ on the set of algebraic cycles as follows. Let $V=\sum_i a_i V_i$, $V'=\sum_j b_j V'_j \in \mathcal{C}_k$ be algebraic cycles of dimension $k$. Let $Z$ be an irreducible sub\-va\-riety of complementary dimension $n - k$. One knows that for generic $g \in G$, the subvariety~$g \cdot Z$ intersects all the $V_i$ and $V'_j$ in a strongly transversal fashion. Thus, for generic $g \in G$, the intersection products $V \cdot (g \cdot Z)$ and $V' \cdot (g \cdot Z)$ are defined and are $0$-cycles. We say that $V \sim V'$ if for any $Z$ as above and for generic $g \in G$ we have: \begin{gather}
|V \cap (g \cdot Z)| = |V' \cap (g \cdot Z)|.
\end{gather} That is, $V \sim V'$ if they intersect general translates of any subvariety of complementary dimension at the same number of points.
One verifies using good compactifications that if $Y_1 \sim Y_2$ and $Z_1 \sim Z_2$ then for generic $g\in G$ the intersection products $Y_1 \cdot gZ_1 $ and $Y_2 \cdot gZ_2$ are equivalent to each other. Thus the intersection product of strongly transversal subvarieties induces an intersection operation on the quotient~$\mathcal{C} /{\sim}$.
\begin{Def} \label{def-ring-of-conditions} The {\it ring of conditions of $G/H$} is $\mathcal{C} /{\sim}$ with the ring structure coming from addition and intersection product of cycles. \end{Def}
The following example, due to De Concini and Procesi, shows the assumption that $G/H$ is spherical is important and the ring of conditions is not well-defined for all homogeneous spaces. \begin{Ex} \label{ex-ring-of-cond-not-well-def}Take the $3$-dimensional affine space $\C^3$ regarded as an additive group. We would like to show that the ring of conditions of $\C^3$ does not exist. By contradiction suppose that it exists. Consider the surface (quadric) $S$ in $\C^3$ defined by the equation $y = zx$. The intersection of a horizontal plane $z=a$ and $S$ is the line $y=ax$. Thus for almost all $a\in \C$ the lines $y=ax$, $z=a$ must be equivalent in the ring of conditions of $\C^3$. On the other hand we claim that two skew lines $\ell_1$ and $\ell_2$ cannot be equivalent. This is because one can find a $2$-dimensional plane~$P$ such that any translate of~$P$ intersects~$\ell_1$ but no translate of~$P$ intersects~$\ell_2$ unless it contains~$\ell_2$. The contradiction shows that the ring of conditions of~$\C^3$ is not well-defined. \end{Ex}
\section[Ring of complete intersections of $G/H$]{Ring of complete intersections of $\boldsymbol{G/H}$} \label{sec-ring-of-complete-intersec}
In this section we propose an analogue of the ring of conditions constructed using only (non-degenerate) complete intersections in a spherical homogeneous space~$G/H$.
\begin{Def} \label{def-non-degenerate-complete-intersection} A {\it non-degenerate complete intersection} in $G/H$ is an intersection of several strongly transversal hypersurfaces $\Gamma_1,\ldots, \Gamma_k\subset G/H$. \end{Def}
Consider the collection $\mathcal{C}'_k$ of all $k$-dimensional {\it complete intersection cycles}, that is, formal linear combinations $\sum_i a_i V_i$ where each $V_i$ is a $k$-dimensional non-degenerate complete intersection in $G/H$. We let $\mathcal{C}' = \bigoplus_{k=0}^n \mathcal{C}'_k$.
As in the construction of ring of conditions (Section \ref{sec-ring-of-conditions}) we define an equivalence relation $\sim$ on $\mathcal{C}'$ as follows: let $V$, $V' \in \mathcal{C}'_k$ be complete intersection $k$-cycles. We say that $V \sim V'$ if for any complete intersection $(n-k)$-cycle $Z$ and generic $g \in G$ we have
$|V \cap (g \cdot Z)| = |V' \cap (g \cdot Z)|$ (recall $n = \dim(G/H)$). Note that in defining $\sim$ we only use $Z$ that are complete intersections.
\begin{Def} \label{def-ring-of-complete-intersections} The {\it ring of complete intersections $\mathcal{R}'(G/H)$} is $\mathcal{C}' /{\sim}$ with the ring structure coming from addition and intersection product of complete intersection cycles. \end{Def}
In the same fashion as for the ring of conditions, it can be verified that the ring $\mathcal{R}'(G/H)$ is well-defined. The inclusion $\mathcal{C}' \subset \mathcal{C}$ induces a ring homomorphism $\mathcal{R}'(G/H) \to \mathcal{R}(G/H)$. In general, this homomorphism is neither injective nor surjective.
In Section \ref{sec-descp-ring-comp-int} we will give a description of $\mathcal{R}'(G/H)$ in terms of volumes of polytopes.
\section{Newton--Okounkov polytopes for spherical varieties} \label{sec-NO-polytope-sph-var} In this section we recall the notion of a Newton--Okounkov polytope associated to a $G$-linear system on a spherical homogeneous space $G/H$. The volume of this polytope gives the self-intersection number of the linear system.
Let $X$ be an $n$-dimensional projective spherical $G$-variety with a $G$-linearized very ample line bundle ${\mathcal L}$. One associates a convex polytope $\Delta(X, {\mathcal L})$ to $(X, {\mathcal L})$. The construction depends on the combinatorial choice of a {\it reduced word decomposition} $\w$ for the longest element $w_0$ in the Weyl group of~$G$.
A main property of the Newton--Okounkov polytope $\Delta(X, {\mathcal L})$ is that its volume gives a~formula for the self-intersection number of divisor class of ${\mathcal L}$. Namely, \begin{gather} \label{equ-deg} c_1({\mathcal L})^n = n! \vol_n(\Delta(X, {\mathcal L})). \end{gather} The formula \eqref{equ-deg} is equivalent to the Brion--Kazarnovskii formula for the degree of a projective spherical variety (see \cite[Theorem~2.5]{Kaveh-note} as well as \cite{Kazarnovskii} and \cite{Brion}).
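\begin{Ex} As a simple illustration of \eqref{equ-deg}, consider the toric case $G = (\C^*)^2$ acting on $X = \p^2$ with ${\mathcal L} = \mathcal{O}(d)$, $d > 0$. Here the Weyl group is trivial and $\Delta(X, {\mathcal L})$ is the dilated standard simplex $d\Delta_2 = \{(x, y) \mid x, y \geq 0,\ x + y \leq d\}$ of area $d^2/2$, so \eqref{equ-deg} gives $c_1({\mathcal L})^2 = 2! \cdot d^2/2 = d^2$, in agreement with the degree of the $d$-th Veronese embedding of~$\p^2$. \end{Ex}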
\begin{Rem} \label{rem-Brion-Kaz-original-statement} The original versions of \eqref{equ-deg} in \cite{Kazarnovskii} and \cite{Brion}, do not express the answer as volume of a Newton--Okounkov polytope but rather as integral of a certain function over the {\it moment polytope} $\mu(X, {\mathcal L})$. The construction of $\Delta(X, {\mathcal L})$ appears in \cite{Okounkov-sph} and \cite{AB} motivated by a question of the second author. \end{Rem}
\begin{Rem} \label{rem-not-normal} If $X$ is not normal the polytope $\Delta(X, {\mathcal L})$ can still be defined using the integral closure of the ring of sections of ${\mathcal L}$ (in its field of fractions), see \cite[Section~6.2]{KKh-reductive}. \end{Rem}
More generally, let $X$ be a not-necessarily-projective spherical variety and let $E$ be a $G$-linear system on~$X$, that is, $E$ is a finite-dimensional $G$-invariant subspace of $H^0(X, {\mathcal L})$ for a~$G$-linearized line bundle ${\mathcal L}$ on~$X$. Extending the notion of intersection number of divisors on complete varieties, one can talk about intersection number of linear systems. Let $E_1, \ldots, E_n$ be linear systems on $X$. The intersection index $[E_1, \ldots , E_n]$ is defined to be the number of solutions in~$X$ of a generic system of equations $f_1(x) = \cdots = f_n(x) = 0$, where $f_i \in E_i$. When counting the solutions, we ignore the solutions $x$ at which all the sections in some $E_i$ vanish. An important property of the intersection index is multi-additivity with respect to product of linear systems. In \cite[Section~6]{KKh-reductive} the authors define the Newton--Okounkov polytope $\Delta(E)$ and prove the following.
\begin{Prop} \label{prop-Brion-Kaz-linear-system} The intersection index $[E, \ldots, E]$ is equal to $n! \vol_n(\Delta(E))$. \end{Prop}
From the polarization formula in linear algebra the following readily follows. \begin{Cor} \label{prop-int-number-several-divisors} Let ${\mathcal L}_1, \ldots, {\mathcal L}_n$ be $G$-linearized very ample line bundles on a spherical variety~$X$ and take $G$-invariant linear systems $E_i \subset H^0(X, {\mathcal L}_i)$. For any subset $I \subset \{1, \ldots, n\}$ let $\Delta_I = \Delta\big(\prod_{i \in I} E_i\big)$. We have the following formula for the intersection index of the~$E_i$: \begin{gather*} (-1)^n [E_1, \ldots, E_n] = - \sum_{i} \vol_n(\Delta_i) + \sum_{i < j} \vol_n(\Delta_{i,j}) + \cdots + (-1)^n \vol_n(\Delta_{1, \ldots, n}). \end{gather*} \end{Cor}
\begin{Rem} \label{rem-NO-body}The above notion of the Newton--Okounkov polytope of a $G$-linear system over a spherical variety is a special case of the more general notion of a~{\it Newton--Okounkov body} of a~graded linear system on an arbitrary variety (see \cite{KKh-Annals, LM} and the references therein). \end{Rem}
Suppose $E$ is a very ample $G$-linear system on $X$. That is, the Kodaira map of $E$ gives an embedding of $X$ into the dual projective space $\p(E^*)$. Let $Y_E$ denote the closure of the image of $X$ in $\p(E^*)$ and let ${\mathcal L}$ be the $G$-linearized line bundle on $Y_E$ induced by $\mathcal{O}(1)$ on $\p(E^*)$. One then has $\Delta(E) = \Delta(Y_E, {\mathcal L})$. We note that in general~$Y_E$ may not be normal, nevertheless one can still define the polytope $\Delta(Y_E, {\mathcal L})$ (see Remark~\ref{rem-not-normal}).
One can show that the map $E \mapsto \Delta(E)$ is piecewise additive in the following sense. Let $C$ be a~rational polyhedral cone generated by a finite number of $G$-linear systems $E_1, \ldots, E_s$. Then there is a rational polyhedral cone $\tilde{C}$ (living in some appropriate Euclidean space) and a~linear projection $\pi\colon \tilde{C} \to C$ such that for each $G$-linear system $E \in C$ we have $\Delta(E) = \pi^{-1}(E)$. It is then not hard to see that there is a fan $\Sigma$ supported on~$C$ such that the map $E \mapsto \Delta(E)$ is additive when restricted to each cone in $\Sigma$ (see \cite[Proposition~1.4]{KV}).
When the line bundles belong to a cone of the above fan in which the Newton--Okounkov polytope is additive, the formula for their intersection number in Corollary \ref{prop-int-number-several-divisors} can be simplified to the mixed volume of their corresponding Newton--Okounkov polytopes. \begin{Cor} \label{cor-mixed-vol-Brion-Kaz} Suppose $E_1, \ldots, E_n$ are $G$-linear systems which lie in a cone on which the map $E \mapsto \Delta(E)$ is additive. We then have \begin{gather*} E_1 \cdots E_n = n! V(\Delta(E_1), \ldots, \Delta(E_n)),\end{gather*} where $V$ denotes the mixed volume of polytopes in $n$-dimensional Euclidean space. \end{Cor} \begin{proof}The corollary immediately follows from Proposition~\ref{prop-Brion-Kaz-linear-system}, additivity of the Newton--Okounkov polytope and the multi-linearity of mixed volume. \end{proof}
\section{A version of BKK theorem for ring of conditions}\label{sec-BKK-spherical} Proposition \ref{prop-Brion-Kaz-linear-system} and Corollary~\ref{cor-mixed-vol-Brion-Kaz} express the intersection numbers of generic elements of $G$-linear systems on $X=G/H$ in terms of volumes/mixed volumes of certain (virtual) polytopes. In this section we will modify this so that it computes the intersection numbers of hypersurfaces in the ring of conditions of $X$.
We have to deal with the following two problems: (1) In general an algebraic hypersurface in~$X$ is not a section of a $G$-linearized line bundle on $X$. (Although in some cases, for example for $X=G=(\C^*)^n$, any hypersurface in~$X$ is indeed a section of some $G$-linearized line bundle on~$X$.) (2) One has to show that it is possible to make any given finite collection of hypersurfaces ``generic enough'' by moving its members via generic elements of the group~$G$.
Fortunately, it is known how to overcome the first problem: any hypersurface after multiplication by a~natural number $m$ becomes a section of a $G$-linearized line bundle. The second problem is also not complicated to solve (see below).
Let $G/H$ be a spherical homogeneous space of dimension~$n$ and let~$D$ be an effective divisor (i.e., a linear combination of prime divisors with nonnegative coefficients) in~$G/H$. Let ${\mathcal L} = \mathcal{O}(D)$ be the corresponding line bundle on~$G/H$. We would like to associate a Newton--Okounkov polytope to the divisor~$D$ such that the volume of this polytope gives the self-intersection number of $D$ in the ring of conditions of~$G/H$. To this end, we need to equip~${\mathcal L}$ with a $G$-linearization. The following results are well-known (see \cite{KKLV, Popov}).
\begin{Th} \label{th-exist-G-lin} Let $X$ be a normal $G$-variety and let $\mathcal{L}$ be a line bundle on~$X$. Then there is $m>0$ such that $\mathcal{L}^{\otimes m}$ has a $G$-linearization. \end{Th}
Alternatively, there exists a finite covering $\tilde{G} \to G$ where $\tilde{G}$ is a connected reductive group with trivial Picard group. Moreover, after replacing $G$ with $\tilde{G}$, every line bundle on $X$ admits a linearization (see remark after Proposition 2.4 and Proposition 4.6 in \cite{KKLV}). Thus, every hypersurface is a section of a linearizable line bundle.
\begin{Th} \label{th-G-lin-G/H} The assignment $\chi \mapsto \mathcal{L}_\chi$ gives a one-to-one correspondence between characters of $H$ and $G$-linearized line bundles $\mathcal{L}$ on $G/H$. \end{Th}
Let $m>0$ be such that ${\mathcal L}^{\otimes m}$ has a $G$-linearization and fix a $G$-linearization for ${\mathcal L}^{\otimes m}$. Let $s_m \in H^0\big(G/H, {\mathcal L}^{\otimes m}\big)$ be the section defining $mD$, i.e., $mD = \operatorname{div}(s_m)$. Let $E_m \subset H^0\big(G/H, {\mathcal L}^{\otimes m}\big)$ be the $G$-invariant subspace generated by~$s_m$. Let us denote the Newton--Okounkov polytope associated to $E_m$ by $\Delta(mD)$ (see Section~\ref{sec-NO-polytope-sph-var}). We note that the linear system $E_m$ and the corresponding polytope $\Delta(mD)$ depend on the choice of linearization of~$G/H$. We put \begin{gather*} \Delta(D) = (1/m) \Delta(mD).\end{gather*}
\begin{Th} \label{th-main}The self-intersection number of $D$ in the ring of conditions of~$G/H$ is \linebreak $n! \vol_n(\Delta(D))$. \end{Th} \begin{proof} The theorem can be reduced to Proposition~\ref{prop-Brion-Kaz-linear-system} via Kleiman's transversality theorem and existence of a good compactification for~$D$ (Theorem~\ref{th-exist-good-comp}). Without loss of generality let $m=1$ and put $E=E_m$. Let $\overline{X}$ be a spherical embedding of $G/H$ which provides a good compactification for~$D$, i.e., for each $G$-orbit $O\subset \overline{X}$ such that $\overline{D}\cap O\neq \varnothing$ we have $\overline{D}\cap O$ is a hypersurface in~$O$. By Kleiman's transversality theorem if $g_1, \ldots, g_n \in G$ are in general position then for any $G$-orbit $W$ in $\overline{X}$ all the $\overline{g_i \cdot D}\cap W$ intersect strongly transversally in $W$. In particular if $W\neq G/H$ the intersections are empty. Thus the self-intersection number of~$D$ in the ring of conditions is equal to the self-intersection number of the divisors $\overline{g_i \cdot D}$ in $\overline{X}$.
On the other hand, we know the following: take $s_1, \ldots, s_n \in E$ and let $H_i = \{x \in G/H \,|\, s_i(x) \allowbreak = 0 \}$ with closure $\overline{H}_i \subset \overline{X}$. Suppose all the $\overline{H}_i$ intersect transversally. In particular, $\overline{H}_1 \cap \cdots \cap \overline{H}_n$ lies in $G/H$, that is, there are no intersection points at infinity. Then $[E, \ldots, E]$ is equal to $|H_1 \cap \cdots \cap H_n|$, the number of solutions $x \in G/H$ of the system $s_1(x) = \cdots = s_n(x) = 0$ (we note that since $E$ is $G$-invariant the set of common zeros of all the sections in~$E$ is empty). It follows that if the $g_i \in G$ are in general position we have
\begin{gather*} [E, \ldots, E] = |\overline{g_1 \cdot D} \cap \cdots \cap \overline{g_n \cdot D}|. \end{gather*} Now since $G$ is connected, the action of $G$ on the Picard group of $\overline{X}$ is trivial. Hence for any $g \in G$, the divisor $\overline{g \cdot D}$ is linearly equivalent to~$\overline{D}$. Putting everything together we conclude that \begin{gather*} [E, \ldots, E] = \overline{D}^n.\end{gather*}
The theorem now follows from Proposition \ref{prop-Brion-Kaz-linear-system}. \end{proof}
Let $D_1, \ldots, D_n$ be $n$ irreducible hypersurfaces in $G/H$. For each $i$, fix a $G$-linearization for the line bundle $\mathcal{O}(D_i)$. For each collection $I = \{i_1, \ldots, i_s\} \subset \{1, \ldots, n\}$ equip the line bundle $\mathcal{O}(D_{i_1} + \cdots + D_{i_s})$ with the $G$-linearization induced from those of the $D_i$. Let $\Delta_I$ be the Newton--Okounkov body $\Delta(D_{i_1} + \cdots + D_{i_s})$. The following is an immediate corollary of Theorem \ref{th-main}. \begin{Cor} \label{cor-main} The intersection number of the $D_i$ in the ring of conditions is given by the following formula\begin{gather*} (-1)^n D_1 \cdots D_n = - \sum_{i} \vol_n(\Delta_i) + \sum_{i < j} \vol_n(\Delta_{i,j}) + \cdots + (-1)^n \vol_n(\Delta_{1, \ldots, n}). \end{gather*} \end{Cor}
\section{Two algebra constructions} \label{sec-alg-construction} In this section we discuss two similar constructions of graded commutative algebras with Poincar\'{e} duality associated with a $\Q$-vector space $V$. The first construction uses a symmetric $n$-linear function $F$ on $V$. The second construction uses a homogeneous degree $n$ polynomial $P$ on $V$. The constructions produce isomorphic graded algebras with Poincar\'{e} duality if $F$ and $P$ satisfy $n! F(x,\dots,x)=P(x)$ for all $x \in V$. We will give the results without proofs as all the proofs are straightforward. \subsection{Algebra constructed from a symmetric multi-linear function} Let $V$ be a (possibly infinite-dimensional) vector space over $\Q$ equipped with a nonzero symmetric $n$-linear function $F\colon V \times \cdots \times V \to \Q$. To $(V, F)$ one can associate a graded $\Q$-algebra $A _F= A = \bigoplus_{i=0}^n A_i$ by the following construction. Let $\Sym(V)= \bigoplus_{i \geq 0} \Sym^i(V)$ be the symmetric algebra of the vector space $V$. With $F$ one can associate the function $F_s$ on $\Sym(V)$ as follows: (1)~if $a\in \Sym^n(V)$ and $a=\sum \lambda^iv_1^i\cdots v^i_n$ where $\lambda_i\in \Q$, $v^i_j\in V$ then $F_s(a)=\sum\lambda_i F\big(v_1^i, \ldots, v^i_n\big)$; (2)~if $a\in \Sym^k(V)$ where $k \neq n$ then $F_s(a)=0$. Let $I_{F_s}\subset \Sym(V)$ be the subset defined by $a\in I_{F_s}$ if and only if for any $b\in \Sym(V)$ the identity $F_s(ab)=0$ holds. It is easy to see that~$I_{F_s}$ is a homogeneous ideal in $\Sym(V)$. We define $A$ to be the graded algebra $\Sym(V) / I_{F_s}$. It follows from the construction that $A$ has the following properties: \begin{itemize}\itemsep=0pt \item $A_0=\Q$. \item $A_k=0$ for $k>n$. \item $\dim _{\Q}A_n=1$, moreover the function $F_s$ induces a linear isomorphism $f_s\colon A_n \rightarrow \Q$. \item $A_1$ coincides with the image of $V=\Sym^1(V)$ in $A=\Sym(V)/I_{F_s}$. \item $A$ is generated as an algebra by $A_0$ and $A_1$. \item The pairing $B_k\colon A_k\times A_{n-k}\rightarrow \Q$ by formula $B_k(a_k, a_{n-k})=f_s(a_k a_{n-k})$ is non-degenerate. It provides a ``Poincar\'e duality'' on~$A$. \end{itemize}
\subsection{Algebra constructed from a homogeneous polynomial} Let $V$ be a (possibly infinite-dimensional) $\Q$-vector space equipped with a homogeneous deg\-ree~$n$ polynomial $P\colon V \to \Q$ (recall that a function $P\colon V \to \Q$ is a polynomial or a~homogeneous degree $n$ polynomial if its restriction to any finite-dimensional subspace $V_1\subset V$ is correspondingly a polynomial or a homogeneous degree~$n$ polynomial). To~$(V, P)$ one can associate a graded $\Q$-algebra $A_P = A = \bigoplus_{i=0}^n A_i$ as we explain below.
First we recall the algebra $\D = \D_V$ of constant coefficient differential operators on~$V$. For a~vector $v \in V$, let $L_v$ be the differentiation operator (Lie derivative) on the space of polynomial functions on $V$ defined as follows. Let $f$ be a polynomial function on~$V$, then \begin{gather*} L_v(f)(x) = \lim_{t \to 0} \frac{f(x+tv) - f(x)}{t}.\end{gather*} The algebra $\D$ is defined to be the subalgebra of the $\Q$-algebra of linear operators on the space of polynomials on~$V$ generated by the Lie derivatives~$L_v$ for all $v \in V$ and by multiplication by scalars $c \in \Q$. The algebra $\D$ is commutative since Lie derivatives against constant vector fields commute. The algebra $\D$ can be naturally identified with the symmetric algebra $\Sym(V)$. When $V \cong \Q^n$ is finite-dimensional, $\D$ can be realized as follows: Fix a basis for~$V$ and let $(x_1, \ldots, x_n)$ denote the coordinate functions with respect to this basis. Each element of $\D$ is then a polynomial expression, with constant coefficients, in the differential operators $\xi_1=\partial/\partial x_1, \ldots,\xi_n= \partial/\partial x_n$. That is,
\begin{gather*} \D =\bigg\{ f(\partial/\partial x_1, \ldots, \partial/\partial x_n) \,|\, f = \sum_{\alpha = (a_1, \ldots, a_n)} c_\alpha \xi_1^{a_1} \cdots \xi_n^{a_n} \in \Q[\xi_1, \ldots, \xi_n]\bigg\}. \end{gather*}
Now let $I_P$ be the ideal of all differential operators $D \in \D$ such that $D (P) = 0$, i.e., those differential operators that annihilate $P$. We define $A_P$ to be the quotient algebra $\D / I_P$. The algebra $A = A_P$ has a natural grading $A= \bigoplus_{i=0}^n A_i$ (this is because $I_P$ is a homogeneous ideal). It follows from the construction that $A = A_P$ has the following properties: \begin{itemize}\itemsep=0pt \item $A_0=\Q$. \item $A_k=0$ for $k>n$. \item $\dim _{\Q}A_n=1$. Moreover $P$ defines a linear function on homogeneous order $n$ operators $D_n$ sending $D_n$ to the constant $D_n(P)$. It induces a linear isomorphism $A_n \to \Q$. \item $A_1$ coincides with the image of $V$ in $A=\D / I_P$. \item $A$ is generated as an algebra by $A_0$ and $A_1$. \item One can define the pairing $B_k\colon A_k\times A_{n-k}\rightarrow \Q$ by $B_k(a_k, a_{n-k}) = D_k \circ D_{n-k}(P)$ where~$D_k$ and~$D_{n-k}$ are any preimages of the elements $a_k,a_{n-k}\in \D/I_P$ in~$\D$. This pairing is well-defined since the preimages $D_k$ and $D_{n-k}$ are defined up to addition of elements from the ideal~$I_P$ and elements from~$I_P$ annihilate~$P$. It is easy to show that the pairing is non-degenerate and gives a ``Poincar\'e duality'' on $A$. \end{itemize}
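\begin{Ex} As a simple illustration, let $V = \Q^2$ with coordinates $(x, y)$ and $P(x,y) = x^2 y$, a homogeneous polynomial of degree $n = 3$. Writing $\xi_1 = \partial/\partial x$, $\xi_2 = \partial/\partial y$, a monomial $\xi_1^{a}\xi_2^{b}$ annihilates $P$ exactly when $a \geq 3$ or $b \geq 2$, while the values $\xi_1^{a}\xi_2^{b}(P)$ for $a \leq 2$, $b \leq 1$ are pairwise distinct monomials and hence linearly independent. Therefore $I_P = \big(\xi_1^3, \xi_2^2\big)$ and $A_P \cong \Q[\xi_1, \xi_2]/\big(\xi_1^3, \xi_2^2\big)$, a graded algebra with Poincar\'e duality whose top graded piece is spanned by the class of $\xi_1^2\xi_2$. Note that $P$ is proportional to the volume polynomial of $\p^2 \times \p^1$ and $A_P$ is isomorphic to $H^*(\p^2 \times \p^1, \Q)$. \end{Ex}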
One has the following. \begin{Th} \label{th-ring-of-diff-op} Let $A = \bigoplus_{i=0}^n A_i$ be a graded algebra over $\Q$ with the following properties: \begin{itemize}\itemsep=0pt \item[$(1)$] $A_0=\Q$. \item[$(2)$] There is a linear isomorphism $f\colon A_n \to \Q$. \item[$(3)$] $A$ is generated as an algebra by $A_0$ and $A_1$. \item[$(4)$] There is a linear projection $\pi\colon V\to A_1$ of a given $\Q$-vector space $V$ onto $A_1$. \item[$(5)$] For any $0\leq k\leq n$ the pairing $B_k\colon A_k \times A_{n-k} \to \Q$ defined by $B_k(a_k,a_{n-k})=f(a_ka_{n-k})$ is non-degenerate. \end{itemize} Then $A$ can be described as follows: \begin{itemize}\itemsep=0pt \item[$(a)$] $A$ is isomorphic to the algebra associated to $(V, P)$ where $P$ is the homogeneous degree $n$ polynomial on $V$ defined by \begin{gather*} P(x)=(1/n!) f\big(y^n\big),\end{gather*} where $x\in V$ and $y=\pi(x)$. \item[$(b)$] $A$ is isomorphic to the algebra associated to $(V, F)$ where~$F$ is the symmetric $n$-linear function on~$V$ equal to the polarization of the polynomial~$P$ from~$(a)$ multiplied by~$n!$. \end{itemize} \end{Th}
A generalization of Theorem \ref{th-ring-of-diff-op} for commutative algebras $A$ with Poincar\'{e} duality that are not necessarily generated by~$A_0$ and~$A_1$ can be found in \cite{EstKhKaz}. Also some related material can be found in \cite[Exercise~21.7]{Eisenbud} and \cite{Kaveh-note}.
\section[A description of ring of complete intersections of $G/H$]{A description of ring of complete intersections of $\boldsymbol{G/H}$} \label{sec-descp-ring-comp-int}
This section contains the main result of the paper which gives descriptions of the ring of complete intersections of a spherical homogeneous space in terms of volumes of polytopes. It is an analogue of the description of ring of conditions of torus~$(\C^*)^n$ in~\cite{Kaz-Khov}.
Recall $\mathcal{R}'(G/H)$ denotes the ring of complete intersections where $G/H$ is a spherical homogeneous space. We use Theorem~\ref{th-ring-of-diff-op} to give two descriptions of the ring $\mathcal{R}'_\Q(G/H) = \mathcal{R}'(G/H) \otimes_{\Z} \Q$ in terms of volumes of polytopes. Let $\Div(G/H)$ be the group of divisors on~$G/H$ with $\Div_\Q(G/H) = \Div(G/H) \otimes_\Z \Q$ the corresponding $\Q$-vector space. For an effective divisor~$D$ in~$G/H$ consider the associated Newton--Okounkov polytope~$\Delta(D)$ introduced in Section~\ref{sec-BKK-spherical}.
\begin{Th}The ring $\mathcal{R}'_\Q(G/H)$ is isomorphic to the algebra associated to the vector space $V = \Div_\Q(G/H)$ and the $n$-linear function $F\colon V \times \cdots \times V \to \Q$ whose value on an $n$-tuple $D_1, \ldots, D_n$ of effective divisors in $G/H$ is given by $$(-1)^n F(D_1, \ldots, D_n) = - \sum_{i} \vol_n(\Delta_i) + \sum_{i < j} \vol_n(\Delta_{i,j}) + \cdots + (-1)^n \vol_n(\Delta_{1, \ldots, n}).$$ Here for $I \subset \{1, \ldots, n\}$ we set $\Delta_I =\Delta\big(\sum_{i \in I} D_i\big)$. Moreover, if $D_1, \ldots, D_n$ lie in a cone in $\Div_\Q(G/H)$ such that $D \mapsto \Delta(D)$ is linear then $F(D_1, \ldots, D_n) = n! V(\Delta(D_1), \ldots, \Delta(D_n))$ where~$V$ is the mixed volume $($see Section~{\rm \ref{sec-NO-polytope-sph-var})}. \end{Th} \begin{proof}The theorem follows from Theorem \ref{th-main}, Corollaries~\ref{cor-main} and~\ref{cor-mixed-vol-Brion-Kaz}, and Theo\-rem~\ref{th-ring-of-diff-op}. \end{proof}
Similarly, we have the following description of $\mathcal{R}'_\Q(G/H)$ as a quotient of the ring of dif\-fe\-rential operators.
\begin{Th} \label{th-ring-complete-intersec-vol-poly} The ring $\mathcal{R}'_\Q(G/H)$ is isomorphic to the algebra associated to the vector spa\-ce~$V = \Div_\Q(G/H)$ and the polynomial~$P$ whose value on an effective divisor~$D$ is equal to~$\vol_n(\Delta(D))$. \end{Th} \begin{proof}The theorem follows from Theorem \ref{th-main}, Corollary \ref{cor-main} and Theorem~\ref{th-ring-of-diff-op}. \end{proof}
Finally, the ring of complete intersections $\mathcal{R}'(G/H)$ itself can be realized as the subring of~$\mathcal{R}'_\Q(G/H)$ generated by~$\Z$ and the image of~$\Div(G/H)$.
\section{Ring of complete intersections of an arbitrary variety}\label{sec-ring-complete-intersec-general}
Let $X$ be a variety (not necessarily complete). In \cite{KKh-MMJ, KKh-CMB} the authors consider the collection $\K(X)$ of all finite-dimensional vector subspaces of the field of rational functions $\C(X)$. This collection is a semigroup with respect to the multiplication of subspaces. Namely, for $L, M \in \K(X)$ we define
\begin{gather*} LM = \text{span}\{fg \,|\, f \in L, g \in M\}.\end{gather*} We then consider the Grothendieck group $\G(X)$ associated to the semigroup of subspaces $\K(X)$.
Given $L_1, \ldots, L_n \in \K(X)$, one introduces the {\it intersection index} $[L_1, \ldots, L_n] \in \Z$. The intersection index $[L_1, \ldots , L_n]$ is the number of solutions $x \in X$ of a generic system of equations $f_1(x) = \cdots = f_n(x) = 0$, where $f_i \in L_i$. In counting the solutions, we neglect the solutions $x$ at which all the functions in some space $L_i$ vanish as well as the solutions at which at least one function from some space $L_i$ has a pole. In \cite{KKh-MMJ, KKh-CMB} it is shown that the intersection index is well-defined and is multi-additive with respect to multiplication of subspaces. Thus it naturally extends to the Grothendieck group $\G(X)$ and to $\G_\Q(X) = \G(X) \otimes_\Z \Q$ the corresponding $\Q$-vector space. In \cite[Section~4.3]{KKh-Annals} we associate to each element $L\in \K(X)$ its Newton--Okounkov body $\Delta(L) \subset \R^n$ in such a way that the following conditions hold: \begin{enumerate}\itemsep=0pt \item[1)] $\Delta (L_1)+\Delta (L_2) \subset \Delta (L_1 L_2)$, \item[2)] $[L,\dots,L]=n! \vol_n(\Delta(L))$. \end{enumerate}
We view the intersection index of subspaces of rational functions as a birational version of the intersection theory of (Cartier) divisors (more generally linear systems). The main result in \cite{KKh-CMB} is that the Grothendieck group~$\G(X)$ is naturally isomorphic to the group of b-divisors (introduced by Shokurov). Roughly speaking, a b-divisor is an equivalence class of Cartier divisors on any birational model of~$X$. The isomorphism sends the intersection index to the usual intersection number of Cartier divisors.
\begin{Def}[ring of complete intersections] \label{def-ring-of-complete-intersec-general} We call the ring associated to the $\Q$-vector space $\G_\Q(X)$ and the intersection index, the {\it ring of complete intersections} over~$\Q$ of~$X$ and denote it by $\A_\Q(X)$. We call the subring $\A(X)$ of $\A_\Q(X)$ generated by~$\Z$ and the image of~$\G(X)$, the {\it ring of complete intersections} of~$X$. \end{Def}
As above Theorem~\ref{th-ring-of-diff-op} suggests two descriptions of the ring $\A(X)$ of complete intersections of~$X$ in terms of volumes of Newton--Okounkov bodies associated to elements of~$\G(X)$.
\LastPageEnding
\end{document} | arXiv |
Journal of Surfactants and Detergents
Comparison of Oleo- vs Petro-Sourcing of Fatty Alcohols via Cradle-to-Gate Life Cycle Assessment
Journal of Surfactants and Detergents, Sep 2016
Jignesh Shah, Erdem Arslan, John Cirucci, Julie O'Brien, Dave Moss
Alcohol ethoxylates surfactants are produced via ethoxylation of fatty alcohol (FA) with ethylene oxide. The source of FA could be either palm kernel oil (PKO) or petrochemicals. The study aimed to compare the potential environmental impacts for PKO-derived FA (PKO-FA) and petrochemicals-derived FA (petro-FA). Cradle-to-gate life cycle assessment has been performed for this purpose because it enables understanding of the impacts across the life cycle and impact categories. The results show that petro-FA has overall lower average greenhouse gas (GHG) emissions (~2.97 kg CO2e) compared to PKO-FA (~5.27 kg CO2e). (1) The practices in land use change for palm plantations, (2) end-of-life treatment for palm oil mill wastewater effluent and (3) end-of-life treatment for empty fruit bunches are the three determining factors for the environmental impacts of PKO-FA. For petro-FA, n-olefin production, ethylene production and thermal energy production are the main factors. We found the judicious decisions on land use change, effluent treatment and solid waste treatment are key to making PKO-FA environmentally sustainable. The sensitivity results show the broad distribution for PKO-FA due to varying practices in palm cultivation. PKO-FA has higher impacts on average for 12 out of 18 impact categories evaluated. For the base case, when accounted for uncertainty and sensitivity analyses results, the study finds that marine eutrophication, agricultural land occupation, natural land occupation, fossil depletion, particulate matter formation, and water depletion are affected by the sourcing decision. The sourcing of FA involves trade-offs and depends on the specific practices through the PKO life cycle from an environmental impact perspective.
Journal of Surfactants and Detergents, November 2016, Volume 19, Issue 6, pp 1333–1351. Open Access Original Article. First Online: 12 September 2016; Received: 21 November 2014; Accepted: 09 August 2016.
Keywords: Life cycle assessment; Alcohol ethoxylates; Fatty alcohol; Palm kernel oil; Oleochemicals; Cradle-to-gate analysis; Greenhouse gases; Environmental impacts; Malaysian palm LCA

Abbreviations

AE: Alcohol ethoxylates
AGB: Above ground biomass
BGB: Below ground biomass
Ca: Calcium
CO2e: Carbon dioxide equivalent of global warming potential
COD: Chemical oxygen demand
DOM: Dead organic matter
EFB: Empty palm fruit bunches
EI3.0: EcoInvent v3.0 database
EO: Ethylene oxide
EPA: US Environmental Protection Agency
FA: Fatty alcohol
FFB: Fresh palm fruit bunches
GHG: Greenhouse gases
GLO: Global
K: Potassium
K2O: Potassium oxide
LCA: Life cycle assessment
LCI: Life cycle inventory
LCIA: Life cycle impact assessment
LUC: Land use change
Mg: Magnesium
MY: Malaysia
N: Nitrogen
NP: Nonylphenol
NPEs: Nonylphenol ethoxylates
Oleo-FA: Fatty alcohol produced from oleochemical feedstock
P: Phosphorus
P2O5: Phosphorus pentaoxide
Petro-FA: Fatty alcohol produced from petrochemical feedstock
PKE: Palm kernel extract
PKO: Palm kernel oil
PKO-FA: Fatty alcohol produced from palm kernel oil as feedstock
PKS: Palm kernel shells
PO: Palm oil
POME: Palm oil mill effluent
RoW: Rest of world
SERC: Southeastern Electric Reliability Council
US: United States of America

Electronic supplementary material: The online version of this article (doi: 10.1007/s11743-016-1867-y) contains supplementary material, which is available to authorized users.

Introduction

Non-ionic surfactants are used in many products such as "detergents, cleaners, degreasers, dry cleaning aids, petroleum dispersants, emulsifiers, wetting agents, adhesives, agrochemicals, including indoor pesticides, cosmetics, paper and textile processing formulations, prewash spotters, metalworking fluids, oilfield chemicals, paints and coatings, and dust control agents" [1]. Nonylphenol ethoxylates (NPE) are popular non-ionic surfactants "due to their effectiveness, economy and ease of handling and formulating" [2]. However, NPE are highly toxic to aquatic organisms [1, 2] and degrade into nonylphenol (NP), which "is persistent in the aquatic environment, moderately bioaccumulative, and extremely toxic to aquatic organisms" [1]. Due to these concerns, the US Environmental Protection Agency (EPA) and detergent manufacturers cooperated to eliminate their use in household laundry detergents [3]. EPA has also laid out an action plan under the Toxic Substances Control Act to address their widespread use in large quantities in industrial laundry detergents [3].

Due to the higher biodegradability and unobjectionable aquatic toxicity profiles of the degradation products, alcohol ethoxylates (AE) are used to replace NPE [2]. AE are also nonionic surfactants that are produced via ethoxylation of fatty alcohol (FA) with ethylene oxide (EO). This involves condensation of polyethylene glycol ether groups on FA chains. Depending on the FA structure and number of polyether units, the physical and chemical properties of AE vary [4]. When the chain length of FA ranges in C9–C16, the properties are suitable for detergent production [4] for industrial and institutional cleaning products including hard surface cleaners and laundry detergents. In addition to these product stewardship practices, sustainability-minded companies are also evaluating the environmental impact of their operations, as well as the burdens from the other phases of the product life cycle, including raw material sourcing.
With respect to raw material sourcing, a bio-based value chain is often assumed to have a lower environmental impact, at least from a greenhouse gas (GHG) emissions perspective. For AE producers, the source of FA could be either bio-based oleochemicals (oleo-FA) or petrochemicals (petro-FA). These AE with like structures (linearity-wise and chain lengths) are readily biodegradable independent of alcohol feedstock and their aquatic toxicities are a function of FA chain length, branching and amount of ethoxylation [5]. These similarities in the environmental performance at the product's use and end-of-life phases do not capture differences in environmental impacts during the raw material production. The detailed understanding of the raw material requirements, energy consumption, waste generation and disposal, and emissions, along with the resulting impacts on the environment, is important for sustainability-minded AE consumers and other supply chain participants. Such an understanding could be gained through a life cycle assessment (LCA) approach as it allows incorporation of all relevant life cycle stages along with diverse types of environmental impacts.

LCA is the comprehensive evaluation of the process in a cradle-to-grave, cradle-to-gate or gate-to-gate fashion to understand the environmental aspects of a product or a service. An LCA study involves understanding the assessment goal and scope; estimating the amount of raw materials and energy input, waste generated, and emissions from the process for all the relevant life cycle stages (Life Cycle Inventory, LCI); translating LCI results to understand and evaluate the potential environmental impacts (Life Cycle Impact Assessment, LCIA); and formulating conclusions and recommendations based on the results.

LCA has been used since the 1960s and its application for surfactants started with the development of LCIs [6, 7, 8]. These early studies compiled data on the natural resources consumed, wastes generated, and emissions for then-industry practices for AE production from both petrochemical and oleochemical feedstocks. However, the impacts from land transformation for palm plantation were not covered and the scope was limited to LCI due to lack of agreed-upon LCIA methods. The results from these LCI studies did not find any scientific basis for any single feedstock source to be environmentally superior [6, 8] as "benefits in one direction (e.g., renewability) are offset by liabilities in another (intensive land-use requirements)" [6]. LCA studies for detergents since then have been based on the results of these earlier studies and concern products with AE and FA as ingredients, such as that by Kapur et al. 2012 [9]. In 2007, the 'ecoinvent data v2.0' project [4] updated the LCI results from the earlier studies with land use, transportation and infrastructure information. However, again the LCIA and conclusions steps were not done. The LCA results from production of palm-derived oil, which is used for FA production, have been published [10, 11, 12, 13]. The scopes of these studies vary from evaluating the impacts of oil from palm fruits and/or palm kernels [11, 12] to evaluating the various practices for palm oil mill operations [10, 13]. Overall, there has been no LCA study with LCIA results evaluating impacts of feedstocks for FA production. This study aims to contribute towards this gap and presents the findings for understanding the relative environmental performances of sourcing FA from petrochemical and palm kernel oil (PKO) feedstocks.
These findings are expected to contribute to the discussions towards such an understanding rather than a final conclusion as such.

Experimental Methods

While LCA has been around since the 1960s, it was not widely adopted until the early 1990s. Currently, LCA is guided by international standards (ISO 14040 to ISO 14044), which have proposed the framework for conducting an LCA study [14]. As per this framework, LCA involves four iterative steps: (1) Goal and scope definition, (2) Life cycle inventory analysis (LCI), (3) Life cycle impact assessment (LCIA) and (4) Interpretation. The intended and expected applications of the results help define the goal and scope. The results and findings of LCI are checked against the goal and scope to decide whether the goal and scope should be modified or additional effort should be spent on the LCI step. Similarly, LCIA results and findings are evaluated against the previous two steps. The results from the LCI and LCIA steps are interpreted with respect to the goal and scope and for robustness. The results of this fourth step are evaluated against the other three steps for any modification or additional efforts. This standard methodology was used for this study and the detailed descriptions can be found in ISO 14040 through ISO 14044.

The goal of this study was to create an understanding of the relative environmental impacts for selecting between petro-FA and PKO-FA for use in AE production. A comparative LCA study was performed because it allows simplification of the scope to the dissimilar parts of each process. FA are predominantly linear and monohydric aliphatic alcohols with chain lengths between C6 and C22 [4]. Despite the differences in FA sourcing, "the chemical and physical properties of the final product [AE] are similar for all three pathways [petrochemical, PKO, coconut oil], provided their carbon chain length and ethoxylate distribution is similar" [4]. However, depending on the catalyst and olefins used, not all petro-FA produced via hydroformylation technology compete with PKO-FA [15]. The scope of this study has been limited to FA that could be used interchangeably irrespective of feedstock. Once a FA is produced and delivered, the environmental impacts are similar irrespective of the FA sourcing decision. Likewise, FA sourcing decisions do not impact AE use and AE end-of-life treatment. Hence, a cradle-to-gate type boundary has been selected for this study (see Fig. 1) and all the results have been converted to one kg of FA delivered to the AE production facility. In LCA terms, the functional unit for this study is one kg of FA delivered to an AE production facility in the Gulf Coast region of the United States (US). The study has been performed through modeling in the SimaPro 8.0 software for LCA studies.

Fig. 1 Major process steps for the various fatty alcohol production pathways. Adapted from [4]

The modeling in LCA requires input of quantities of raw materials and energy required, waste generated and emissions from the FA production process. Similarly, the production and distribution of these raw materials and their utilization generate the environmental impacts. For PKO-FA, the impacts are also generated from the land transformation for palm plantations and from the waste generated during the palm oil mill operation. For all these processes and the impacts, including the production and delivery of FA, the data used for this study are secondary and literature data.
Petro-FA

The petro-FA can be produced either via the Ziegler process using hydrogenated catalyst triethylaluminium for alkylation of ethylene, or via the Oxo process using syngas for hydroformylation of long chain olefins [4]. The Ziegler process involves hydrogenation, ethylation, growth reaction, oxidation and hydrolysis of ethylene over aluminum powder in the presence of a hydrocarbon solvent. While the solvent is recovered, aluminum exits the system as the co-product alumina hydrate. Alkanes and oxygen-containing compounds are formed as byproducts [16]. The Oxo process involves catalytic hydroformylation, catalyst recovery, catalytic hydrogenation of intermediate aldehydes and alcohol distillation of olefins and synthesis gas. While the catalyst consumption is minimal here, there are isomerization byproducts formed during hydroformylation, which are taken out during distillation as bottom heavies and overhead lights [16].

The EcoInvent 3.0 (EI3.0) dataset for petro-FA production ("Fatty alcohol {RoW}| production, petrochemical | Alloc Def, U") includes inputs and emissions reflecting a mix of 82 % of fatty alcohols produced with the Oxo process and 18 % produced by the Ziegler process. This dataset has taken the material inputs (ethylene, n-olefin, natural gas and crude oil), energy inputs (heat and electricity), solid waste generation, emissions to air, emissions to water, and impacts from transportation from literature sources, while water consumption and infrastructure were estimated. The disposal of solid waste is included via the process for municipal solid waste incineration and the effluent is captured through emissions to water. Further, it must be noted that this 'gate-to-gate' process also includes the impacts from some upstream processes (see the Petro-FA Upstream section). Table 1 summarizes the gate-to-gate LCI for petro-FA production.
Table 1 'Gate-to-gate' LCI for fatty alcohol production and delivery

| Item | Petro-FA | PKO-FA | Data sources |
| --- | --- | --- | --- |
| Raw materials/feedstocks | | | |
| N-olefins | 0.778 kg | | Petro: literature value by EI3.0 |
| Ethylene | 0.177 kg | | Petro: literature value by EI3.0 |
| Natural gas | 0.0762 m3 | 0.0125 m3 | Petro: literature value by EI3.0; PKO: adapted data from ECOSOL study by EI3.0 |
| Crude oil | 0.012 kg | | Petro: literature value by EI3.0 |
| Aluminum powder | 0.0083 kg | | Petro: estimated by author based on stoichiometry |
| Cobalt | 1.39E-5 kg | | Petro: estimated by author based on stoichiometry |
| Palm kernel oil | | 0.9999 kg | PKO: estimated by EI3.0 based on ECOSOL study data |
| Hydrogen | | 0.006 kg | PKO: estimated by EI3.0 based on ECOSOL study data |
| Utilities and infrastructure | | | |
| Water (cooling) | 0.024 m3 | 0.024 m3 | Assumed by EI3.0 based on literature for a large chemical plant |
| Water (process) | 0.006 m3 | 0.006 m3 | Estimated by EI3.0 as 25 % of the cooling water amount |
| Heat | 5.81 MJ | 11.83 MJ | PKO: estimated by EI3.0 based on ECOSOL study data |
| Electricity | 0.166 kWh | 0.161 kWh | |
| Transportation (road) | 0.06 tkm | 0.06 tkm | Updated by author based on the geographic scope |
| Transportation (ocean) | 0.0 tkm | 20 tkm | Updated by author based on the geographic scope |
| Chemical factory | 4E-10 plant | 4E-10 plant | Estimated by EI3.0 from a large chemical plant |
| Byproducts | | | |
| Alumina | −0.0157 kg | | Petro: estimated by author based on stoichiometry |
| Solid waste | | | |
| Solid waste incinerated | 0.00339 kg | 0.0195 kg | Petro: literature value by EI3.0; PKO: adapted data from ECOSOL study by EI3.0 |
| Direct air emissions | | | |
| Carbon dioxide, fossil | | 6.1E-5 kg | PKO: estimated by EI3.0 based on ECOSOL study data |
| Non-methane volatile organic compounds | | 2.05E-4 kg | PKO: estimated by EI3.0 based on ECOSOL study data |
| Particulates, >10 µm | 7.95E-6 kg | | Petro: literature value by EI3.0 |
| Particulates, 2.5–10 µm | 1.07E-5 kg | | Petro: literature value by EI3.0 |
| Particulates, <2.5 µm | 6.21E-6 kg | | Petro: literature value by EI3.0 |
| Nitrogen oxides | 2.06E-4 kg | | Petro: literature value by EI3.0 |
| Ammonia | 1.68E-5 kg | | Petro: literature value by EI3.0 |
| Water vapor | 0.0105 kg | 0.0105 kg | EI3.0 calculated value based on literature values and expert opinion |
| Sulfur dioxide | 7.5E-4 kg | | Petro: literature value by EI3.0 |
| Carbon monoxide, fossil | 1.41E-4 kg | | Petro: literature value by EI3.0 |
| Direct water emissions | | | |
| Wastewater effluent | 0.0195 m3 | 0.0195 m3 | EI3.0 calculated value based on literature values and expert opinion |
| Ammonium, ion | 8.42E-6 kg | | Petro: literature value by EI3.0 |
| COD, chemical oxygen demand | 1.2E-4 kg | 1.33E-3 kg | Petro: calculated by EI3.0 as 2*BOD5; PKO: adapted data from ECOSOL study by EI3.0 |
| TOC, total organic carbon | 4.45E-5 kg | 4.93E-4 kg | Calculated by EI3.0 as COD/2.7, where COD is measured in g O2 |
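The inventory in Table 1 lends itself to a simple machine-readable form. The snippet below is a minimal illustrative sketch (not part of the original study) that stores a few of the Table 1 entries per kilogram of fatty alcohol and scales them to an arbitrary production amount; the dictionary keys and the `scale_inventory` helper are hypothetical names chosen for the example.

```python
# Minimal sketch: a few gate-to-gate LCI entries from Table 1, expressed per kg of
# fatty alcohol delivered. Values are copied from the table; None means the flow
# does not apply to that route. Names below are illustrative, not from the paper.
GATE_TO_GATE_LCI = {
    "n-olefins (kg)":               {"petro_fa": 0.778,   "pko_fa": None},
    "ethylene (kg)":                {"petro_fa": 0.177,   "pko_fa": None},
    "palm kernel oil (kg)":         {"petro_fa": None,    "pko_fa": 0.9999},
    "natural gas (m3)":             {"petro_fa": 0.0762,  "pko_fa": 0.0125},
    "heat (MJ)":                    {"petro_fa": 5.81,    "pko_fa": 11.83},
    "electricity (kWh)":            {"petro_fa": 0.166,   "pko_fa": 0.161},
    "solid waste incinerated (kg)": {"petro_fa": 0.00339, "pko_fa": 0.0195},
}

def scale_inventory(route: str, fa_mass_kg: float) -> dict:
    """Scale the per-kg inventory of one route to fa_mass_kg of fatty alcohol."""
    return {
        flow: per_kg[route] * fa_mass_kg
        for flow, per_kg in GATE_TO_GATE_LCI.items()
        if per_kg[route] is not None
    }

# Example: gate-to-gate flows for one tonne of PKO-derived fatty alcohol.
print(scale_inventory("pko_fa", 1000.0))
```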
While this EI3.0 petro-FA process is fairly comprehensive, the dataset is for technology in the mid-1990s as practiced in Europe for the "Rest of World" (RoW) region. The transportation impacts are based on average distances and commodity flow surveys. It is unclear how the various byproducts and waste streams are handled. In order to address these concerns, the original dataset from EI3.0 has been modified as per the following discussions.

Petro-FA Upstream

Since the dataset is for a region other than the US, there could be an effect on the results due to potential differences in the production process, differences in the electricity grid mix and heat generation mix for FA production, differences in the transportation and so on. The dataset for petro-FA in EI3.0 for the RoW region was generated via modification of the Europe dataset by updating the electricity grid mixes, transportation impacts and heat generation impacts. The dataset description is said to be valid from 1995 till 2013. The approach used by EI3.0 has been adapted to obtain a dataset for the US Gulf Coast region. The electricity grid mix was updated to the Southeastern Electric Reliability Council (SERC). The heat generation process used in the petro-FA dataset and the raw material n-olefin production dataset were changed to "Heat, central or small-scale, natural gas {SERC}| heat production, natural gas, at boiler condensing modulating <100 kW | Alloc Def, U". This dataset for heat was derived from that for Switzerland ("Heat, central or small-scale, natural gas {CH}| heat production, natural gas, at boiler condensing modulating <100 kW | Alloc Def, U") provided by SimaPro 8.0 by updating the natural gas source to be from North America, the emissions profile for CO2, CO, CH4, N2O, NOX, SO2, lead, mercury and PM10 as per NREL data [17], and the electricity to the SERC grid.

Based on the AE production facility location, it is expected that the natural gas produced in the US is delivered via pipeline to the FA manufacturing facility in the Gulf Coast region of the US for petro-FA. This petro-FA is expected to be delivered via truck to the AE manufacturing facility. The transportation distance from the FA production facility to the AE production facility is estimated to be ~60 km for the respective plants located in the US Gulf Coast region. The transportation is expected to be entirely via diesel combination trucks.

The crude oil and natural gas resources require some land transformation and occupation for the drilling and other auxiliary processes. Further, the chemical plants for the processing of these and the intermediates also require land use. For the latter, the dataset "Chemical factory, organics {GLO}| market for | Alloc Def, U" has been included by datasets in EI3.0. For the former, the impacts are included in the datasets as well [4]. However, the impacts from the process steps are not split up due to the format of data availability. Hence, the impacts from land use change and the waste from drilling operations are accounted for in this process rather than via a separate upstream process. Overall, the cradle-to-gate impacts are included.

Petro-FA Catalysts

Both the Ziegler and Oxo routes use catalysts. The EI3.0 process for petro-FA does not have aluminum powder and a hydrocarbon solvent as input and alumina hydrate as co-product applicable for the Ziegler process. Alumina hydrate has value in catalytic processes, in ceramics and other industrial applications. Since the solvent is recovered and recycled, its exclusion is reasonable. With aluminum powder and alumina hydrate, there is no indication that the corresponding impacts are included. Hence, a separate dataset was created and included to account for the upstream (raw material to gate) impacts. SimaPro 8.0 does not have any dataset for the aluminum powder used in the Ziegler process. This dataset, hence, was modeled with the "Aluminium, primary, ingot {GLO}| market for | Alloc Def, U" EI3.0 dataset as a starting point. Aluminum powder is expected to be produced via gas atomization of molten ingot.
The energy needed for melting (H_melt) is the primary consideration here and was estimated in J/g as per the following equation from [18]:

$$ H_{\text{melt}} = C_{\text{s}} \times (T_{\text{m}} - T_{0}) + H_{\text{f}} + C_{\text{l}} \times (T_{\text{p}} - T_{\text{m}}) \qquad (1) $$

where C_s is the weight specific heat of solid aluminum (0.91 J/g/°C), T_m is the melting temperature of Al (600 °C), T_0 is the starting temperature (25 °C assumed), H_f is the heat of fusion of Al (10,580 J/mol [18]), C_l is the weight specific heat of molten Al (1.086 J/g/°C), and T_p is the pouring temperature (1700 °C [19]). A 120 % multiplication factor was used as per [18] to account for energy losses. The resulting energy is estimated to be about 90 % of the total energy need, as additional energy is needed in the holding furnace [49]. Argon gas is expected to be used here. The volume of argon for atomization of Ti6Al4V from the literature [20] was adjusted for Al atomization [18]. The cooling water consumption was estimated as per the process specification for an "Industrial Metal Powder Aluminum Powder Production Line" [19].

As per Ziegler reaction stoichiometry, 1 mol of Al yields 3 mol of FA, translating into 0.05 kg Al for 1 kg FA. Similarly, one mole of alumina hydrate is produced per mole of Al, translating into 0.11 kg alumina hydrate per kg FA produced. The credit from the alumina co-product is as per the dataset "Aluminium oxide {GLO}| market for | Alloc Def, U" of EI3.0.

For the Oxo process, cobalt carbonyl (HCo(CO)4) catalysts are used in 0.1–1.0 wt% concentration. The loss of catalyst is estimated to be <1 % [23]. This translates into 0.343–3.43 mg of Co needed per kg of product. The impacts for the catalyst were accounted for through the "Cobalt {GLO}| market for | Alloc Def, U" EI3.0 dataset.
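As a rough plausibility check (not part of the original study), the snippet below evaluates Eq. (1) with the constants quoted above and applies the 120 % loss factor; the molar mass of aluminum (~26.98 g/mol) is assumed here to convert the heat of fusion from J/mol to J/g, and the 0.05 kg Al per kg FA figure from the Ziegler stoichiometry above is used for the per-kg-FA scaling.

```python
# Minimal sketch: evaluate Eq. (1) for the melt energy of the aluminum powder feed.
# Constants are taken from the text; the molar-mass conversion is an assumption.
C_S = 0.91          # J/(g*degC), specific heat of solid Al
T_M = 600.0         # degC, melting temperature used in the text
T_0 = 25.0          # degC, assumed starting temperature
H_F_MOL = 10580.0   # J/mol, heat of fusion of Al
M_AL = 26.98        # g/mol, molar mass of Al (assumed, for unit conversion)
C_L = 1.086         # J/(g*degC), specific heat of molten Al
T_P = 1700.0        # degC, pouring temperature
LOSS_FACTOR = 1.2   # 120 % multiplier for energy losses

h_melt = C_S * (T_M - T_0) + H_F_MOL / M_AL + C_L * (T_P - T_M)  # J/g of Al
h_melt_with_losses = LOSS_FACTOR * h_melt                         # J/g of Al

al_per_kg_fa = 0.05  # kg Al per kg FA (Ziegler stoichiometry, from the text)
melt_energy_per_kg_fa = h_melt_with_losses * al_per_kg_fa * 1000 / 1e6  # MJ per kg FA

print(f"H_melt with losses: {h_melt_with_losses:.0f} J/g "
      f"(~{melt_energy_per_kg_fa:.2f} MJ per kg FA via the Ziegler route)")
```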
Petro-FA Process Technology

The EI3.0 dataset for petro-FA is based on 18 % Ziegler route production and 82 % Oxo route production as per mid-1990s data. The current validity of this split was confirmed. In 2000, about 1.68 million metric tonnes of fatty alcohol was produced with 40 % being petro-FA [24]. The petro-FA production capacities in 2000 were estimated at 0.273 million tonnes for Shell's Geismar, LA plant [24], 0.17 million tonnes for BASF's oxo-alcohol plant in Ludwigshafen [24], 0.10 million tonnes of capacity increase for Sasol's oxo-alcohol [24] and 0.06 million tonnes for BP [25]. These translate into 0.603 million tonnes of oxo-alcohol capacity, which would account for 90 % of petro-FA produced in 2000. In 2010, 90 % capacity utilization was estimated [26]. Considering new capacity installation between 2000 and 2005 (see discussion for 2005 below), this utilization rate should be reasonable, and at such a utilization rate the accounted oxo-alcohols formed about 81 % of petro-FA in 2000. It must be noted that the base oxo-chemical capacity of Sasol is not accounted here due to lack of information. So, the split between the oxo-route and Ziegler-route holds till 2000, and any small perturbation in this split does not significantly change the overall environmental impact of the petro-route.

In 2005, 2.2–2.5 million tonnes of fatty alcohol production capacity has been estimated with 50 % being petro-FA [26]. The petro-FA production capacities in 2005 were estimated at 0.49 million tonnes for Shell [25, 27], 0.31 million tonnes for BASF [27], 0.25 million tonnes of capacity for Sasol's oxo-alcohol [28, 29] and 0.0 million tonnes for BP [25]. These translate into 1.05 million tonnes of oxo-alcohol capacity, which would form 86 % of petro-FA capacity in 2005. Similar to 2000, the split between the oxo-route and Ziegler-route holds till 2005. In 2012, the total fatty alcohol capacity has been estimated to be 3.35 million tonnes with all of the 0.8 million tonnes of capacity increase being for oleo-FA [26]. Again, the split between the oxo-route and Ziegler-route holds till 2012.

Petro-FA Process Byproducts

Both the Ziegler route and the Oxo route generate byproducts. With the Oxo route, ~5 wt% of olefin feed gets converted to byproducts [22], 5–10 wt% of olefins remains unreacted [30, 31] and ~2 mol% of aldehydes remain unreacted during hydrogenation [32]. These unreacted materials and byproducts are distilled out, with unreacted olefins recycled to the hydroformylation stage and unreacted aldehydes to the hydrogenation stage [33]. The light ends are either used as high grade fuel or as a blend stream for gasoline [33, 34]. The heavy ends are either used as fuel or solvents [31, 33]. It is difficult to tell whether the existing EI3.0 dataset for petro-FA has assigned the byproducts as fuel substitute, co-products, a mixture or not at all. Considering the small amount of concern here, the choice is not expected to impact the final conclusion within the scope of this study. With the Ziegler route, besides the alumina hydrate discussed in the catalysts section, a small percentage of olefins form alkanes and oxygen-containing compounds as byproducts [16]. During the fractionation of the crude alcohol formed, these byproducts could either be separated as waste or become part of certain blends. Considering the small amount of concern, the choice is not expected to impact the final conclusion within the scope of this study. Further, the EI3.0 dataset for petro-FA does account for some wastes that get incinerated.

PKO-FA

The oleo-FA can be produced either via the fatty acid splitting route ("Lurgi direct hydrogenation" of fatty acids obtained by splitting triglycerides from crude vegetable oil) or the transesterification route (hydrogenation of methyl esters obtained by transesterification of crude or refined vegetable oil) [4]. In this study, the scope for the raw materials is limited to PKO and the production route limited to fatty acid splitting, esterification of refined PKO and esterification of crude PKO processes. In 2005, ~44 % of global palm fruit were produced in Malaysia (MY) [11]. Hence, PKO is expected to be produced in Malaysia and delivered via truck to the FA manufacturing facility in Malaysia. The resulting PKO-FA is then delivered via a combination of truck-ship-truck to the AE manufacturing facility in the US.

The EI3.0 dataset for PKO-FA production ("Fatty alcohol {RoW}| production, from palm kernel oil | Alloc Def, U") includes inputs and emissions reflecting a technology mix of 27 % produced from fatty acid splitting, 56 % produced from methyl ester on the basis of crude vegetable oil and 17 % from methyl ester out of refined oil. This dataset includes the material and energy inputs (methanol, palm kernel oil, natural gas and hydrogen), emissions to air and water, transportation and production of waste. Both processes (fatty acid splitting and transesterification) yield ~40 wt% of PKO as glycerin. Fatty acid splitting also yields some short-chain (C8–C10) fatty alcohols, which could be estimated to be ~5 wt% based on the average fatty acid composition for PKO [35]. For the transesterification process, when the PKO is refined first, ~5 wt% of PKO results in fatty acid distillate [36].
All these by-products have value. The mass-based allocations made in EI3.0 datasets for these multioutput processes were kept. Further, it must be noted that this 'gate-to-gate' process also includes the impacts from some upstream processes (see PKO-upstream section). Table 1 summarizes the gate-to-gate LCI for PKO-FA production. While this EI3.0 PKO-FA process is fairly comprehensive, the dataset is for the "Rest of World" (RoW) region with palm kernel oil sourced globally. For this study, PKO sourcing region of interest is Malaysia. Similar to petro-FA dataset in EI3.0, the transportation impacts are based on the average distances and the commodity flow surveys. In order to address these concerns, the original dataset from EI3.0 has been modified as per the following discussions. PKO-FA Upstream Datasets The dataset for PKO-FA in EI3.0 for RoW region was generated via modification of the one for Europe by updating the electricity grid mixes, transportation impacts and heat generation impacts. Such dataset is said to be valid from 2011 till 2013 as per dataset description. This approach used by EI3.0 has been adapted here to obtain a dataset for Malaysia. Since FA is produced at a facility in Malaysia, the electricity grid mix from EI3.0 dataset for PKO-FA is updated from global electricity mix to "Electricity, medium voltage {MY}| market for | Alloc Def, U". The heat generation process used in the PKO-FA dataset was changed to "Heat, central or small-scale, natural gas {MY}| heat production, natural gas, at boiler condensing modulating <100 kW | Alloc Def, U". This dataset for heat was derived from that for Switzerland ("Heat, central or small-scale, natural gas {CH}| heat production, natural gas, at boiler condensing modulating <100 kW | Alloc Def, U") provided by SimaPro 8.0 by updating the natural gas source to be from "Rest of World" (due to lack of dataset for natural gas from MY) and electricity to MY grid. The transportation distances for FA production facility to AE production facility is estimated to be ~20,000 km for the transoceanic shipment from Malaysia to US Gulf coast via Panama. Also, the truck transportation of ~60 km is expected between the ports and production facilities. Here, the transportation impacts for the various feedstock materials and waste are considered in terms of distance to be traveled, the amount to be transported, and the mode of transportation. The capital goods and infrastructure needed for the production and transportation are only considered when already covered in EI3.0 and other datasets used in SimaPro 8.0. For methanol production related impacts, the natural gas resources (from which methanol is derived) were used. Such natural gas resources require some land transformation and occupation for the drilling and other auxiliary processes. Further, the chemical plants for the processing of these and the intermediates also require land use. For the latter, the dataset "Chemical factory, organics {GLO}| market for | Alloc Def, U" has been included by datasets in EI3.0. For the former, the impacts are included in the datasets as well [4]. However, the impacts from the process steps are not split up due to the format of data availability. Hence, the impacts from land use change and the waste from drilling operation are accounted for in this process rather than via separate upstream process. Overall, the cradle-to-gate impacts are included. In the existing EI3.0 dataset for PKO-FA, the raw material production datasets are for global region. 
The PKO production dataset was updated so that 100 % of PKO was sourced from Malaysia. PKO is a co-product of palm oil production from the palm fruits produced as 10–40 kg Fresh Fruit Bunches (FFB) on the palm trees [11]. The growing of these trees (and, hence, the production of palm fruits) require the transformation of land for palm plantations initially, and then occupation of this land [11]. The palm plantations yield on average ~25 tonnes FFB per hectare [11]. FFB consists of ~22 wt% empty fruit bunches (EFB), ~65 wt% fleshy mesocarp (pulp) and ~13 wt% in an endosperm (seed) in the fruit (Palm Kernel). The mesocarp provides Palm Oil (PO) while the seed provides Palm Kernel Oil (PKO). The yield is ~22 wt% of FFB results in PO, ~2.7 wt% in PKO and ~3.3 wt% in Palm Kernel Extract (PKE). The kernel is protected by a wooden endocarp or Palm Kernel Shell (PKS). The solid waste left after the extraction of oils, including the fibers in pulp (~15 wt%), PKS (~7 wt%) and EFB, could be re-used as fuel substitute in energy generation and as fertilizer substitute via mulching. There is also liquid waste generated from the wastewater produced during the processing in oil mills. This wastewater effluent, termed Palm Oil Mill Effluent (POME), contains hydrocarbon contents (water and ~28 wt% of FFB) that could be repurposed for fertilizer substitute or recovered for fuel substitute. There are also air emissions due to the fuel combustion for energy generation. These various aspects for PKO can be seen in Fig. 2. The economic allocation with allocation factor of 17.3 % to PKO as used in EI3.0 dataset was used to allocate the impacts and credits between PO and PKO. Even though the allocation values are based on 2006 prices, they were found to be valid based on the prices in 2014 [37, 38]. EI3.0 dataset for palm plantations accounts for the benefits/impacts from growing palm trees such use of CO2 from air. Open image in new window Fig. 2 Process steps for production of Palm Kernel Oil and average inputs and outputs (adapted from 10) About 400 m2 of land, diesel, pesticides, fertilizer and water are major inputs required to produce 1000 kg Fresh Fruit Bunches (FFB). Processing of thus produced FFB in Palm Oil Mill takes diesel and about 540 kg water. Also, about 3.4 GJ of energy in form of steam and electricity is needed, which is obtained through use of shells and fibers generated from the oil mill. About 150 kg of fiber and about 70 kg of shells are generated. Also, about 225 kg of empty fruit bunches (EFB) are generated, which are either mulched for fertilizer substitute for plantations or dumped to rot. About 829 kg of POME (effluent from palm oil mill) is generated and disposed of either via untreated river discharge or anaerobic digestion of BOD present. The methane from digestion could be used for energy generation, flared or vented. Of the remaining mass of inputs to the oil mill, about 215 kg becomes palm oil, about 27 kg becomes palm kernel oil (PKO) and rest about 33 kg becomes palm kernel extract (PKE) used as animal feed substitute. The treatment options for EFB and POME are the decision points for the individual plantations and shown via + symbol EI3.0 dataset for palm kernel oil production accounts the end-of-life treatments for the EFB, PKS and PKF via their combustion for supplying steam and electricity for the oil mills. The literature survey indicates that only PKS and PKF are used as fuel [39] and provide more than sufficient energy for oil mills [39]. 
EFB has been cited as "a resource which has huge potential to be used for power generation, currently not being utilized" [39]. The treatment of POME in EI3.0 is as standard wastewater. Recent publications [40] cited methane leaks from palm oil wastewater as a climate concern. In order to account for these differences, existing EI3.0 dataset for palm kernel oil was updated and new datasets were created to capture these differences in waste treatment. The screening level analysis suggested that PKO raw material is the single largest GHG contributor for PKO-FA accounting for the differences in GHG emissions compared to petro-FA. Hence, PKO production (including palm plantations and oil mills) processes were evaluated in details as discussed below. POME Treatment Options The end-of-life treatment for the POME could be discharge into a river without any treatment, after anaerobic digestion of organics with venting of thus-produced methane, after anaerobic digestion of organics with flaring of methane produced, or after anaerobic digestion of organics with recovery of methane for energy. The end-of-life treatment for the POME is expected to impact the pollution from the discharge of organics, generation of methane and CO2 from organics discharge and from the discharge of nitrogen compounds. The organics emissions were estimated as per the following equation: $$ {\text{OM}}_{\text{POME}}^{\text{emitted}} {\text{ = COD}}_{\text{POME}} $$ (2) where CODPOME is the Chemical Oxygen Demand generated from discharge of organics in POME. The methane emissions were estimated as per the following equation: $$ {\text{CH4}}_{\text{POME}}^{\text{emitted}} = {\text{COD}}_{\text{POME}} \times B_{0} \times {\text{CF}}_{\text{CH4}} $$ (3) where B 0 is the methane producing capacity from the organics discharged and CFCH4 is the correction factor to the methane production capacity based on the conditions into which organics are discharged. The nitrogen emissions were estimated as per the following equation: $$ {\text{N}}_{\text{POME}}^{\text{emitted}} {\text{ = Ncontent}}_{\text{POME}} $$ (4) where NcontentPOME is the nitrogen content discharge in the river depending on whether POME is treated or not. The values used for the parameters in Eqs. (2)–(4) for the various end-of-life treatment scenarios as per Achten et al. 2010 [41] can be found in Table S1. The emissions avoided from use of captured biogas for heat were estimated via EI3.0 dataset for cogen ("Heat, at cogen 50kWe lean burn, allocation heat/CH U"). The emissions from flaring of captured biogas were estimated via EI3.0 dataset for Refinery gas flaring ("Refinery gas, burned in flare/GLO U"). The literature survey showed that the lack of demand for thermal energy and limited/missing access to the national electricity grid has resulted in only ~30 % of palm oil mills recycling POME [10, 42] and only 5 % of POME gets treated to generate biogas for heat production with the rest 95 % being treated to just vent the generated biogas as shown in Table S2 [43]. Hence, a sensitivity analysis was done with the various disposal options for POME. PKS & PKF Treatment EI3.0 dataset for palm kernel oil production accounts the direct emissions from the combustion of PKS and PKF via modified 'wood chips, burned in a cogen 6400 kWth process. The modification of the 'wood chip' process accounts for the differences in dry matter, carbon content and the energy content. In this original EI3.0 approach, about 12.8 MJ of energy is generated per kg of oil produced. 
Of this, about 8.2 MJ is obtained from PKS and PKF. Approximately 7.84 MJ energy requirement for oil mill operation is reported in literature [10, 39, 44, 45, 46]. This aligns with Abdullah and Sulaiman 2013 observation that PKF and PKS are sufficient to meet oil mill's energy demand [39]. Hence, the combustion impacts from original EI3.0 dataset were reduced to produce only 8.2 MJ. While this might be slightly in excess, it is expected that excess PKF & PKS will be treated the same way for convenience. EFB Treatment Options For Malaysia, 75 % of the time EFB is expected to be mulched and for the rest 25 % dumped to rot [43]. EFB rotting was based on the modeling done by Stichnothe and Schuchardt (2011) [10], which is based on IPCC guideline for estimating GHG emissions from parks and garden waste. For rest of the nutrients, 50 % leaching was assumed, except 90 % leaching for potassium based on Rabumi (1998) [47]. The initial nutrient values for EFB are shown in Table S3. For mulching, the dataset in Simapro 8.0 was used and the fertilizer value of the mulch was estimated based on literature data [44, 47, 48, 49, 50] shown in Table S4. The mulching process was captured through EI3.0 dataset ("Mulching {GLO}| market for | Alloc Def, U") and about 10 km trucking was assumed [44]. The recycling of EFB was similar to the POME recycling situation [10, 42]. Hence, the sensitivity analysis was done with the various disposal options for EFB to evaluate the impacts from 100 % (ideal) and 0 % (the worst case) mulching. Land Use Change Options As discussed earlier, palm plantations require land. This needed land could be from secondary forests, existing cropland, primary tropical forest and/or peatland. The transformation of this land from its current primary function to another function constitutes a land use change (LUC). LUC has significant environmental implications due to biodiversity impacts, water flow impacts, soil erosion impacts, GHG emissions and such. With respect to GHG emissions, the impacts are due to disruption or destruction of carbon stocks in above ground biomass (AGB), below ground biomass (BGB), soil and dead organic matter (DOM) along with N2O stock for peatland [10]. "The impact of LUC depends on various factors such as cultivation methods, type of soil and climatic conditions" [10]. For this study, the land transformation from the existing cropland, primary tropical forest, peatlands and secondary forest have been evaluated with the base case being the current practices in Malaysia (Table S5). The literature survey indicated that "peatland makes up 12 % of the SE [South East] Asian land area but accounts for 25 % of current deforestation. Out of 270,000 km2 of peatland, 120,000 km2 (45 %) are currently deforested and mostly drained" [10] presenting a case for sensitivity with LUC. The impacts from indirect LUC4 have been excluded from this study similar to earlier studies [41, 51] as we did not find any studies with the required data or methodology. Currently, EI3.0 has datasets for existing cropland ("Palm fruit bunch {MY}| production | Alloc Def, U") and primary tropical forest ("Palm fruit bunch {MY}| production, on land recently transformed | Alloc Def, U") in SimaPro 8.0. The new datasets were created in SimaPro 8.0 for various types of land transformation by adjusting the value for "Carbon, organic, in soil or biomass stock" in primary tropical forest dataset. 
The values for secondary forest were derived by taking the ratio of primary forest and secondary forest in respective EI3.0 datasets for other regions. For peatland covered with primary forest, the values were assumed to be same as those for primary forest with extra BGB that gets drained. The value for BGB for peatlands were updated based on literature surveys [45, 51]. These adjustments (see Table S5) for the LUC, which are not covered in the datasets in SimaPro 8.0, only captures the GHG emissions related differences. Assumptions in relation to the data: 1. Existing EI3.0 dataset for PKO production does not include negative impacts from EFB rotting, fertilizer use reduction from EFB mulching (benefit) and POME's CH4 emissions. 2. No transportation losses. 3. Impacts from LUC are spread over 20 years. The inventory data collected for petro-FA and PKO-FA along with assumptions capture the quantity of inputs and outputs of materials, energy, waste and emissions for the respective process. This inventory was converted to the functional unit basis (1 kg of FA delivered to AE production site). Such inventory (LCI) was modeled into SimaPro 8.0 software and then subjected to impacts assessment to understand and evaluate the potential environmental impacts by converting LCI results into impacts and aggregating these impacts within the same impact category to obtain the characterized results. ReCiPe Midpoint (H) method as implemented in Simapro 8.0 was used to obtain the characterized results for 18 impact categories. This method by default neither credits for CO2 intake from air for plant growth nor penalizes for biogenic CO2 emissions. In biofuel processes, since the CO2 intake by the plants is ultimately released with energy back into the atmosphere within a short timeframe, the credits and emissions balances out to carbon neutrality. However, in this case, the carbon intake is stored in the chemical products for a long time and may not necessarily be released as CO2 like combustion processes. Further, since FA end-of-life is out of scope in this cradle-to-gate study, CO2 intake needs to be included. Hence, the method was updated to account for CO2 intake and biogenic CO2 emissions. Also, the biogenic methane GWP factor was changed from 22 to 25 kg CO2e. The contribution analyses of the characterized results were performed to understand the hotspot areas of impacts and identify the key factors. For these key factors, the sensitivity analyses were performed to evaluate the various scenarios of LUC, POME end-of-life treatment and EFB end-of-life treatment. The uncertainty analyses were performed for both FA sourcing options for the base case via Monte Carlo sampling to understand the distribution. The number of samplings used was 1000 for both options. Results Both petrochemical feedstocks and PKO feedstocks used for FA production are co-products and have other uses. For example, only a fraction of crude oil is used as feedstocks for FA production. This crude oil, which is derived as co-products, could be used for other applications such as energy. Similarly, PKO is co-product from PO production and could be used for other applications such as biodiesel or cooking oil. In other words, both feedstocks are part of large and complex supply chain. For each kg of FA delivered, on a cradle-to-gate basis, petro-FA has ~2.97 kg CO2e emissions on average, which are ~55 % of ~5.27 kg CO2e emissions for PKO-FA on average (see Fig. 3). 
For petro-FA, the production of various raw materials contributed ~79 % of the total ~2.97 kg CO2e/kg FA delivered. Another ~21 % are from FA production and <0.2 % from transportation of raw material for FA production and of FA for AE production. Almost all of the GHG emissions during petro-FA production are from the combustion of natural gas in the US. Of climate change impacts from raw materials, ~70 % is from n-olefins production and delivery, ~10 % from ethylene production and delivery, ~10 % from upstream fuel production/combustion, ~8 % from catalysts (aluminum powder and cobalt), and the ~2 % remaining from solid waste handling and chemical plant infrastructure. For PKO-FA, the production of various raw materials contributes ~83 % of the total ~5.27 kg CO2e/kg FA delivered. Another ~12 % are from FA production and ~5 % from transportation of raw materials for FA production and of FA for AE production. Almost all of the GHG emissions during PKO-FA production are from the combustion of natural gas in MY. Due to lower GHG intensity for the combustion of natural gas in MY, the production GHG emissions are similar to petro-FA despite twice the thermal heat consumption. Of climate change impacts from raw materials, ~91 % are from PKO production, ~7 % from upstream fuel production/combustion, and the rest split between those from hydrogen production and delivery, chemical plant infrastructure and those from municipal solid waste. Open image in new window Fig. 3 Contributions of various life cycle phases to the Life cycle GHG emissions for PKO-FA (fatty alcohol produced from palm kernel oil feedstock) and petro-FA (fatty alcohol produced from petrochemical feedstock) are shown in kg CO2e/kg FA delivered. The various life cycle phases shown here are RMProdC2G, Transport C2G and FAProdG2G. RMProdC2G includes the raw material production (includes the impacts from the transformation of inputs from nature via various intermediate products into the raw material delivered to the fatty alcohol (FA) production site. RMC2G also includes any transportation required till RM reaches the FA production site. FAProdG2G includes the production of FA from raw materials (e.g., PKO and n-olefins and ethylene). TransportC2G includes the transportation of FA produced from the FA production site to Alcohol Ethoxylates (AE) production site. Irrespective of the feedstocks, RMProdC2G is the most impactful phase for the boundary covered in this study. It accounts for 60+ and 75+ % of the life cycle GHG emissions for PKO-FA and petro-FA, respectively The contribution analyses for climate change suggest that land use change, POME treatment and EFB treatment are critical factors for life cycle GHG emissions from PKO-FA production. The results of sensitivity analyses for these three key parameters are summarized in Fig. 4. EFB could be mulched and used as fertilizer or dumped to rot. In the latter case, methane, carbon dioxide and nitrous oxide could be emitted depending on the anaerobic conditions. This translates into mulching of EFB for fertilizer being a better option. Among the evaluated POME end-of-life treatment options, anaerobic treatment with the resulting methane recovered and utilized for heat generation has the least life cycle GHG emissions. The venting of methane from anaerobic treatment has the most GHG emissions, even higher than discharging untreated POME. 
When LUC options are considered, GHG emissions are the highest when peat forests are transformed for palm cultivation and the lowest when existing croplands (whose carbon debt has been paid off)5 are transformed. The sensitivity analyses show that PKO-FA has lower GHG emissions with petro-FA from an environmental perspective if the existing cropland is used for palm plantation instead of land transformation. Further, in such scenario, CO2 could be sequestered compared to petro-FA. In an ideal situation when PKO is entirely produced on existing cropland, POME is being treated with methane recovered for thermal energy generation and EFB is used for mulching to replace some fertilizer needs, PKO-FA have GHG emissions of approximately −1.5 kg CO2e/kg FA delivered, thereby outperforming petro-FA. However, if 100 % of PKO comes from peatlands drainage and deforestation, POME is treated with recovered methane vented, and EFB is dumped to rot under anaerobic conditions, the GHG emissions increase to ~16.7 kg CO2e/kg FA delivered. Open image in new window Fig. 4 Results of various sensitivity analyses, namely, land use change (LUC), POME (wastewater effluent from palm oil mill) treatment, and EFB (empty fruit bunches) treatment, are shown in kg CO2e/kg FA delivered. The base case MY mix GHG emissions represent the typical practices for palm plantations in Malaysia (MY). For LUC, the practices for the base case are 13 % LUC from peat forest, 52 % from secondary forest and rest 35 % from existing cropland. Peat forest has the most GHG emissions, while they are the least for the transformation of existing cropland with carbon debt paid off. For POME treatment, the practices for the base case are 5 % of POME being used for generation of biogas for heat production and the rest 95 % being treated emitting the resulting biogas. The venting of biogas from anaerobic treatment has the most GHG emissions, while the anaerobic treatment with the resulting methane recovered and utilized for heat generation has the least. For EFB treatment, the practices for the base case are 75 % of EFB mulched and rest 25 % dumped to rot. Mulching of EFB for a fertilizer substitute shows the least life cycle GHG emissions, while the dumping of EFB to rot has the most Among the other impact categories, PKO-FA has less metal depletion, less fossil depletion, less human toxicity, less ionizing radiation emissions, less metal depletion, less ozone depletion and less water depletion on average (see Table 2). While LUC affects most other impact categories (except terrestrial ecotoxicity and agricultural land occupation), among them natural land transformation, marine eutrophication, particulate matter formation and photochemical oxidant formation see significant effects. Urban land occupation and water depletion are also affected. While GHG emissions for discharging of POME without treatment is not significant, the impacts on eutrophication from this option is ~100 times more than other options. Besides impacts on climate change and eutrophication, the POME treatment options also affect terrestrial ecotoxicity, particulate matter formation, photochemical oxidant formation, human toxicity and terrestrial acidification. The treatment of EFB impacts all impact categories as all of them show a positive environmental profile for mulching compared to burden for all impact categories when dumped to rot. 
Table 2 Comparing mean values (and coefficient of variation) results of all impact categories Impact category Unit Petro FA PKO FA Average SD Average SD Agricultural land occupation m2a 4.25E-02 3.17E-02 1.82E+ 00 6.12E-01 Climate change kg CO2 eq 2.97E+00 5.23E-01 5.27E+00 4.57E+00 Fossil depletion kg oil eq 1.84E+00 3.93E-01 5.99E-01 1.30E+00 Freshwater ecotoxicity kg 1,4-DB eq 2.14E-02 2.09E-01 4.16E-02 6.08E-01 Freshwater eutrophication kg P eq 4.58E-04 3.03E-04 6.29E-04 1.85E-03 Human toxicity kg 1,4-DB eq −7.75E-02 5.64E+01 −9.19E+00 3.21E+02 Ionizing radiation kBq U235 eq 1.56E-01 1.84E-01 8.78E-02 4.37E-01 Marine ecotoxicity kg 1,4-DB eq 1.78E-02 1.69E-01 1.95E-02 4.18E-01 Marine eutrophication kg N eq 3.29E-04 6.92E-05 1.30E-02 4.39E-03 Metal depletion kg Fe eq 1.31E-01 8.07E-02 1.22E-01 8.87E-01 Natural land transformation m2 2.02E-04 1.26E-04 3.49E-02 1.14E-02 Ozone depletion kg CFC-11 eq 1.02E-07 4.67E-08 5.86E-08 3.60E-07 Particulate matter formation kg PM10 eq 3.70E-03 7.72E-04 8.59E-03 9.55E-03 Photochemical oxidant formation kg NMVOC 1.32E-02 3.28E-03 1.56E-02 2.34E-02 Terrestrial acidification kg SO2 eq 1.22E-02 3.11E-03 1.79E-02 2.29E-02 Terrestrial ecotoxicity kg 1,4-DB eq 1.03E-04 1.45E-03 1.39E-01 3.95E+00 Urban land occupation m2a 1.04E-02 3.37E-03 2.42E-02 2.80E-01 Water depletion m3 2.71E+00 5.27E-01 1.37E+00 7.25E+00 SD standard deviation The uncertainty analyses were performed to obtain the distribution of the environmental impacts for both petro-FA and PKO-FA. The results for all 18 evaluated impact categories have been captured in Fig. 5 via density plots. In these density plots, the broader distribution for an impact category represents higher uncertainty. For PKO-FA, the distributions of impacts for all impact categories are broader compared to the narrow distribution for petro-FA. The higher uncertainty for PKO-FA is from the variations in the practices with palm plantations and oil (palm oil and PKO) production processes. Further, the higher overlap area for an impact category in density plots represents a lower difference between the compared options. Marine eutrophication, agricultural land occupation, natural land occupation, fossil depletion, particulate matter formation, water depletion and climate change have the least overlapped area and, hence, have the largest difference in the impacts between petro-FA and PKO-FA. The extent of overlap in distribution can also be represented as the percentage of samplings for which a particular option had lower impacts. For example, petro-FA has lower or equal GHG emissions for ~70 % of samplings and PKO-FA causes lower or equal water depletion for ~60 % of samplings. Figure 6 summarizes the results of such representation for PKO-FA being better and/or equal to petro-FA for all 18 impact categories. Open image in new window Fig. 5 Results of uncertainty analyses (1000 runs of Monte Carlo using the in-built function in Simapro 8.0) for characterized impacts for PKO-FA (fatty alcohol produced from palm kernel oil feedstock) and petro-FA (fatty alcohol produced from petrochemical feedstock) are presented for all 18 evaluated impact categories as density plots. For PKO-FA, the distributions of impacts for all impact categories are broader compared to the narrow distribution for petro-FA. 
Marine eutrophication, agricultural land occupation, natural land occupation, fossil depletion, particulate matter formation, water depletion and climate change have the largest difference in the impacts between petro-FA and PKO-FA Open image in new window Fig. 6 Results of uncertainty analyses (1000 runs of Monte Carlo using the in-built function in Simapro 8.0) for characterized impacts for PKO-FA (fatty alcohol produced from palm kernel oil feedstock) and petro-FA (fatty alcohol produced from petrochemical feedstock) are presented for all 18 impact categories as a percentage of the samplings for which a particular option had lower impacts. For example, petro-FA has lower or equal GHG emissions for ~70 % of samplings and PKO-FA causes lower or equal water depletion for ~60 % of samplings Discussion Both the petrochemical and PKO feedstocks being part of large and complex supply chains is expected and documented in the literature [6, 8]. Our GHG emissions results are in alignment with the literature evaluating the similar claims for palm oil (PO) for other fossil resource substitutions. While on average PKO-FA performs worse, life cycle GHG emissions for PKO-FA could be lower than those for petro-FA under limited conditions as per sensitivity analyses. Such significant variances in the GHG emissions for PKO-FA (observed from uncertainty analyses and sensitivity analyses) are in accordance with the results of previous studies [10, 11, 41, 45, 51, 52] summarized in Fig. 7. These variances are expected due to the variances in agricultural and forestry practices such as fertilizer applications, pesticides applications, properties of soil, growth rate (and, hence CO2 absorption) for the plants and handling of biomass and co-products. Hence, the environmental friendliness of PKO-FA for GHG emissions reduction varies with the actual practice, which is in consensus with findings by Reijnders and Huijbergts [45]. Land use change, POME end-of-life treatments and EFB end-of-life treatments are key parameters, which were also observed in previous studies [10, 45]. Open image in new window Fig. 7 Literature data on the life cycle GHG (greenhouse gas) emissions for oil produced from Palm fruit in kg CO2e/kg oil produced. Depending on the operating practices, the GHG emissions as per this LCA study varies from −2.7 to 15.4 kg CO2e/kg oil produced. Such significant variances in the GHG emissions for PKO-FA were also observed by Stichnothe and Schuchardt [10] (0.6–22.2 kg CO2e/kg oil produced), Achten et al. (0.4–16.9 kg CO2e/kg oil produced) [17] and Schmidt and Dalgaard [29] (2.2–12.7 kg CO2e/kg oil produced). While the variances observed by Rejinders and Huijbergts [25] (5.2–9.6 kg CO2e/kg oil produced) and Wicke et al. [21] (1.3–3.1 kg CO2e/kg oil produced) were not equally large, their ranges are within those observed. The potential emissions estimated by Jungbluth et al. [11], as part of EcoInvent 3.0 dataset, also falls within the observed ranges The selection of raw material sourcing for FA production involves trade-offs as PKO-FA performs better on average in six impact categories while petro-FA performs better on average in another 12 impact categories. Such trade-offs have been observed by Stalmans et al. [6] and are expected due to inherent differences between the bio-based value chain and the fossil-based value chain. 
Marine eutrophication, agricultural land occupation, natural land occupation, fossil depletion, particulate matter formation, water depletion and climate change are key impact categories for the considered FA sourcing options as shown in Table 3. Table 3 List of processes that are major contributors for the identified impact categories for both petro-FA and PKO-FA Petro-FA PKO-FA Climate change N-Olefin Ethylene Fuel combustion (process) Aluminum powder Chemical plant (<5 %) Land use change for palm plantation Palm plantation operation Fuel combustion emissions (transportation) Fossil fuel depletion N-Olefin Ethylene Fatty alcohol production (<5 %) Fuel combustion (process) Aluminum powder (<5 %) Fuel combustion (process, transportation) Electricity production Natural land transformation Oil extraction for n-olefins, ethylene and fuel for transportation NG extraction for fuel for process Chemical plants Aluminum powder Land use change for palm plantation Land use change for oil extraction (<0.3 %) Eutrophication Sulfidic tailings for production of Copper used in chemical plants Spoil from mining of coal used for electricity production Production of Gold used in electronics for chemical plants Treatment of dross from electrolysis for Copper and Aluminum production Incineration of solid waste Fertilizer use for palm plantations Irrigation for palm plantations Sulfidic tailings for production of Copper used in chemical plants Spoil from mining of coal used for electricity production Agricultural land occupation Wood used for building material in chemical plants (<0.1 % of impacts for PKO-FA) 99+% from palm plantations Water depletion Aluminum powder (~50 %) N-olefin Chemical plant Electricity used during FA production Palm FFB growth Fuel combustion (process, transportation) Chemical plant Electricity used during FA production Particulate matter formation N-Olefin Ethylene Fuel combustion (process) Aluminum powder Electricity used during FA production Palm FFB growth Fuel combustion (process, transportation) Our findings must be interpreted in accordance with the scope of this study and the limitations due to the use of secondary data and assumptions. Further, this LCA study does not evaluate the implications of shifting to one particular feedstock, which could affect the inefficiencies and efficiencies of the individual systems. The overall larger systems to which each feedstock belongs should also be considered along with sustainability values of the specific stakeholders, the socio-economic relevance and other aspects not covered. Besides the feedstocks themselves being derived through multi-output processes, both petro-FA and PKO-FA are multi-output processes. Currently, the environmental impacts are allocated from the processes to the co-products. The changes in economics for the co-products through supply and demand dynamics will influence how the co-products are handled and, hence, the environmental impacts. Currently, there is increasing demand for the bio-derived products due to their perceived environmental benefits. The results show that the environmental impacts for PKO-FA strongly depend on palm plantations and palm oil mill operation practices. Hence, we recommend being mindful of the upstream practices specific to the suppliers when sourcing bio-derived materials. With the adoption of proper practices including decisions on land use changes, the bio-derived materials such as PKO provide a good environmentally friendly alternative to the non-renewable raw materials. 
While PKO and such bio-derived materials provide renewability in terms of carbon recycling and regenerating through cultivation, the responsibly produced bio-derived materials are limited by the availability of suitable land. Similar to the other renewable resources there are limits for environmentally responsible harvesting for PKO. The results of this LCA study show that petro-FA has a better average life cycle environmental performance than PKO-FA for the majority of environmental impact categories we investigated. This highlights that environmentally responsible sourcing should require rigorous testing of the assumption of "automatic environmental benefits" for bio-derived raw materials. Also, the intrinsic sustainability values of the stakeholders based on the respective local environmental profiles would be critical in incorporating the trade-offs into decision making. Footnotes 1. The feedstock for oleo-FA can be either PKO or coconut oil. For this study, the scope has been restricted to PKO feedstocks. 2. While Rhodium catalyst are used for ~75 % of hydroformylation processes [21], Cobalt catalysts are used for hydroformylation of higher olefins for detergent end-use alcohols [21, 22]. 3. BP exited the business [25]. 4. When the function of the land before LUC still needs to be met, a LUC might lead to additional LUC due to shifting of the current function to another location. Such additional LUC is classified as indirect LUC. 5. While these results could be interpreted as a need for a policy of sourcing from no land use change PKO so as to allow sourcing FA with lower footprint, the authors of this study would like to caution against such interpretation based on these results. This study does not account for indirect LUC, which could be happening. Also, such policy might only increase demand for such non-LUC-PKO; thereby, shifting land use change to other human activities. Notes Compliance with ethical standards Funding This study was funded in its entirety by Air Products and Chemicals, Inc. The third party critical review by Intertek was funded by Air Products and Chemicals, Inc. Supplementary material 11743_2016_1867_MOESM1_ESM.docx (76 kb) Supplementary material 1 (DOCX 75 kb) References 1. US Environmental Protection Agency. Nonylphenol and nonylphenol ethoxylates. http://www.epa.gov/oppt/existingchemicals/pubs/actionplans/np-npe.html. Accessed Sept 2014 2. Sasol North America. Alcohol ethoxylates—versatile alternatives for technical applications. http://www.sasoltechdata.com/MarketingBrochures/Alcohol_Ethoxylates.pdf. Accessed Sep 2014 3. US Environmental Protection Agency. Nonylphenol (NP) and nonylphenol ethoxylates (NPEs) action plan [RIN 2070-ZA09]. http://www.epa.gov/oppt/existingchemicals/pubs/actionplans/RIN2070-ZA09_NP-NPEs%20Action%20Plan_Final_2010-08-09.pdf. Accessed Sept 2014 4. Zah R, Hischier R (2007) Life cycle inventories of detergents. Ecoinvent Report No. 12. Swiss Centre for Life Cycle Inventories, Dübendorf, CHGoogle Scholar 5. Endler E, Barnes J, White B. Sustainability of alcohol-derived surfactants: manufacturing and supply perspective. http://www.shell.com/content/dam/shell/static/chemicals/downloads/products-services/endler-cesio-2008poster.pdf. Accessed Oct 2014 6. Stalmans M, Berenbold H, Berna JL, Cavalli L, Dillarstone A, Pranke M, Hirsinger F, Janzen D, Kosswig K, Postlethwaite D, Rappert T, Renta C, Scharer D, Schick KP, Schul W, Thomas H, Van Sloten R (1995) European life-cycle inventory for detergent surfactants production. 
Tenside Surfactants Deterg 32(2):84–109Google Scholar 7. Schul W, Hirsinger F, Schick KP (1995) A life-cycle inventory for the production of detergent range alcohol ethoxylates in Europe. Tenside Surfactants Deterg 32(2):171–192Google Scholar 8. Pittinger CA, Sellers JS, Janzen DC, Koch DG, Rothgeb TM, Hunnicutt ML (1993) Environmental life-cycle inventory of detergent-grade surfactant sourcing and production. J Am Oil Chem Soc 70(1):1–15. doi: 10.1007/BF02545360 CrossRefGoogle Scholar 9. Kapur A, Baldwin C, Swanson M, Wilberforce N, McClenchan G, Rentschler M (2012) Comparative life cycle assessment of conventional and green seal-compliant industrial and institutional cleaning products. Int J Life Cycle Assess 17(4):377–387. doi: 10.1007/s11367-011-0373-8 CrossRefGoogle Scholar 10. Stichnothe H, Schuchardt F (2011) Life cycle assessment of two palm oil production systems. Biomass Bioenergy 35:3976–3984. doi: 10.1016/j.biombioe.2011.06.001 CrossRefGoogle Scholar 11. Jungbluth N, Chudacoff M, Duariat A, Dinkel F, Doka G, Faist Emmenegger M, Gnansounou E, Kljun N, Schleiss K, Spielmann M, Stettler, C, Sutter J (2007) Life cycle inventories of bioenergy. Ecoinvent report No. 17, Swiss Centre for Life Cycle Inventories, Dübendorf, CHGoogle Scholar 12. Vijaya S, Choo YM, Halimah M, Zulkifli H, Tan YA, Puah CW (2010) Life cycle assessment of the production of crude palm oil (Part 3). J Oil Palm Res 22:895–903Google Scholar 13. Puah CW, Choo YM, Ong SH (2013) Production of palm oil with methane avoidance at palm oil mill: a case study of cradle-to-gate life cycle assessment. Am J Appl Sci 10(11):1351–1355. doi: 10.3844/ajassp.2013.1351.1355 CrossRefGoogle Scholar 14. International Organization for Standardization (ISO). Environmental management—life cycle assessment. European standard EN ISO 14040: Geneva, SwitzerlandGoogle Scholar 15. Rupilius W, Ahmad S (2013) The changing world of oleochemicals. Palm Oil Dev 44: 15–28. http://palmoilis.mpob.gov.my/publications/POD/pod44-wolfgang.pdf 16. Noweck K, Grafahrend W (2012) Fatty alcohols. In: Ullmann's encyclopedia of industrial chemistry, vol 14. Wiley-VCH, Weinheim. doi: 10.1002/14356007.a10_277.pub2 17. Deru M, Torcellini P (2007) Source energy and emission factors for energy use in buildings. National Renewable Energy Laboratory. NREL/TP-550-38617. http://www.nrel.gov/docs/fy07osti/38617.pdf. Accessed Jan 2015 18. Senyana LN (2011) Environmental impact comparison of distributed and centralized manufacturing scenarios. MS thesis. Rochester Institute of Technology. https://ritdml.rit.edu/bitstream/handle/1850/14566/LSenyanaThesis11-2011.pdf. Accessed Jan 2015 19. Alibaba. Industrial metal powder aluminum powder production line. http://www.alibaba.com/product-detail/Industrial-Metal-Powder-Aluminum-Powder-Production_447115456.html. Accessed Jan 2015 20. Serresa N, Tidua D, Sankareb S, Hlawkaa F (2011) Environmental comparison of MESO-CLAD® process and conventional machining implementing life cycle assessment. J Clean Prod 19(2011):1117–1124. doi: 10.1016/j.jclepro.2010.12.010 CrossRefGoogle Scholar 21. LSU Chemistry Homepage. Hydroformylation (Oxo) catalysts. http://chem-faculty.lsu.edu/stanley/webpub/4571-Notes/chap16-Hydroformylation.docx. Accessed Jan 2015 22. Lloyd L (2011) Handbook of industrial catalyst. In: Twigg MV, Spencer MS (eds) Fundamental and applied catalysis. Springer, New York. doi: 10.1007/978-0-397-49962-8 Google Scholar 23. Gankin VY, Gurevich GS. Chemical technology of oxosynthesis. 
Institute of Theoretical Chemistry: Editorial Board 'Khimiya' ('Chemistry'), St. Petersburg. http://en.itchem.ru/d/216737/d/chemical_technology_of_oxosyntheisi.pdf. Accessed Jan 2015 24. Karsa DR (ed) (1999) Industrial applications of surfactants IV. Royal Society of Chemistry, CambridgeGoogle Scholar 25. ICIS Chemical Business. Fatty Alcohols market reels from oversupply, weak demand. http://www.icis.com/resources/news/2002/10/28/183696/fatty-alcohols-market-reels-from-oversupply-weak-demand/. Accessed Jan 2015 26. Noweck K (2011) Production, technologies and applications of fatty alcohols. Lecture at the 4th workshop on fats and oils as renewable feedstock for the chemical industry. http://noweck.com/vortrag_fatty_alcohols_karlsruhe_2011.pdf. Accessed Jan 2015 27. SpecialChem. BASF increases capacity for higher oxo alcohols and plasticizers in Ludwigshafen. http://polymer-additives.specialchem.com/news/industry-news/basf-increases-capacity-for-higher-oxo-alcohols-and-plasticizers-in-ludwigshafen. Accessed Jan 2015 28. Sasol Media Center. Sasol's R1 billion world scale alcohol plant set to increase profitability. http://www.sasol.com/media-centre/media-releases/sasols-r1-billion-world-scale-alcohol-plant-set-increase-profitability. Accessed Jan 2015 29. Reuters. UPDATE 1-Sasol starts production at China chemical plant. http://www.reuters.com/article/2008/05/28/sasol-idUSL2830282820080528. Accessed Jan 2015 30. Weir HM (1945) The OXO process for alcohol manufacture from olefin. T.A.C. Report A1ML-1. http://www.fischer-tropsch.org/Bureau_of_Mines/reports/a1ml.htm. Accessed Mar 2014 31. US EPA HPV Challenge Program Submission. Alkenes, C6-C10, hydroformylation products, high-boiling. CAS Number 68526-82-9. http://www.epa.gov/hpv/pubs/summaries/alkc610h/c15013tp.pdf. Accessed Jan 2015 32. Saleh RY, Soled SL, Miseo S,Woo HS (2007) Hydrogenation of oxo aldehydes to oxo alcohols in the presence of a nickel-molybdenum catalyst. US Patent 7,232,934 B2Google Scholar 33. Koumpouras G (2012) Oxo alcohols PERP 2011–12. ChemSystems PERP Program. NexantGoogle Scholar 34. Kennel WE (1956) Production of alcohols of high purity by improved oxo process. US Patent 2,744,939 AGoogle Scholar 35. Tan YA, Halimah M, Zulkifli H, Vijaya S, Puah CW, Chong CL, Ma AN, Choo YM (2010) Life cycle assessment of refined palm oil production and fractionation (Part 4). J Oil Palm Res 22:193–926Google Scholar 36. Associação Brasileira de Química. Oleochemicals from palm kernel oil. http://www.abq.org.br/workshop/11/ADRIANO-SALES-%20FIRJAM_Oleochemicals-from-Palm-Kernel-Oil.pdf. Accessed Jan 2015 37. Ycharts. Malaysia palm kernel oil price historical data. http://ycharts.com/indicators/palm_kernel_oil_price. Accessed Jan 2015 38. Ycharts. Malaysia palm oil price historical data. http://ycharts.com/indicators/palm_oil_price. Accessed Jan 2015 39. Abdullah N, Sulaiman F (2013) The oil palm wastes in Malaysi. In: Matovic MD (ed) Biomass now—sustainable growth and use. ISBN: 978-953-51-1105-4, InTech. doi: 10.5772/55302. http://www.intechopen.com/books/biomass-now-sustainable-growth-and-use/the-oil-palm-wastes-in-malaysia. Accessed Mar 2014 40. University of Colorado Boulder. Methane leaks from palm oil wastewater are a climate concern, CU-Boulder study says. http://www.colorado.edu/news/releases/2014/02/27/methane-leaks-palm-oil-wastewater-are-climate-concern-cu-boulder-study-says. Accessed May 2014 41. 
Achten WMJ, Vandenbempt P, Almeida J, Mathijs E, Muys B (2010) Life cycle assessment of a palm oil system with simultaneous production of biodiesel and cooking oil in Cameroon. Environ Sci Technol 44(12):4809–4815. doi: 10.1021/es100067p CrossRefGoogle Scholar 42. Chiew YL, Shimada S (2013) Current state and environmental impact assessment for utilizing oil palm empty fruit bunches for fuel, fiber and fertilizer—a case study of Malaysia. Biomass Bioenergy 51:109–124. doi: 10.1016/j.biombioe.2013.01.012 CrossRefGoogle Scholar 43. Wicke B, Sikkema R, Dornburg V, Junginger M, Faaij A (2008) Drivers of land use change and the role of palm oil production in Indonesia and Malaysia. ISBN 978-90-8672-032-3. https://np-net.pbworks.com/f/Wicke,+Faaij+et+al+%282008%29+Palm+oil+and+land+use+change+in+Indonesia+and+Malaysia,+Copernicus+Institute.pdf. Accessed Mar 2014 44. Chiew YL, Iwata T, Shimada S (2011) System analysis for effective use of palm oil waste as energy resources. Biomass Bioenergy 35(7):2925–2935. doi: 10.1016/j.biombioe.2011.03.027 CrossRefGoogle Scholar 45. Reijnders L, Huijbregts MAJ (2008) Palm oil and the emission of carbon-based greenhouse gases. J Clean Prod 16:477–482. doi: 10.1016/j.jclepro.2006.07.054 CrossRefGoogle Scholar 46. Silalertruksa T, Bonnet S, Gheewala SH (2012) Life cycle costing and externalities of palm oil biodiesel in Thailand. J Clean Prod 28(2012):225–232. doi: 10.1016/j.jclepro.2011.07.022 CrossRefGoogle Scholar 47. Rabumi W (1998) Chemical composition of oil palm empty fruit bunch and its decomposition in the field. Thesis. Universiti Putra Malaysia. http://psasir.upm.edu.my/10413/1/FP_1998_3_A.pdf. Accessed Mar 2014 48. BW Plantation. Optimizing the use of oil palm by-product (EFB) as fertilizer supplement for oil palm http://www.bwplantation.com/document/Optimizing%20The%20Use%20of%20Empty%20Fruit%20Bunch%20%28EFB%29.pdf. Accessed May 2014 49. Menon NR, Rahman Z, Bakar N (2003) Empty fruit bunches evaluation: mulch in plantation vs. fuel for electricity generation. Oil Palm Ind Econ J 3(2):15–20. http://palmoilis.mpob.gov.my/publications/OPIEJ/opiejv3n2-15.pdf. Accessed Mar 2014 50. Hansen SB, Olsen SI, Ujang Z (2012) Greenhouse gas reductions through enhanced use of residues in the life cycle of Malaysian palm oil derived biodiesel. Bioresour Technol 104:358–366. doi: 10.1016/j.biortech.2011.10.069 CrossRefGoogle Scholar 51. Wicke B, Dornburg V, Junginger M, Faaij A (2008) Different palm oil production systems for energy purposes and their greenhouse gas implications. Biomass Bioenergy 32:1322–1337. doi: 10.1016/j.biombioe.2008.04.001 CrossRefGoogle Scholar 52. Schmidt JH, Dalgaard R (2009) Life cycle assessment of Malaysian palm oil—improvement options and comparison with European rapeseed oil. 2009 International palm oil life cycle assessment conference, Kuala LumpurGoogle Scholar Copyright information © The Author(s) 2016 Open AccessThis article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. Authors and Affiliations Jignesh Shah1Email authorErdem Arslan1John Cirucci2Julie O'Brien1Dave Moss31.Air ProductsAllentownUSA2.Spatial AnalyticsLehigh ValleyUSA3.Air Products Performance Materials DivisionAllentownUSA
Jignesh Shah, Erdem Arslan, John Cirucci, Julie O'Brien, Dave Moss. Comparison of Oleo- vs Petro-Sourcing of Fatty Alcohols via Cradle-to-Gate Life Cycle Assessment, Journal of Surfactants and Detergents, 2016, 1333-1351, DOI: 10.1007/s11743-016-1867-y | CommonCrawl |
\begin{definition}[Definition:Substitution (Formal Systems)/Letter]
Let $\FF$ be a formal language with alphabet $\AA$.
Let $\mathbf B$ be a well-formed formula of $\FF$.
Let $p$ be a letter of $\FF$.
Let $\mathbf A$ be another well-formed formula.
Then the '''substitution of $\mathbf A$ for $p$ in $\mathbf B$''' is the collation resulting from $\mathbf B$ by replacing all occurrences of $p$ in $\mathbf B$ by $\mathbf A$.
It is denoted as $\map {\mathbf B} {\mathbf A \mathbin {//} p}$.
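For instance, if $\mathbf B$ is taken to be the propositional formula $\paren {p \implies q}$ and $\mathbf A$ to be $\paren {r \lor s}$ (letters chosen purely for illustration), then:
:$\map {\mathbf B} {\mathbf A \mathbin {//} p} = \paren {\paren {r \lor s} \implies q}$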
Note that it is not immediate that $\map {\mathbf B} {\mathbf A \mathbin {//} p}$ is a well-formed formula of $\FF$.
This is either accepted as an axiom or proven as a theorem about the formal language $\FF$.
\end{definition} | ProofWiki |
Jeong-Joon Kim*
Development of Flash Memory Page Management Techniques
Abstract: Many flash memory-based buffer replacement algorithms that consider the characteristics of flash memory have recently been developed. Conventional flash memory-based buffer replacement algorithms have the disadvantage that operation speed slows down because, when selecting a replacement target page, they check only whether a page has been referenced and either do not consider the reference count or, when they do consider the reference time, consider only the elapsed time. Therefore, this paper seeks to solve these problems of conventional flash memory-based buffer replacement algorithms by dividing pages into groups and considering both the reference frequency and the reference time when selecting the replacement target page. In addition, because flash memory has a limited lifespan, candidates for replacement pages are selected based on the number of deletions.
Keywords: Flash Memory , Page Replacement Algorithm , SSD
1. Introduction

Flash memory has distinct features from hard disks. Since read and write operations in flash memory run at different speeds and data cannot be overwritten in place, an erase (deletion) operation is added to solve this problem. In addition, since flash memory allows only a limited number of erase operations, "Wear-Leveling" has to be considered [1,2].
"Buffer" refers to the area in which a part of the disk block is stored in the main memory to improve the system's performance by reducing the disk's number of I/O operations. The buffer is used to hold the data of frequently accessed storage devices. If the buffer has no free space, the replacement target page can be selected through the buffer replacement algorithm [3].
Buffer replacement algorithms for hard disks try to maintain a high buffer hit ratio because read and write operations have the same speed. However, since read and write operations on flash memory have different speeds, a buffer replacement algorithm for flash memory must consider both write operations and the Hit Ratio. Thus, directly applying a buffer replacement algorithm designed for hard disks to flash memory is undesirable. In addition, existing flash memory-based buffer replacement algorithms select the replacement target page based on whether the page is clean, whether it has been referenced, and the reference time information indicating the page's recency [4-7]. However, this information alone can result in the wrong page being selected for replacement, and there is the possibility that the Hit Ratio will be lowered [8-10].
Therefore, unlike existing algorithms that, when selecting a replacement target page, check only whether a page is clean or dirty, its reference time, and a binary flag indicating whether it has been referenced, we propose an algorithm that also considers the number of references and the number of deletion operations, which is a feature of flash memory. This allows the algorithm to take account of the hardware properties of flash memory more fully than existing algorithms.
2. Related Research
The buffer cache stores a portion of the entire disk's blocks to reduce physical I/O requests. Since the buffer cache is small relative to the entire disk, data is frequently replaced, and a buffer replacement algorithm is required to use it efficiently.
Since buffer replacement algorithms based on using a hard disk as a conventional storage device assume the same read and write speeds as the hard disk, the higher the buffer "Hit Ratio" that is maintained, the better the buffer performs. Therefore, many hard disk-based buffer replacement algorithms have been proposed that select the replacement page based on the recency or frequency of page references in the buffer.
However, since the write operation speed of flash memory is about 10 times slower than that of the read operation, replacing a clean page, which leaves its data unchanged in the buffer and therefore induces only a later read operation, costs less than replacing a dirty page, whose changed data must be written back. Therefore, buffer replacement algorithms that consider the write cost have been proposed for flash memory.
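To make this cost asymmetry concrete, the short Python sketch below compares the replacement cost of a clean page with that of a dirty page; the 10:1 write-to-read cost ratio is taken from the sentence above, and the specific cost units are an assumption made only for illustration.

    READ_COST = 1.0
    WRITE_COST = 10.0  # write assumed ~10x slower than read, as noted above

    def replacement_cost(dirty: bool) -> float:
        # Evicting a clean page only risks a future read of the same data;
        # evicting a dirty page additionally forces a write-back to flash.
        return READ_COST + (WRITE_COST if dirty else 0.0)

    # replacement_cost(False) -> 1.0, replacement_cost(True) -> 11.0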
2.1 CLRU
CLRU separates pages into a clean page list and a dirty page list, and divides pages into four types based on the reference time in order to select the replacement target page. Fig. 1 is an example of selecting the replacement target page in CLRU [11].
Example of CLRU.
As shown in Fig. 1, CLRU is divided into a clean page list and a dirty page list. Each list is divided into a Cold Area and a Hot Area, and the size of the Cold Area is determined by a normalized expression that considers the reference time.
[TeX:] $$NVET'_i = \frac{NVET_i - \min_{j \in m}\left(NVET_j\right)}{\max_{j \in m}\left(NVET_j\right) - \min_{j \in m}\left(NVET_j\right)}$$
As shown in Eq. (1), NVETi represents the elapsed time of page i, NVET'i is the normalized elapsed time of page i, and m represents the total number of pages in the page list. If NVETi is less than the average of the NVET values, the page is included as a hot page in the Hot Area of the page list; if NVETi is greater than or equal to the average, the page is included as a cold page in the Cold Area of the page list.
CLRU selects the page with the lowest access frequency in the Cold Area of the clean page list as the replacement target page. If there are no pages in the Cold Area of the clean page list, the page with the lowest access frequency in the Cold Area of the dirty page list is selected. If there are no Cold Area pages in the dirty page list, the page with the lowest access frequency in the Hot Area of the clean page list is selected as the replacement target, and if there are no Hot Area pages in the clean page list, the page with the lowest access frequency in the Hot Area of the dirty page list is selected as the replacement target page.
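A minimal Python sketch of this selection order is shown below. The page attributes (elapsed_time, access_count, dirty) and the per-list application of the hot/cold split from Eq. (1) are assumptions made for illustration, not details taken from the CLRU paper itself.

    def is_cold(page, list_pages):
        # A page is cold when its elapsed time is greater than or equal to the
        # average elapsed time of its list; since the normalization of Eq. (1)
        # is monotonic, this matches comparing NVET' values with their average.
        times = [p.elapsed_time for p in list_pages]
        return page.elapsed_time >= sum(times) / len(times)

    def clru_victim(pages):
        clean = [p for p in pages if not p.dirty]
        dirty = [p for p in pages if p.dirty]
        cold_clean = [p for p in clean if is_cold(p, clean)]
        cold_dirty = [p for p in dirty if is_cold(p, dirty)]
        hot_clean = [p for p in clean if not is_cold(p, clean)]
        hot_dirty = [p for p in dirty if not is_cold(p, dirty)]
        # Replacement priority: cold clean -> cold dirty -> hot clean -> hot dirty,
        # taking the least frequently accessed page within the chosen group.
        for group in (cold_clean, cold_dirty, hot_clean, hot_dirty):
            if group:
                return min(group, key=lambda p: p.access_count)
        return None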
CLRU reduces flash memory write operations by delaying the replacement of cold dirty pages relative to cold clean pages, and increases the page hit ratio by replacing cold pages ahead of hot pages. However, when selecting the page to replace, CLRU considers only the time at which a page was last referenced; the number of references is not considered. If the number of references were also taken into account, a more appropriate replacement target page could be selected based on more information.
2.2 HDC
When selecting the replacement target page, HDC divides the buffer cache into a clean page list and a dirty page list and, in addition to the write operation delay used by existing flash memory-based buffer replacement algorithms, considers a per-page weight value computed with a sub-paging technique [12]. Fig. 2 shows the sub-paging technique.
Sub paging method
With a conventional flash memory-based buffer replacement algorithm, replacing a dirty page causes write operations for the whole page. Here, one page in the memory buffer cache is 4 kB while one flash memory page is 512 B, so selecting a dirty page in the buffer for replacement causes eight write operations in total. However, recent research has noted that a dirty page in the buffer may still contain clean data, so some of these eight write operations would write back data that has not been modified and is already stored in flash memory. Writing this clean data should be avoided, since it causes more write operations than are actually needed. HDC proposes a sub-paging technique to address this problem. As shown in Fig. 2, the sub-paging technique divides each buffer page into flash memory-sized units to prevent excessive write operations. Without sub-paging, page P1 in the buffer would generate eight write operations. With sub-paging, P1 is divided into eight sub-pages, and each sub-page is checked to see whether it is clean or dirty. Because P1 has two clean sub-pages and six dirty sub-pages, it generates only six write operations in total.
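As a simple illustration, the following Python sketch counts how many 512 B sub-pages of a 4 kB buffer page actually need to be written back; representing the page as its original and current contents in byte strings is an assumption made only for this example.

    SUBPAGE_SIZE = 512   # flash memory page size in bytes
    PAGE_SIZE = 4096     # buffer cache page size in bytes

    def dirty_subpages(original: bytes, current: bytes):
        # Compare each 512 B sub-page of the cached page with the data
        # originally read from flash; only modified sub-pages need a write.
        assert len(original) == len(current) == PAGE_SIZE
        dirty = []
        for i in range(0, PAGE_SIZE, SUBPAGE_SIZE):
            if original[i:i + SUBPAGE_SIZE] != current[i:i + SUBPAGE_SIZE]:
                dirty.append(i // SUBPAGE_SIZE)
        return dirty  # e.g., 6 dirty sub-pages -> 6 write operations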
Example of HDC.
Fig. 3 shows an example of selecting the replacement target page in HDC. The HDC buffer is divided into two lists, a clean page list and a dirty page list. The weight value is based on CostOfWriting, the ratio of dirty sub-pages of each page in the list; HotDegree, which reflects the time at which the page was referenced; and λ, which represents the ratio between the read and write operation speeds. The weight value RI(p) can be expressed by the following equation:
[TeX:] $$RI(p) = \lambda \times \mathrm{HotDegree}(p) + (1 - \lambda) \times \mathrm{CostOfWriting}(p)$$
The clean page and dirty page lists are sorted in ascending order based on their weight values. The page with the largest weight value in each list is selected as that list's replacement candidate page, and the candidate that has the smaller weight value is chosen as the replacement target page.
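The selection step can be sketched as follows (illustrative Python only; the attribute names `hot_degree` and `cost_of_writing` are my assumptions, and the per-list candidate choice simply mirrors the rule as stated above):

```python
def ri(page, lam):
    """Weight value RI(p) = λ·HotDegree(p) + (1-λ)·CostOfWriting(p) (Eq. (2))."""
    return lam * page["hot_degree"] + (1 - lam) * page["cost_of_writing"]

def hdc_victim(clean_list, dirty_list, lam=0.5):
    """Take the largest-weight page of each list as that list's candidate,
    then evict the candidate with the smaller weight, following the rule
    described in the text above."""
    candidates = []
    for lst in (clean_list, dirty_list):
        if lst:
            candidates.append(max(lst, key=lambda p: ri(p, lam)))
    return min(candidates, key=lambda p: ri(p, lam))

clean = [{"id": "c1", "hot_degree": 0.2, "cost_of_writing": 0.0}]
dirty = [{"id": "d1", "hot_degree": 0.6, "cost_of_writing": 0.75}]
print(hdc_victim(clean, dirty)["id"])  # c1 has the smaller RI value and is evicted
```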
The HDC selects replacement pages considering the reference time considered by the existing flash memory algorithm, the ratio of the dirty subpages using the sub-paging technique, and the ratio between the flash memory's read and write operation speeds. This has the advantage that it reduces the write operation for the flash memory, but there is the disadvantage that the limited number of erase operations in the flash memory is not considered.
3. Page Replacement Algorithm
3.1 Hot and Cold Classification Algorithm
The Hot and Cold Classification Algorithm is an algorithm that determines whether a page in a buffer is hot or cold.
The existing algorithm only considers the reference time in hot page and cold page discrimination, but in the Hot and Cold Classification Algorithm proposed in this paper, the reference time and reference frequency are considered together. Fig. 4 shows an example that considers the reference time.
The existing research contains an example that considers the reference time.
Previous research studies derived the normalized value of each page by dividing the difference between the elapsed time value of an arbitrary page and the minimum elapsed time value of the pages in the buffer by the difference between the maximum and minimum elapsed times of the pages in the buffer, where the elapsed time value is the difference between the last time the page was referenced and the current time. For example, the elapsed time value of p2 in Fig. 4 is 20, the maximum elapsed time value in the buffer is 50, and its minimum value is 10. Therefore, the normalized value of p2 is obtained as (20 – 10) / (50 – 10) = 1/4 = 0.25, and the normalized values of all pages in the buffer can be derived in this way.
Example of the reference time considered in this paper.
However, this paper divides the difference between the value of an arbitrary page and the minimum value of the pages in the buffer by the difference between the maximum and minimum values, where the value of a page is its last reference time. For example, in Fig. 5, the normalized value of p2 is 3/4 = 0.75, which is obtained by dividing the difference between 12:40 and 12:10 by the difference between the maximum value of 12:50 in the buffer and the minimum value of 12:10. This reduces the computation cost, since the normalized value is derived from the last reference time only, compared with the existing method that computes the elapsed time as the difference between the current time and the last reference time.
Next, Fig. 6 shows the consideration of the number of references.
Example considering reference counts.
As Fig. 6 shows, reference counts are normalized in the same way as reference times. For example, the normalized value of p2 is calculated by dividing the difference between the reference count 7 of p2 and the minimum reference count 2 of the pages in the buffer by the difference between the maximum reference count 8 and the minimum reference count 2 in the buffer. In this manner, the normalized value of the reference count can be derived for every page in the buffer.
The proposed method, which considers the normalized values of the reference time and the reference count of a specific page, can be expressed by the following equations using the terminology shown in Table 1.
Terminology used to describe the Hot and Cold Classification Algorithm
Term Explanation
pi A specific page
p = {p1, p2, ..., pn} The entire set of pages in the buffer list
tpi The last reference time of a specific page pi
tp = {tp1, tp2, ..., tpn} The set of last reference times of the pages in the buffer list
Tpi The normalized value of the reference time of a specific page pi
cpi The number of references of a specific page pi
cp = {cp1, cp2, ..., cpn} The set of reference counts of the pages in the buffer list
Cpi The normalized value of the reference count of a specific page pi
Based on terminology used in Table 1, the formula for obtaining the normalized value of the reference time of a specific page can be expressed as Eq. (3).
[TeX:] $$T_{p_i} = \frac{t_{p_i} - \min(t_p)}{\max(t_p) - \min(t_p)}$$
Next, the formula for calculating the normalized value of a specific page's reference count can be expressed as Eq. (4).
[TeX:] $$C_{p_i} = \frac{c_{p_i} - \min(c_p)}{\max(c_p) - \min(c_p)}$$
Next, we introduce a method that considers the reference time and count together in Fig. 7.
An example in which the reference time and the reference count are considered together.
Hot and cold pages are identified by assigning the weights ω and 1 − ω to the normalized value Tpi of the reference time and the normalized value Cpi of the reference count of a specific page pi in the buffer, respectively. The weight ω is assigned to the reference time and the reference count according to the importance set by the user; ω denotes the weight value for the reference time and has a value in the range 0–1, and 1 − ω denotes the weight value for the reference count and its value depends on ω.
The formula showing the Hot and Cold Classification Algorithm that considering the weight values can be expressed as Eq. (5).
[TeX:] $$\omega \times T _ { p _ { i } } + ( 1 - \omega ) \times C _ { p _ { i } }$$
As shown in Fig. 7, the normalized reference time Tpi of p1 is 1 and its normalized reference count Cpi is 0, while the normalized reference time Tpi of p3 is 0.5 and its normalized reference count Cpi is 1. The value ω = 0.5 gives the same weight to the reference time and the reference count. Using Eq. (5), p1 has the value 0.5 and p3 has the value 0.75.
For the Hot and Cold Classification Algorithm, the average of the values computed by Eq. (5) over all pages in the buffer is denoted Avg, and the value [TeX:] $$\omega \times T_{p_i} + (1 - \omega) \times C_{p_i}$$ of each page is compared with Avg. If this value is larger than Avg for a certain page, the page is regarded as a hot page, and if it is smaller than or equal to Avg, the page is regarded as a cold page.
Based on this, it can be shown how an arbitrary page can be regarded as a cold page (Eq. (6)) or hot page (Eq. (7)).
[TeX:] $$\omega \times T_{p_i} + (1 - \omega) \times C_{p_i} \leq \frac{1}{n} \sum_{i=1}^{n} \left[ \omega \times T_{p_i} + (1 - \omega) \times C_{p_i} \right]$$
A page satisfying this expression is a cold page, where Avg can be expressed as [TeX:] $$\frac{1}{n} \sum_{i=1}^{n} \left[ \omega \times T_{p_i} + (1 - \omega) \times C_{p_i} \right].$$
[TeX:] $$\omega \times T_{p_i} + (1 - \omega) \times C_{p_i} > \frac{1}{n} \sum_{i=1}^{n} \left[ \omega \times T_{p_i} + (1 - \omega) \times C_{p_i} \right]$$
As shown in Fig. 7, the value of [TeX:] $$0.5 \times T_{p_i} + 0.5 \times C_{p_i}$$ for p1 is 0.5, while Avg, the average value over the pages in the buffer calculated using Eq. (5), is 0.549. Therefore, p1 is a cold page because it has a value smaller than Avg. The value of [TeX:] $$0.5 \times T_{p_i} + 0.5 \times C_{p_i}$$ for p3 is 0.75, making p3 a hot page because it has a value larger than Avg. In this manner, it is possible to determine whether pages in the buffer are hot or cold.
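The whole classification of Eqs. (3)–(7) fits in a few lines of Python; the sketch below is illustrative only (my own code, and the sample data are not the values of Fig. 7):

```python
def normalize(values):
    """Min-max normalize a dict of raw values (Eqs. (3) and (4))."""
    lo, hi = min(values.values()), max(values.values())
    span = (hi - lo) or 1  # avoid division by zero when all values are equal
    return {k: (v - lo) / span for k, v in values.items()}

def classify_hot_cold(ref_times, ref_counts, w=0.5):
    """Classify each page as 'hot' or 'cold' using Eqs. (5)-(7).

    ref_times:  dict page -> last reference time (larger = more recent)
    ref_counts: dict page -> number of references
    w: weight on the reference-time term (1 - w goes to the count term)
    """
    T = normalize(ref_times)
    C = normalize(ref_counts)
    score = {p: w * T[p] + (1 - w) * C[p] for p in ref_times}
    avg = sum(score.values()) / len(score)
    labels = {p: ("hot" if s > avg else "cold") for p, s in score.items()}
    return labels, score, avg

# Illustrative data only (not the values of Fig. 7)
times  = {"p1": 1240, "p2": 1250, "p3": 1210, "p4": 1230}
counts = {"p1": 2,    "p2": 7,    "p3": 8,    "p4": 3}
labels, score, avg = classify_hot_cold(times, counts)
print(labels, avg)
```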
3.2 Algorithm
The IBRA algorithm proposed in this paper consists of a clean page list and a dirty page list. Fig. 8 shows the structure of the buffer list for the IBRA algorithm.
Buffer list structure of the IBRA algorithm.
As shown in Fig. 8, each page in the list carries HC_Value, the value calculated by the Hot and Cold Classification Algorithm to check whether the page is hot or cold; Partial, which records the ratio of dirty sub-pages and thus whether a dirty page is a partial or a full dirty page; and EraseCount, which carries the deletion operation count information.
The replacement target page is selected in the order: Cold Clean Page, Cold Partial Dirty Page, Cold Full Dirty Page, Hot Clean Page, Hot Partial Dirty Page, and then Hot Full Dirty Page according to the priority of the replacement page selection proposed in this paper.
If there are many Cold Clean Pages in the buffer, replace the Cold Clean Page with the lowest HC_Value value first. If there are no Cold Clean Pages in the buffer, the next priority, Cold Partial Dirty Page, is selected for replacement. If there are many Cold Partial Dirty Pages in the buffer, the page with the smallest product of the HC_Value and the Partial is substituted first. If there are no Cold Partial Dirty Pages in the buffer, the next priority Cold Full Dirty Page is selected as the replacement target. If there are many Cold Full Dirty Pages in the buffer, replace the Cold Full Dirty Page with the one with the lowest HC_Value value.
If the buffer list is filled with pages as shown in Fig. 9, we check whether there is a Cold Clean Page according to the priority proposed in this paper. Currently, P1, P4, and P5 are Cold Clean Pages. The HC_Value is checked here. In the above situation, since the HC_Value of P5 is the smallest at 0.33, P5 is selected as the replacement target page.
Fig. 10 shows the EraseConsider algorithm for selecting replacement page candidates considering the deletion frequency.
As shown in Fig. 10, the deletion algorithm works as follows. First, the number of page deletions that exist in the buffer list is obtained, and an average value is assigned to Avg. Next, the number of deletion operations for pages in the buffer is compared with the Avg value, and pages smaller than the average are extracted as Pages and used to perform the SelectVictim algorithm. Fig. 11 shows the SelectVictim algorithm for selecting the pages to be replaced in the buffer.
Example of selecting the page to be exchanged for the IBRA algorithm.
Fig. 10. Algorithm to consider the deletion operation.
As shown in Fig. 11, the replacement target page is selected as follows. First, if a Cold page exists in the buffer list and a Clean page exists in the CLlist, the page with the lowest Hot_Cold_Value among the Clean pages is assigned to Victim. If there is no page in the CLlist, the page with the smallest product of Partial_Value and Hot_Cold_Value among the pages in the DRlist whose Partial_Value is < 1 is assigned to Victim. If no Dirty page of the DRlist has a Partial_Value < 1, the page with the lowest Hot_Cold_Value among the pages whose Partial_Value is not < 1 is assigned to Victim.
However, if there is no Cold page in the buffer list, the page with the lowest Hot_Cold_Value of the Clean pages among the Hot pages, rather than the Cold page existing in the CLlist is assigned to Victim and the Victim is returned. If there is no page in the CLlist, assign the page that has the smallest value among the products of Partial_Value and Hot_Cold_Value among the pages whose Partial_Value for the dirty page is < 1 in the DRlist to Victim. If the Dirty page of the DRlist does not have a page with a Partial < 1, the page with the lowest Hot_Cold_Value among the pages whose Partial_Value of the dirty page is not < 1 is assigned to Victim and the Victim is returned.
Algorithm for selecting the page to be replaced.
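Putting the two procedures together, a compact illustrative sketch follows (my own Python, not the authors' code; the field names and the fallback when every page exceeds the average erase count are assumptions):

```python
def erase_consider(pages):
    """Keep only pages whose erase count is below the buffer average (Fig. 10)."""
    avg = sum(p["erase_count"] for p in pages) / len(pages)
    candidates = [p for p in pages if p["erase_count"] < avg]
    return candidates or pages  # fall back to all pages if none qualify (assumption)

def select_victim(pages):
    """Pick the replacement page by the priority order of Fig. 11:
    Cold Clean -> Cold Partial Dirty -> Cold Full Dirty -> Hot Clean ->
    Hot Partial Dirty -> Hot Full Dirty."""
    def group(hot, clean, partial):
        return [p for p in pages
                if p["hot"] == hot and p["clean"] == clean
                and (clean or (p["partial"] < 1) == partial)]
    for hot in (False, True):                      # cold pages first, then hot pages
        clean_pages = group(hot, True, None)
        if clean_pages:
            return min(clean_pages, key=lambda p: p["hc_value"])
        partial_dirty = group(hot, False, True)
        if partial_dirty:
            return min(partial_dirty, key=lambda p: p["hc_value"] * p["partial"])
        full_dirty = group(hot, False, False)
        if full_dirty:
            return min(full_dirty, key=lambda p: p["hc_value"])
    return None

pages = [
    {"id": "P1", "hot": False, "clean": True,  "partial": 0.0, "hc_value": 0.40, "erase_count": 3},
    {"id": "P5", "hot": False, "clean": True,  "partial": 0.0, "hc_value": 0.33, "erase_count": 2},
    {"id": "P2", "hot": True,  "clean": False, "partial": 0.5, "hc_value": 0.80, "erase_count": 9},
]
victim = select_victim(erase_consider(pages))
print(victim["id"])  # P5, the Cold Clean page with the lowest HC_Value
```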
4. Performance Evaluation
This paper evaluates the performance of the IBRA algorithm using Flash-DBSim [13], a simulation platform capable of evaluating flash memory-based algorithms. Flash-DBSim is often used to evaluate flash memory-based buffer replacement algorithms because users can specify the flash memory specifications. Table 2 shows the detailed specifications of the flash memory that was set up to evaluate the performance of the IBRA algorithm proposed in this paper.
We verified the efficiency of the IBRA algorithm presented in this paper by comparing it with the existing flash memory-based buffer replacement algorithms AD-LRU, CLRU, and HDC. Traces are used to evaluate the performance of the IBRA algorithm, and the characteristics of each trace are shown in Table 3.
Flash memory specifications
Item Value
Page size (byte) 4,096
Block size (page) 64
Page reading speed (μs/page) 25 (max)
Page writing speed (μs/page) 220
Block delete speed (ms/block) 1.5
Deletion threshold (durability) 100,000
Trace characteristics
Trace name Number of requests Read/Write ratio (%) Reference pattern
Random Access 1,000,000 50/50 Uniform
Read-Most 1,000,000 90/10 Uniform
Write-Most 1,000,000 10/90 Uniform
The performance is compared by applying the 8 MB buffer size to the algorithm, and the buffer hit rate, number of write operations, and execution time are each compared in order.
4.1 Buffer Hit Rate
As shown in Fig. 12, IBRA has the highest hit ratio in all traces T1, T2, and T3: the IBRA hit ratio is 84% at T1, 61% at T2, and 90% at T3. T3 has the highest hit rate because its 10%/90% read/write ratio delays the replacement of dirty pages.
Buffer hit ratio of the existing algorithms and IBRA for each trace.
4.2 Number of Write Operations
Number of write operations of the existing algorithms and IBRA for each trace.
As shown in Fig. 13, the number of write operations in the trace T1 is 75,000 times, the number of write operations in T2 is 19,500 times, and the number of write operations in T3 is 80,000 times. T3 has a higher write operation rate than other traces because the read/write ratio is 10%/90%. However, since the algorithm proposed in this paper selects a candidate group with a small number of deletion operations and selects a replacement target page, the lowest number of write operations is shown in T3.
4.3 Run Time
Run time of the existing algorithms and IBRA for each trace.
As shown in Fig. 14, the run time of IBRA in this paper is 1.5–2 times longer than other algorithms in all traces. Since the candidates are selected before the replacement page is selected in this paper, the run time is slightly increased. However, the write operation is about 10 times slower than the read operation in flash memory, and the erase operation is about 10 times slower than the write operation and about 100 times slower than the read operation. Therefore, in terms of overall flash memory management, it is more important to reduce the write and erase operations while increasing the buffer hit rate rather than the run time.
5. Conclusion
As we enter the Big Data age, in which the amount of data increases exponentially, devices that can store data are also constantly evolving. Flash memory, which is a type of nonvolatile memory, has the advantages of being faster and lighter than a hard disk, so it has been increasingly adopted as a storage device in various fields in recent years. The buffer is intended to store pages with a large number of references in order to reduce the speed difference between the CPU and the storage device, and buffer replacement algorithms have been proposed to improve the buffer's performance. However, the existing buffer replacement algorithms were designed for hard disks, in which read and write operations have the same speed. Since such algorithms are unsuitable for flash memory, whose operations have different speeds, many studies on buffer replacement algorithms that consider the characteristics of flash memory have been conducted recently.
Among the proposed buffer replacement algorithms that consider the characteristics of flash memory, AD-LRU considers the reference count, but since it only records re-referencing as a binary value, it cannot represent the accurate reference count of a page that is referenced more than twice. CLRU considers the elapsed time, which indicates the page's update pattern, when distinguishing hot and cold pages, so it takes a long time to execute, and HDC has the disadvantage that it does not consider the reference count.
Therefore, this paper first divides the pages into six groups, classifying them in more detail than previous studies, and presents a Hot and Cold Classification Algorithm that considers the reference time and the reference count together. Furthermore, considering the limited lifetime of flash memory, the candidates for replacement pages are selected based on the number of deletion operations.
Finally, we compared the proposed IBRA algorithm with AD-LRU, CLRU, and HDC. The IBRA algorithm showed the highest buffer hit rate, IBRA had the lowest number of write operations, and IBRA showed the third fastest execution time.
This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIP) (No. 2017R1A2B4011243).
Jeong-Joon Kim
He received his B.S. and M.S. in Computer Science at Konkuk University in 2003 and 2005, respectively. In 2010, he received his Ph.D. in at Konkuk University. He is currently a professor at the department of Computer Science at Korea Polytechnic University. His research interests include database systems, big data, semantic web, geographic information systems (GIS), and ubiquitous sensor network (USN), etc.
1 S. Ahn. S. Hyun. T. Kim, H. Bahn, "A compressed file system manager for flash memory based consumer electronics devices," Journal of IEEE Transactions on Consumer Electronics, 2013, vol. 59, no. 3, pp. 544-549. doi:[[[10.1109/TCE.2013.6626236]]]
2 H. Li, C. Yang, H. Tseng, "Energy-aware flash memory management in virtual memory system," Journal of IEEE Transactions on Very Large Scale Integration Systems, 2008, vol. 16, no. 8, pp. 952-964. doi:[[[10.1109/TVLSI.2008.2000517]]]
3 A. Dan, D. Towsley, "An approximate analysis of the LRU and FIFO buffer replacement schemes," in Proceedings of ACM SIGMETRICS Conference on Measurement and Modeling of Computer Systems, Boulder, CO, 1990;pp. 143-152. doi:[[[10.1145/98457.98525]]]
4 S. Jiang, F. Chen, X. Zhang, "CLOCK-Pro: an effective improvement of the CLOCK replacement," in Proceeding of the USENIX Annual Technical Conference, Anaheim, CA, 2005;pp. 323-336. custom:[[[https://dl.acm.org/citation.cfm?id=1247395]]]
5 S. Jiang, X. Zhang, "LIRS: an efficient low inter-reference recency set replacement policy to improve buffer cache performance," in Proceeding of ACM SIGMETRICS Conference on Measurement and Modeling of Computer Systems, Marina Del Rey, CA, 2002;pp. 31-42. doi:[[[10.1145/511334.511340]]]
6 T. Johnson, D. Shasha, "2Q: a low overhead high performance buffer management replacement algorithm," in Proceeding of the 20th International Conference on Very Large Data Bases, Santiago de Chile, Chile, 1994;pp. 439-450. custom:[[[https://dl.acm.org/citation.cfm?id=672996]]]
7 E. O'Neil, P. E. O'Neil, G. Weikum, "The LRU-K page replacement algorithm for database disk buffering," in Proceeding of ACM SIGMOD International Conference on Management of Data, Washington, DC, 1993;pp. 297-306. doi:[[[10.1145/170035.170081]]]
8 H. Jung, H. Shim, S. Park, S. Kang, J. Cha, "LRU-WSR: integration of LRU and writes sequence recording for flash memory," Journal of IEEE Transactions on Consumer Electronics, 2012, vol. 54, no. 3, pp. 1215-1223. doi:[[[10.1109/TCE.2008.4637609]]]
9 Z. Li, P . Jin, X. Su, K. Cui, L. Yue, "CCF-LRU: a new buffer replacement algorithm for flash memory," Journal of IEEE Transactions on Consumer Electronics, 2009, vol. 55, no. 3, pp. 1351-1359. doi:[[[10.1109/TCE.2009.5277999]]]
10 S. Y. Park, D. Jung, J. U. Kang, J. S. Kim, J. Lee, "CFLRU: a replacement algorithm for flash memory," in Proceeding of the International Conference on Compilers, Architecture and Synthesis for Embedded Systems, Seoul, Korea, 2006;pp. 234-241. doi:[[[10.1145/1176760.1176789]]]
11 G. Xu, F. Lin, Y. Xiao, "CLRU: a new page replacement algorithm for NAND flash-based consumer electronics," Journal of IEEE Transactions on Consumer Electronics, 2014, vol. 60, no. 1, pp. 38-44. doi:[[[10.1109/TCE.2014.6780923]]]
12 M. Lin, S. Chen, G. Wang, T. Wu, "HDC: an adaptive buffer replacement algorithm for NAND flash memory-based databases," Optik-International Journal for Light and Electron Optics, 2014, vol. 125, no. 3, pp. 1167-1173. doi:[[[10.1016/j.ijleo.2013.07.162]]]
13 X. Su, P . Jin, X. Xiang, K. Cui, L. Yue, "Flash-DBSim: a simulation tool for evaluating flash-based database algorithms," in Proceedings of International Conference on Computer Science and Information Technology, Beijing, China, 2009;pp. 185-189. doi:[[[10.1109/ICCSIT.2009.5234967]]]
Received: April 12 2017
Corresponding Author: Jeong-Joon Kim* ([email protected])
Jeong-Joon Kim*, Dept. of Computer Science and Engineering, Korea Polytechnic University, Siheung, Korea, [email protected]
A star-polygon is drawn on a clock face by drawing a chord from each number to the fifth number counted clockwise from that number. That is, chords are drawn from 12 to 5, from 5 to 10, from 10 to 3, and so on, ending back at 12. What is the degree measure of the angle at each vertex in the star-polygon?
Consider the two chords with an endpoint at 5. The arc subtended by the angle determined by these chords extends from 10 to 12, so the degree measure of the arc is $(2/12)(360)=60$. By the Central Angle Theorem, the degree measure of this angle is $(1/2)(60)=30$. By symmetry, the degree measure of the angle at each vertex is $\boxed{30}$.
Statistical model of ligand substitution
Recently, I was told that in case of a particular step of a generic ligand substitution reaction:
$$\ce{M(OH2)_{$N - n$}L_{n} + L <=> M(OH2)_{$N - n - 1$}L_{$n + 1$} + H2O}$$
The probability of the forward reaction and by extension, the equilibrium constant of this step, $K_n$ would be proportional to
$$\frac{N - n}{n + 1}$$
by a purely statistical analysis. Now I have thought about this for quite a bit, but I can't understand the mathematical reasoning behind arriving at this expression. I suspect it has something to do with the numbers of the ligand being replaced and the ligand which is replacing the other one. Can anyone explain the process of arriving at this expression using simple (if possible) reasoning?
equilibrium coordination-compounds
Shoubhik R Maiti
I may add a more complete answer later, but for now, read this page, in particular example 2 and the remaining paragraphs in that section. chem.libretexts.org/Bookshelves/Inorganic_Chemistry/… – Tyberius Apr 14 '19 at 20:55
I think the answer may just come down to a simple counting of available sites. The equilibrium constant for a reaction is equal to the ratio of the forward and reverse reaction rates. For the forward reaction, there are $N-n$ sites available at which a ligand can replace an $\ce{H2O}$. Conversely, for the reverse reaction, there are $n+1$ ligand sites at which a water molecules can replace it. If we assume that in each case the reaction rate with $m$ sites available is equal to $m$ times the reaction rate with $1$ site available, we obtain an equilibrium constant that is proportional to the ratio of sites available for the forward and reverse reactions.
$$K=\frac{k_{f,N-n}}{k_{r,n+1}}\approx\frac{(N-n)k_{f,1}}{(n+1)k_{r,1}}\propto\frac{(N-n)}{(n+1)}$$
Tyberius
I think the easy way out is to invoke $S_\mathrm m = R \ln \Omega$. If we assume that for a generic complex $\ce{MA_{n}B_{$N-n$}}$,
$$\Omega = {N \choose n} = \frac{N!}{n!(N-n)!} \quad \left[ = {N \choose N-n} \right]$$
and that for the individual molecules $\ce{A}$ and $\ce{B}$, $\Omega = 1$, then the equilibrium constant $K$ for
$$\ce{MA_{n}B_{$N-n$} + A <=> MA_{n + 1}B_{$N-n-1$} + B}$$
is given by
$$\begin{align} K &= \exp\left(\frac{-\Delta_\mathrm r G}{RT}\right) \\ &= \exp\left(\frac{\Delta_\mathrm r S}{R}\right) \\ &= \exp\left(\frac{S_\mathrm{m}(\ce{MA_{n + 1}B_{$N-n-1$}}) + S_\mathrm{m}(\ce{B}) - S_\mathrm{m}(\ce{MA_{n}B_{$N-n$}}) - S_\mathrm{m}(\ce{A})}{R}\right) \\ &= \exp[\ln\Omega(\ce{MA_{n + 1}B_{$N-n-1$}}) + \ln\Omega(\ce{B}) - \ln\Omega(\ce{MA_{n}B_{$N-n$}}) - \ln\Omega(\ce{A})] \\ &= \exp\left[\ln\left(\frac{\Omega(\ce{MA_{n + 1}B_{$N-n-1$}})\Omega(\ce{B})}{\Omega(\ce{MA_{n}B_{$N-n$}})\Omega(\ce{A})}\right)\right] \\ &= \frac{\Omega(\ce{MA_{n + 1}B_{$N-n-1$}})\Omega(\ce{B})}{\Omega(\ce{MA_{n}B_{$N-n$}})\Omega(\ce{A})} \\ &= \frac{N!}{(n+1)!(N-n-1)!} \cdot \frac{n!(N-n)!}{N!} \\ &= \frac{N-n}{n+1} \end{align}$$
The reason for ignoring $\Delta_\mathrm r H$ is because we are only interested in statistical effects, i.e. entropy, and we don't care about the actual stability of the complex or the strength of the M–L bonds. However, the exact justification for assuming this form for $\Omega$ still eludes me. It makes intuitive sense (that there are $N!/(n!(N-n)!)$ ways to arrange $n$ different ligands in $N$ different coordination sites), but I can't convince myself (and don't want to attempt to convince you) that it's entirely rigorous. In particular, I feel like symmetry should play a role here; maybe it is simply that the effects of any symmetry eventually cancel out.
Tyberius
orthocresol♦
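As a quick numerical check (an addition of mine, not part of the question or the answers above), the ratio of binomial degeneracies used in the second answer can be verified to reduce to $(N-n)/(n+1)$ for every step:

```python
from math import comb
from fractions import Fraction

def statistical_K(N, n):
    """K for MA_nB_{N-n} + A <=> MA_{n+1}B_{N-n-1} + B, taking Omega = C(N, n)."""
    return Fraction(comb(N, n + 1), comb(N, n))

N = 6  # e.g. an octahedral complex
for n in range(N):
    assert statistical_K(N, n) == Fraction(N - n, n + 1)
    print(n, statistical_K(N, n))
```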
Stable map
In mathematics, specifically in symplectic topology and algebraic geometry, one can construct the moduli space of stable maps, satisfying specified conditions, from Riemann surfaces into a given symplectic manifold. This moduli space is the essence of the Gromov–Witten invariants, which find application in enumerative geometry and type IIA string theory. The idea of stable map was proposed by Maxim Kontsevich around 1992 and published in Kontsevich (1995).
Because the construction is lengthy and difficult, it is carried out here rather than in the Gromov–Witten invariants article itself.
The moduli space of smooth pseudoholomorphic curves
Fix a closed symplectic manifold $X$ with symplectic form $\omega $. Let $g$ and $n$ be natural numbers (including zero) and $A$ a two-dimensional homology class in $X$. Then one may consider the set of pseudoholomorphic curves
$((C,j),f,(x_{1},\ldots ,x_{n}))\,$
where $(C,j)$ is a smooth, closed Riemann surface of genus $g$ with $n$ marked points $x_{1},\ldots ,x_{n}$, and
$f:C\to X\,$
is a function satisfying, for some choice of $\omega $-tame almost complex structure $J$ and inhomogeneous term $\nu $, the perturbed Cauchy–Riemann equation
${\bar {\partial }}_{j,J}f:={\frac {1}{2}}(df+J\circ df\circ j)=\nu .$
Typically one admits only those $g$ and $n$ that make the punctured Euler characteristic $2-2g-n$ of $C$ negative; then the domain is stable, meaning that there are only finitely many holomorphic automorphisms of $C$ that preserve the marked points.
The operator ${\bar {\partial }}_{j,J}$ is elliptic and thus Fredholm. After significant analytical argument (completing in a suitable Sobolev norm, applying the implicit function theorem and Sard's theorem for Banach manifolds, and using elliptic regularity to recover smoothness) one can show that, for a generic choice of $\omega $-tame $J$ and perturbation $\nu $, the set of $(j,J,\nu )$-holomorphic curves of genus $g$ with $n$ marked points that represent the class $A$ forms a smooth, oriented orbifold
$M_{g,n}^{J,\nu }(X,A)$
of dimension given by the Atiyah-Singer index theorem,
$d:=\dim _{\mathbb {R} }M_{g,n}(X,A)=2c_{1}^{X}(A)+(\dim _{\mathbb {R} }X-6)(1-g)+2n.$
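For concreteness (an illustrative specialization, not part of the original article), take $X=\mathbb {CP} ^{2}$, so that $\dim _{\mathbb {R} }X=4$ and $c_{1}^{X}(A)=3d$ for a class $A$ of degree $d$. For $g=0$ the formula gives

$\dim _{\mathbb {R} }M_{0,n}^{J,\nu }(\mathbb {CP} ^{2},d)=2(3d)+(4-6)(1-0)+2n=6d-2+2n.$

Requiring each of the $n$ marked points to map to a chosen general point of $\mathbb {CP} ^{2}$ cuts the dimension by $4n$, so the resulting count is zero-dimensional exactly when $n=3d-1$, recovering the classical condition that rational curves of degree $d$ are counted through $3d-1$ general points.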
The stable map compactification
This moduli space of maps is not compact, because a sequence of curves can degenerate to a singular curve, which is not in the moduli space as we've defined it. This happens, for example, when the energy of $f$ (meaning the L2-norm of the derivative) concentrates at some point on the domain. One can capture the energy by rescaling the map around the concentration point. The effect is to attach a sphere, called a bubble, to the original domain at the concentration point and to extend the map across the sphere. The rescaled map may still have energy concentrating at one or more points, so one must rescale iteratively, eventually attaching an entire bubble tree onto the original domain, with the map well-behaved on each smooth component of the new domain.
In order to make this precise, define a stable map to be a pseudoholomorphic map from a Riemann surface with at worst nodal singularities, such that there are only finitely many automorphisms of the map. Concretely, this means the following. A smooth component of a nodal Riemann surface is said to be stable if there are at most finitely many automorphisms preserving its marked and nodal points. Then a stable map is a pseudoholomorphic map with at least one stable domain component, such that for each of the other domain components
• the map is nonconstant on that component, or
• that component is stable.
It is significant that the domain of a stable map need not be a stable curve. However, one can contract its unstable components (iteratively) to produce a stable curve, called the stabilization $\mathrm {st} (C)$ of the domain $C$.
The set of all stable maps from Riemann surfaces of genus $g$ with $n$ marked points forms a moduli space
${\overline {M}}_{g,n}^{J,\nu }(X,A).$
The topology is defined by declaring that a sequence of stable maps converges if and only if
• their (stabilized) domains converge in the Deligne–Mumford moduli space of curves ${\overline {M}}_{g,n}$,
• they converge uniformly in all derivatives on compact subsets away from the nodes, and
• the energy concentrating at any point equals the energy in the bubble tree attached at that point in the limit map.
The moduli space of stable maps is compact; that is, any sequence of stable maps converges to a stable map. To show this, one iteratively rescales the sequence of maps. At each iteration there is a new limit domain, possibly singular, with less energy concentration than in the previous iteration. At this step the symplectic form $\omega $ enters in a crucial way. The energy of any smooth map representing the homology class $B$ is bounded below by the symplectic area $\omega (B)$,
$\omega (B)\leq {\frac {1}{2}}\int |df|^{2},$
with equality if and only if the map is pseudoholomorphic. This bounds the energy captured in each iteration of the rescaling and thus implies that only finitely many rescalings are needed to capture all of the energy. In the end, the limit map on the new limit domain is stable.
The compactified space is again a smooth, oriented orbifold. Maps with nontrivial automorphisms correspond to points with isotropy in the orbifold.
The Gromov–Witten pseudocycle
To construct Gromov–Witten invariants, one pushes the moduli space of stable maps forward under the evaluation map
$M_{g,n}^{J,\nu }(X,A)\to {\overline {M}}_{g,n}\times X^{n},$
$((C,j),f,(x_{1},\ldots ,x_{n}))\mapsto (\mathrm {st} (C,j),f(x_{1}),\ldots ,f(x_{n}))$
to obtain, under suitable conditions, a rational homology class
$GW_{g,n}^{X,A}\in H_{d}({\overline {M}}_{g,n}\times X^{n},\mathbb {Q} ).$
Rational coefficients are necessary because the moduli space is an orbifold. The homology class defined by the evaluation map is independent of the choice of generic $\omega $-tame $J$ and perturbation $\nu $. It is called the Gromov–Witten (GW) invariant of $X$ for the given data $g$, $n$, and $A$. A cobordism argument can be used to show that this homology class is independent of the choice of $\omega $, up to isotopy. Thus Gromov–Witten invariants are invariants of symplectic isotopy classes of symplectic manifolds.
The "suitable conditions" are rather subtle, primarily because multiply covered maps (maps that factor through a branched covering of the domain) can form moduli spaces of larger dimension than expected.
The simplest way to handle this is to assume that the target manifold $X$ is semipositive or Fano in a certain sense. This assumption is chosen exactly so that the moduli space of multiply covered maps has codimension at least two in the space of non-multiply-covered maps. Then the image of the evaluation map forms a pseudocycle, which induces a well-defined homology class of the expected dimension.
Defining Gromov–Witten invariants without assuming some kind of semipositivity requires a difficult, technical construction known as the virtual moduli cycle.
References
• Dusa McDuff and Dietmar Salamon, J-Holomorphic Curves and Symplectic Topology, American Mathematical Society colloquium publications, 2004. ISBN 0-8218-3485-1.
• Kontsevich, Maxim (1995). "Enumeration of rational curves via torus actions". Progr. Math. 129: 335–368. MR 1363062.
Fixed‐step anonymous overtaking and catching‐up
Geir B. Asheim
Kuntal Banerjee
We investigate criteria for evaluating infinite utility streams that satisfy fixed‐step anonymity and include some notion of overtaking or catching‐up. We do so in a generalized setting that does not require us to specify the underlying finite‐dimensional criterion (e.g. utilitarianism or leximin). We present axiomatizations that rely on weaker axioms than those in the literature, and which in one case is new. We also provide a complete analysis of the relationships between the symmetric parts of these criteria and likewise for the asymmetric parts.
Geir B. Asheim & Kuntal Banerjee, 2010. "Fixed‐step anonymous overtaking and catching‐up," International Journal of Economic Theory, The International Society for Economic Theory, vol. 6(1), pages 149-165, March.
Handle: RePEc:bla:ijethy:v:6:y:2010:i:1:p:149-165
DOI: 10.1111/j.1742-7363.2009.00127.x
File URL: https://doi.org/10.1111/j.1742-7363.2009.00127.x
File URL: https://libkey.io/10.1111/j.1742-7363.2009.00127.x?utm_source=ideas
Geir B. Asheim & Kuntal Banerjee, 2009. "Fixed-step anonymous overtaking and catching-up," Working Papers 09001, Department of Economics, College of Business, Florida Atlantic University.
Fleurbaey, Marc & Michel, Philippe, 2003. "Intertemporal equity and the extension of the Ramsey criterion," Journal of Mathematical Economics, Elsevier, vol. 39(7), pages 777-802, September.
FLEURBAEY, Marc & MICHEL, Philippe, 1997. "Intertemporal equity and the extension of the Ramsey criterion," LIDAM Discussion Papers CORE 1997004, Université catholique de Louvain, Center for Operations Research and Econometrics (CORE).
M. Fleurbaey & P. Michel, 1997. "Intertemporal equity and the extension of the Ramsey criterion," THEMA Working Papers 97-11, THEMA (THéorie Economique, Modélisation et Applications), Université de Cergy-Pontoise.
Asheim, Geir B. & d'Aspremont, Claude & Banerjee, Kuntal, 2010. "Generalized time-invariant overtaking," Journal of Mathematical Economics, Elsevier, vol. 46(4), pages 519-533, July.
Asheim, Geir B. & d'Aspremont, Claude & Banerjee, Kuntal, 2008. "Generalized time-invariant overtaking," PIE/CIS Discussion Paper 394, Center for Intergenerational Studies, Institute of Economic Research, Hitotsubashi University.
ASHEIM, Geir B. & d'ASPREMONT, Claude & BANERJEE, Kuntal, 2010. "Generalized time-invariant overtaking," LIDAM Reprints CORE 2239, Université catholique de Louvain, Center for Operations Research and Econometrics (CORE).
Geir B, ASHEIM & Claude, D'ASPREMONT & Kuntal, BANERJEE, 2008. "Generalized time-invariant overtaking," Discussion Papers (ECON - Département des Sciences Economiques) 2008044, Université catholique de Louvain, Département des Sciences Economiques.
Geir B. Asheim & Claude d'Aspremont & Kuntal Banerjee, 2008. "Generalized time-invariant overtaking," Working Papers 08004, Department of Economics, College of Business, Florida Atlantic University.
Geir B. , ASHEIM & Claude, DASPREMONT & Kuntal, BANERJEE, 2008. "Generalized time-invariant overtaking," Discussion Papers (ECON - Département des Sciences Economiques) 2008039, Université catholique de Louvain, Département des Sciences Economiques.
ASHEIM, Geir B. & d'ASPREMONT, Claude & BANERJEE, Kuntal, 2008. "Generalized time-invariant overtaking," LIDAM Discussion Papers CORE 2008065, Université catholique de Louvain, Center for Operations Research and Econometrics (CORE).
Toyotaka Sakai, 2010. "Intergenerational equity and an explicit construction of welfare criteria," Social Choice and Welfare, Springer;The Society for Social Choice and Welfare, vol. 35(3), pages 393-414, September.
Sakai, Toyotaka, 2008. "Intergenerational equity and an explicit construction of welfare criteria," PIE/CIS Discussion Paper 395, Center for Intergenerational Studies, Institute of Economic Research, Hitotsubashi University.
Claude, d'ASPREMONT, 2005. "Formal welfarism and intergenerational equity," Discussion Papers (ECON - Département des Sciences Economiques) 2005051, Université catholique de Louvain, Département des Sciences Economiques.
d'ASPREMONT, Claude, 2005. "Formal welfarism and intergenerational equity," LIDAM Discussion Papers CORE 2005075, Université catholique de Louvain, Center for Operations Research and Econometrics (CORE).
d'ASPREMONT, Claude, 2007. "Formal welfarism and intergenerational equity," LIDAM Reprints CORE 2047, Université catholique de Louvain, Center for Operations Research and Econometrics (CORE).
Claude D'Aspremont & Louis Gevers, 1977. "Equity and the Informational Basis of Collective Choice," Review of Economic Studies, Oxford University Press, vol. 44(2), pages 199-209.
d'ASPREMONT, Claude & GEVERS, Louis, 1977. "Equity and the informational basis of collective choice," LIDAM Reprints CORE 350, Université catholique de Louvain, Center for Operations Research and Econometrics (CORE).
Lauwers, Luc, 2010. "Ordering infinite utility streams comes at the cost of a non-Ramsey set," Journal of Mathematical Economics, Elsevier, vol. 46(1), pages 32-37, January.
Luc LAUWERS, 2009. "Ordering infinite utility streams comes at the cost of a non-Ramsey set," Working Papers of Department of Economics, Leuven ces09.05, KU Leuven, Faculty of Economics and Business (FEB), Department of Economics, Leuven.
Tjalling C. Koopmans, 1959. "Stationary Ordinal Utility and Impatience," Cowles Foundation Discussion Papers 81, Cowles Foundation for Research in Economics, Yale University.
Kuntal Banerjee, 2006. "On the Extension of the Utilitarian and Suppes–Sen Social Welfare Relations to Infinite Utility Streams," Social Choice and Welfare, Springer;The Society for Social Choice and Welfare, vol. 27(2), pages 327-339, October.
Geir Asheim & Bertil Tungodden, 2004. "Resolving distributional conflicts between generations," Economic Theory, Springer;Society for the Advancement of Economic Theory (SAET), vol. 24(1), pages 221-230, July.
Luc Lauwers, 1996. "Rawlsian equity and generalised utilitarianism with an infinite population (*)," Economic Theory, Springer;Society for the Advancement of Economic Theory (SAET), vol. 9(1), pages 143-150.
Van Liedekerke, Luc & Lauwers, Luc, 1997. "Sacrificing the Patrol: Utilitarianism, Future Generations and Infinity," Economics and Philosophy, Cambridge University Press, vol. 13(2), pages 159-174, October.
Basu, Kaushik & Mitra, Tapan, 2005. "On the Existence of Paretian Social Welfare Relations for Infinite Utility Streams with Extended Anonymity," Working Papers 05-06, Cornell University, Center for Analytic Economics.
Claude d'Aspremont, 2007. "Formal Welfarism and Intergenerational Equity," International Economic Association Series, in: John Roemer & Kotaro Suzumura (ed.), Intergenerational Equity and Sustainability, chapter 8, pages 113-130, Palgrave Macmillan.
Bossert, Walter & Sprumont, Yves & Suzumura, Kotaro, 2007. "Ordering infinite utility streams," Journal of Economic Theory, Elsevier, vol. 135(1), pages 579-589, July.
Kohei Kamaga & Takashi Kojima, 2010. "On the leximin and utilitarian overtaking criteria with extended anonymity," Social Choice and Welfare, Springer;The Society for Social Choice and Welfare, vol. 35(3), pages 377-392, September.
Kamaga, Kohei & Kojima, Takashi, 2008. "On the leximin and utilitarian overtaking criteria with extended anonymity," PIE/CIS Discussion Paper 392, Center for Intergenerational Studies, Institute of Economic Research, Hitotsubashi University.
Kamaga, Kohei & Kojima, Takashi, 2008. "Q-anonymous social welfare relations on infinite utility streams," PIE/CIS Discussion Paper 391, Center for Intergenerational Studies, Institute of Economic Research, Hitotsubashi University.
Kohei Kamaga & Takashi Kojima, 2009. "$${\mathcal{Q}}$$ -anonymous social welfare relations on infinite utility streams," Social Choice and Welfare, Springer;The Society for Social Choice and Welfare, vol. 33(3), pages 405-413, September.
Hiroshi Atsumi, 1965. "Neoclassical Growth and the Efficient Program of Capital Accumulation," Review of Economic Studies, Oxford University Press, vol. 32(2), pages 127-136.
, R., 2007. "Can intergenerational equity be operationalized?," Theoretical Economics, Econometric Society, vol. 2(2), June.
Basu, Kaushik & Mitra, Tapan, 2007. "Utilitarianism for infinite utility streams: A new welfare criterion and its axiomatic characterization," Journal of Economic Theory, Elsevier, vol. 133(1), pages 350-373, March.
Basu, Kaushik & Mitra, Tapan, 2003. "Utilitarianism for Infinite Utility Streams: A New Welfare Criterion and Its Axiomatic Characterization," Working Papers 03-05, Cornell University, Center for Analytic Economics.
Svensson, Lars-Gunnar, 1980. "Equity among Generations," Econometrica, Econometric Society, vol. 48(5), pages 1251-1256, July.
Cited by:
Michele Lombardi & Kaname Miyagishima & Roberto Veneziani, 2016. "Liberal Egalitarianism and the Harm Principle," Economic Journal, Royal Economic Society, vol. 126(597), pages 2173-2196, November.
Michele Lombardi & Roberto Veneziani, 2009. "Liberal Egalitarianism and the Harm Principle," Global COE Hi-Stat Discussion Paper Series gd09-078, Institute of Economic Research, Hitotsubashi University.
Michele Lombardi & Roberto Veneziani, 2009. "Liberal Egalitarianism and the Harm Principle," Working Papers 649, Queen Mary University of London, School of Economics and Finance.
Lombardi, Michele & Miyagishima, Kaname & Veneziani, Roberto, 2013. "Liberal Egalitarianism and the Harm Principle," MPRA Paper 48458, University Library of Munich, Germany.
Michele Lombardi & Kahame Miyagishima & Roberto Veneziani, 2013. "Liberal Egalitarianism and the Harm Principle," UMASS Amherst Economics Working Papers 2013-07, University of Massachusetts Amherst, Department of Economics.
Jonsson, Adam & Voorneveld, Mark, 2018. "The limit of discounted utilitarianism," Theoretical Economics, Econometric Society, vol. 13(1), January.
Jonsson, Adam & Voorneveld, Mark, 2014. "The limit of discounted utilitarianism," SSE/EFI Working Paper Series in Economics and Finance 748, Stockholm School of Economics, revised 01 Feb 2017.
Luc Lauwers, 2016. "Intergenerational Equity, Efficiency, and Constructibility," Studies in Economic Theory, in: Graciela Chichilnisky & Armon Rezai (ed.), The Economics of the Global Environment, pages 191-206, Springer.
Luc Lauwers, 2012. "Intergenerational equity, efficiency, and constructibility," Economic Theory, Springer;Society for the Advancement of Economic Theory (SAET), vol. 49(2), pages 227-242, February.
Luc LAUWERS, 2010. "Intergenerational equity, efficiency and constructability," Working Papers of Department of Economics, Leuven ces10.22, KU Leuven, Faculty of Economics and Business (FEB), Department of Economics, Leuven.
Marco Mariotti & Roberto Veneziani, 2018. "Opportunities as Chances: Maximising the Probability that Everybody Succeeds," Economic Journal, Royal Economic Society, vol. 128(611), pages 1609-1633, June.
Marco, Mariotti & Roberto, Veneziani, 2012. "Opportunities as chances: maximising the probability that everybody succeeds," MPRA Paper 41884, University Library of Munich, Germany.
Marco Mariotti & Roberto Veneziani, 2012. "Opportunities as chances: maximising the probability that everybody succeeds," UMASS Amherst Economics Working Papers 2012-09, University of Massachusetts Amherst, Department of Economics.
Geir Asheim & Stéphane Zuber, 2013. "A complete and strongly anonymous leximin relation on infinite streams," Social Choice and Welfare, Springer;The Society for Social Choice and Welfare, vol. 41(4), pages 819-834, October.
Geir B. Asheim & Stéphane Zuber, 2011. "A Complete and Strongly Anonymous Leximin Relation on Infinite Streams," CESifo Working Paper Series 3578, CESifo.
Geir B. Asheim & Stéphane Zuber, 2013. "A complete and strongly anonymous leximin relation on infinite streams," Post-Print hal-00979780, HAL.
Geir B. Asheim & Stéphane Zuber, 2013. "A complete and strongly anonymous leximin relation on infinite streams," Université Paris1 Panthéon-Sorbonne (Post-Print and Working Papers) hal-00979780, HAL.
Adachi, Tsuyoshi & Cato, Susumu & Kamaga, Kohei, 2014. "Extended anonymity and Paretian relations on infinite utility streams," Mathematical Social Sciences, Elsevier, vol. 72(C), pages 24-32.
Geir B. Asheim, 2014. "Equitable intergenerational preferences and sustainability," Chapters, in: Giles Atkinson & Simon Dietz & Eric Neumayer & Matthew Agarwala (ed.), Handbook of Sustainable Development, chapter 8, pages 125-139, Edward Elgar Publishing.
Sakai, Toyotaka, 2010. "A characterization and an impossibility of finite length anonymity for infinite generations," Journal of Mathematical Economics, Elsevier, vol. 46(5), pages 877-883, September.
Alcantud, José Carlos R. & García-Sanz, María D., 2009. "A comment on "Intergenerational equity: sup, inf, lim sup, and lim inf"," MPRA Paper 14763, University Library of Munich, Germany.
Kohei Kamaga, 2016. "Infinite-horizon social evaluation with variable population size," Social Choice and Welfare, Springer;The Society for Social Choice and Welfare, vol. 47(1), pages 207-232, June.
Marcus Pivato, 2014. "Additive representation of separable preferences over infinite products," Theory and Decision, Springer, vol. 77(1), pages 31-83, June.
Pivato, Marcus, 2011. "Additive representation of separable preferences over infinite products," MPRA Paper 28262, University Library of Munich, Germany.
Marcus Pivato, 2014. "Additive representation of separable preferences over infinite products," Post-Print hal-02979672, HAL.
José Carlos R. Alcantud & María D. García-Sanz, 2013. "Evaluations of Infinite Utility Streams: Pareto Efficient and Egalitarian Axiomatics," Metroeconomica, Wiley Blackwell, vol. 64(3), pages 432-447, July.
Alcantud, José Carlos R. & García-Sanz, María D., 2010. "Evaluations of infinite utility streams: Pareto-efficient and egalitarian axiomatics," MPRA Paper 20133, University Library of Munich, Germany.
Susumu Cato, 2009. "Characterizing the Nash social welfare relation for infinite utility streams: a note," Economics Bulletin, AccessEcon, vol. 29(3), pages 2372-2379.
Toyotaka Sakai, 2016. "Limit representations of intergenerational equity," Social Choice and Welfare, Springer;The Society for Social Choice and Welfare, vol. 47(2), pages 481-500, August.
Mariotti, Marco & Veneziani, Roberto, 2012. "Allocating chances of success in finite and infinite societies: The utilitarian criterion," Journal of Mathematical Economics, Elsevier, vol. 48(4), pages 226-236.
Christopher Chambers, 2009. "Intergenerational equity: sup, inf, lim sup, and lim inf," Social Choice and Welfare, Springer;The Society for Social Choice and Welfare, vol. 32(2), pages 243-252, February.
Geir B. Asheim & Tapan Mitra & Bertil Tungodden, 2016. "Sustainable Recursive Social Welfare Functions," Studies in Economic Theory, in: Graciela Chichilnisky & Armon Rezai (ed.), The Economics of the Global Environment, pages 165-190, Springer.
Geir Asheim & Tapan Mitra & Bertil Tungodden, 2012. "Sustainable recursive social welfare functions," Economic Theory, Springer;Society for the Advancement of Economic Theory (SAET), vol. 49(2), pages 267-292, February.
Asheim, Geir B. & Mitra, Tapan & Tungodden, Bertil, 2006. "Sustainable recursive social welfare functions," Memorandum 18/2006, Oslo University, Department of Economics.
Zuber, Stéphane & Asheim, Geir B., 2012. "Justifying social discounting: The rank-discounted utilitarian approach," Journal of Economic Theory, Elsevier, vol. 147(4), pages 1572-1601.
Stéphane Zuber & Geir B. Asheim, 2010. "Justifying Social Discounting: The Rank-Discounted Utilitarian Approach," CESifo Working Paper Series 3192, CESifo.
ZUBER, Stéphane, 2010. "Justifying social discounting: the rank-discounted utilitarian approach," LIDAM Discussion Papers CORE 2010036, Université catholique de Louvain, Center for Operations Research and Econometrics (CORE).
Alvarez-Cuadrado, Francisco & Van Long, Ngo, 2009. "A mixed Bentham-Rawls criterion for intergenerational equity: Theory and implications," Journal of Environmental Economics and Management, Elsevier, vol. 58(2), pages 154-168, September.
ALVAREZ-CUADRADO, Francisco & LONG, Ngo Van, 2007. "A Mixed Bentham-Rawls Criterion for Intergenerational Equity : Theory and Implications," Cahiers de recherche 06-2007, Centre interuniversitaire de recherche en économie quantitative, CIREQ.
Francisco Alvarez-Cuadrado & Ngo Van Long, 2007. "A Mixed Bentham-Rawls Criterion For Intergenerational Equity: Theory And Implications," Departmental Working Papers 2007-03, McGill University, Department of Economics.
D63 - Microeconomics - - Welfare Economics - - - Equity, Justice, Inequality, and Other Normative Criteria and Measurement
D71 - Microeconomics - - Analysis of Collective Decision-Making - - - Social Choice; Clubs; Committees; Associations
All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:bla:ijethy:v:6:y:2010:i:1:p:149-165. See general information about how to correct material in RePEc.
For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Wiley Content Delivery (email available below). General contact details of provider: http://www.blackwellpublishing.com/journal.asp?ref=1742-7355 .
What is $0^{(5^{6431564})}$?
Zero raised to any positive power is zero, so our answer is $\boxed{0}.$
\begin{document}
\allowdisplaybreaks
\begin{abstract}
Let $\mathfrak{g}$ be a complex semisimple Lie algebra. We give a description of characters of irreducible Whittaker modules for $\mathfrak{g}$ with any infinitesimal character, along with a Kazhdan-Lusztig algorithm for computing them. This generalizes Mili\v ci\'c-Soergel's and Romanov's results for integral infinitesimal characters. As a special case, we recover the non-integral Kazhdan-Lusztig conjecture for Verma modules. \end{abstract}
\maketitle \tableofcontents
\section{Introduction}\label{sec:intro}
Let $\mathfrak{g}$ be a complex semisimple Lie algebra. Let $\mathfrak{n} \subset \mathfrak{g}$ be a maximal nilpotent subalgebra, and let $\mathcal{Z}(\mathfrak{g})$ denote the center of the enveloping algebra $\mathcal{U}(\mathfrak{g})$ of $\mathfrak{g}$. This paper studies \textit{Whittaker modules}, which are finitely generated $\mathfrak{g}$-modules that are locally finite over both $\mathfrak{n}$ and $\mathcal{Z}(\mathfrak{g})$. They originate from the study of Whittaker functionals for reductive groups, which were first considered by Jacquet \cite{Jacquet:Whittaker} to study principal series representations of Chevalley groups. In the context of group representations, a Whittaker functional is, roughly speaking, a linear functional on a smooth representation of a reductive group $G$ (over a local field) that transforms according to a one dimensional representation of a unipotent subgroup $N$. The Whittaker modules we consider are natural analogues (for Lie algebras) of representations generated by a Whittaker functional.
Kostant first studied Whittaker modules in his beautiful paper \cite{Kostant:Whittaker}. He showed that when $\eta$ is non-degenerate (see \textsection\ref{subsec:Wh_prelim} for the definition of non-degeneracy) cyclic Whittaker modules (modules generated by a Whittaker functional) that admit infinitesimal characters are irreducible. Later Mili\v ci\'c-Soergel \cite[\textsection 5]{Milicic-Soergel:Whittaker_geometric} showed that the category of Whittaker modules $\mathcal{N}_\eta$ corresponding to a non-degenerate character $\eta$ of $\mathfrak{n}$ (defined in \textsection\ref{subsec:Wh_prelim}) is equivalent to the category of finite dimensional $\mathcal{Z}(\mathfrak{g})$-modules, and the subcategory $\mathcal{N}_{\theta,\eta}$ of modules on which $\mathcal{Z}(\mathfrak{g})$ acts by a fixed infinitesimal character $\chi_\theta$ (defined at the beginning of \textsection\ref{sec:prelim}) is semisimple. It is then natural to ask for a description of the category $\mathcal{N}_{\theta,\eta}$ in the degenerate case and, in particular, a description of composition series of cyclic modules. Towards this direction, McDowell \cite{McDowell:Whittaker} constructed and studied \textit{standard Whittaker modules} (Definition \ref{def:std_Wh_mods}), which are analogs of Verma modules. Using algebraic methods, Mili\v ci\'c and Soergel later showed that cyclic modules are filtered by standard modules (in fact a direct sum of standard modules in nice cases), and composition series of standard modules are described when the infinitesimal character $\chi_\theta$ is integral and regular \cite{Milicic-Soergel:Whittaker_algebraic}. Here integrality is a usual assumption and is the ``basic case'' compared to general infinitesimal characters.
It was observed by Mili\v ci\'c-Soergel \cite{Milicic-Soergel:Whittaker_geometric} that Kostant's result in the non-degenerate case has an alternative proof based on the localization theory of Beilinson-Bernstein \cite{Beilinson-Bernstein:Localization}, and the degenerate case should be also solvable using localization, similar to the solution of the Kazhdan-Lusztig conjecture for Verma modules. However, the latter is based on the decomposition theorem for perverse sheaves (or equivalently regular holonomic $\mathcal{D}$-modules) due to Beilinson-Bernstein-Deligne \cite{Beilinson-Bernstein-Deligne:Decomposition} which does not apply to localizations of Whittaker modules \--- they have irregular singularities. Therefore, the argument for Verma modules was not translated to Whittaker modules until a decomposition theorem for general holonomic $\mathcal{D}$-modules was proven by Mochizuki \cite{Mochizuki:Decomp}. Based on Mochizuki's result, Romanov proved an algorithm for computing composition series of standard Whittaker modules with integral regular infinitesimal characters \cite{Romanov:Whittaker}. Along the way, she developed a character theory for $\mathcal{N}_{\theta,\eta}$ and computed characters of standard modules.
Despite the success of the geometric methods, the results of Mili\v ci\'c-Soergel and Romanov were not extended beyond integral regular infinitesimal characters. This paper fills this gap. Namely, we prove a Kazhdan-Lusztig algorithm for Whittaker modules with regular infinitesimal characters, and deduce from the algorithm a character formula for irreducible Whittaker modules for arbitrary infinitesimal characters. In particular, we recover the non-integral Kazhdan-Lusztig conjecture for Verma modules.
\subsection{The character formula}\label{subsec:main_results}
To state the character formula, let us introduce more notations. We choose a Cartan subalgebra $\mathfrak{h}$ of $\mathfrak{g}$ normalizing $\mathfrak{n}$ (so that $\mathfrak{b} = \mathfrak{h} + \mathfrak{n}$ is a Borel subalgebra). Let $G$ be a connected algebraic group over $\mathbb{C}$ with Lie algebra $\mathfrak{g}$ and write $N$ for the unipotent subgroup with Lie algebra $\mathfrak{n}$. We write $\Sigma \supset \Sigma^+ \supset \Pi$ and $W$ for the set of roots of $(\mathfrak{g},\mathfrak{h})$, the set of positive roots determined by $\mathfrak{n}$, the set of simple roots, and the Weyl group of $\Sigma$, respectively. We set \begin{equation}\label{eqn:defn_of_Theta_intro}
\Theta = \{ \alpha \in \Pi \mid \eta \text{ is nonzero on the $\alpha$-root space in } \mathfrak{n} \} \end{equation} and let $\lambda \in \mathfrak{h}^*$. We use a subscript $\Theta$ (resp. $\lambda$) on objects to denote corresponding subobjects that are defined by $\Theta$ (resp. integral to $\lambda$). So $\Sigma_\Theta$ is the subsystem of $\Sigma$ generated by $\Theta$; the parabolic subgroup $W_\Theta$ is the Weyl group of $\Sigma_\Theta$, embedded as a subgroup of $W$; the integral root system $\Sigma_\lambda$ consists of those roots $\alpha \in \Sigma$ so that $\alpha^\vee(\lambda) \in \mathbb{Z}$, where $\alpha^\vee$ is the coroot of $\alpha$; the set of positive integral roots $\Sigma_\lambda^+$ is defined to be $\Sigma_\lambda \cap \Sigma^+$ and $\Pi_\lambda \subseteq \Sigma_\lambda^+$ is the corresponding set of simple roots (which may not be simple in $\Sigma^+$); the integral Weyl group $W_\lambda$ is the Weyl group of $\Sigma_\lambda$, which can be embedded in $W$ as $\{w \in W \mid w \lambda - \lambda \in \mathbb{Z} \cdot \Sigma \}$. We write $\theta = W \cdot \lambda$ for the Weyl group orbit of $\lambda$.
Let us fix a $\lambda$ that is antidominant regular with respect to $\Sigma^+$. This means $\alpha^\vee(\lambda)$ is not a non-negative integer for all $\alpha \in \Sigma^+$. The Grothendieck group of the category $\mathcal{N}_{\theta,\eta}$ has two natural bases: one is given by McDowell's standard Whittaker modules $M(w^C \lambda, \eta)$, and the other is given by irreducible quotients $L(w^C \lambda,\eta)$ of the standard modules, both labeled by right $W_\Theta$-cosets $C$ in $W$. Here, $w^C$ denotes the unique longest element in $C$ under the Bruhat order. Romanov defined a character map $\ch$ on objects of $\mathcal{N}_{\theta,\eta}$ that factors through the Grothendieck group \cite[\textsection 2.2]{Romanov:Whittaker}. We aim to express the character of $L(w^C \lambda,\eta)$ in terms of the characters of standard modules $M(w^D \lambda,\eta)$ (the latter are computed by Romanov). These facts about Whittaker modules will be recalled in \textsection\ref{subsec:Wh_prelim}.
The precise expression of the character involves combinatorial data extracted from double cosets $W_\Theta \backslash W / W_\lambda$. Each double coset $W_\Theta u W_\lambda$ contains a unique shortest element $u$ with respect to Bruhat order (Corollary \ref{thm:cross-section_db_coset}). We can then take the intersections of $u W_\lambda$ with various right $W_\Theta$-cosets in $W_\Theta u W_\lambda$. This produces a partition of $u W_\lambda$. Left-translating back into $W_\lambda$, we obtain a partition of $W_\lambda$, which coincides with the partition given by right $W_{\lambda,\Theta(u,\lambda)}$-cosets of $W_\lambda$ (Proposition \ref{lem:int_Whittaker_model}). Here, $W_{\lambda,\Theta(u,\lambda)}$ is a parabolic subgroup of $W_\lambda$ corresponding to the subset of simple roots $\Theta(u,\lambda) = u^{-1} \Sigma_\Theta \cap \Pi_\lambda \subseteq \Pi_\lambda$. We thus obtain a map from the set of right $W_\Theta$-cosets in $W_\Theta u W_\lambda$ to the set of right $W_{\lambda,\Theta(u,\lambda)}$-cosets in $W_\lambda$, i.e. a map
\begin{equation}\label{eqn:defn_of_(-)|_lambda_intro}
(-)|_\lambda: W_\Theta \backslash W_\Theta u W_\lambda \to W_{\lambda,\Theta(u,\lambda)} \backslash W_\lambda \end{equation} (Notation \ref{not:right_coset_partition}). Recall that there is a partial order $\leqslant$ on $W_\Theta \backslash W$ inherited from the restriction of Bruhat order to the set of the longest element in each coset (see \textsection\ref{subsec:WTheta_prelim}). We denote the partial order on $W_{\lambda,\Theta(u,\lambda)} \backslash W_\lambda$ by $\leqslant_{u,\lambda}$.
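To illustrate the map $(-)|_\lambda$ on a small configuration (this toy example is ours and is not the running example of \textsection\ref{sec:examples}, which is in type $A_3$), suppose $W$ is of type $A_2$ with $\Pi = \{\alpha,\beta\}$, $\Theta = \{\alpha\}$, and $\lambda$ is such that $\Sigma_\lambda = \{\pm\beta\}$, so that $W_\Theta = \{1, s_\alpha\}$, $W_\lambda = \{1, s_\beta\}$, and $\Pi_\lambda = \{\beta\}$. There are two double cosets
\begin{equation*}
W_\Theta \, 1 \, W_\lambda = \{1,\, s_\alpha,\, s_\beta,\, s_\alpha s_\beta\}
\quad\text{and}\quad
W_\Theta \, s_\beta s_\alpha \, W_\lambda = \{s_\beta s_\alpha,\, s_\alpha s_\beta s_\alpha\},
\end{equation*}
with shortest elements $u = 1$ and $u = s_\beta s_\alpha$ respectively. For $u = 1$ we have $\Theta(1,\lambda) = \Sigma_\Theta \cap \Pi_\lambda = \varnothing$, and the two right cosets $W_\Theta$ and $W_\Theta s_\beta$ contained in the first double coset are sent by $(-)|_\lambda$ to the singletons $\{1\}$ and $\{s_\beta\}$ in $W_\lambda$. For $u = s_\beta s_\alpha$ we have $u^{-1}\Sigma_\Theta = \{\pm\beta\}$, so $\Theta(u,\lambda) = \{\beta\}$, $W_{\lambda,\Theta(u,\lambda)} = W_\lambda$, and the single right coset $W_\Theta s_\beta s_\alpha$ is sent to the unique coset $W_\lambda$ itself.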
The double cosets reflect the block decomposition of $\mathcal{N}_{\theta,\eta}$ (here a ``block'' means an indecomposable direct summand of $\mathcal{N}_{\theta,\eta}$). On the level of character formula, $\ch M(w^D \lambda,\eta)$ ($\ch$ denotes the character map) appears in $\ch L(w^C \lambda,\eta)$ only if $D$ and $C$ are in the same double coset $W_\Theta u W_\lambda$ and $D|_\lambda \leqslant_{u,\lambda} C|_\lambda$ (for which we will simply write $D \leqslant_{u,\lambda} C$). The precise coefficient of $\ch M(w^D \lambda,\eta)$ is described by Whittaker Kazhdan-Lusztig polynomials. For a triple $(W,\Pi,\Theta)$, Whittaker Kazhdan-Lusztig polynomials are polynomials $P_{CD} \in \mathbb{Z}[q]$ labeled by pairs $(C,D)$ of right $W_\Theta$-cosets with $D \leqslant C$ (Definition/Theorem \ref{def:parabolic_KL_poly_Theta}). These polynomials compute (at $q = -1$) the character formula of irreducible Whittaker modules for integral infinitesimal characters. Applied to the triple $(W_\lambda, \Pi_\lambda, \Theta(u,\lambda))$ and the pair $(C|_\lambda,D|_\lambda)$, we obtain polynomials $P_{CD}^{u,\lambda} = P_{C|_\lambda, D|_\lambda}^{u,\lambda}$ (Definition/Theorem \ref{def:parabolic_KL_poly}, or see \ref{enum:WKL_basis_expansion} and \ref{enum:WKL_basis_U} in \textsection\ref{subsec:geom_idea}).
\begin{theorem}[Character formula: regular case]\label{thm:multiplicity_intro}
Let $\lambda$ be antidominant regular. For any $C \in W_\Theta \backslash W$, let $W_\Theta u W_\lambda$ be the double coset containing $C$, where $u$ is the unique shortest element in this double coset. Then
\begin{equation*}
\ch L(w^C\lambda,\eta) = \ch M(w^C\lambda,\eta) +
\sum_{\substack{D \in W_\Theta \backslash W_\Theta u W_\lambda\\%
D <_{u,\lambda} C}}
P_{CD}^{u,\lambda}(-1) \ch M(w^D \lambda,\eta).
\end{equation*} \end{theorem}
This appears as Theorem \ref{thm:multiplicity} below. We also extend this to singular $\lambda$ in Theorem \ref{thm:multiplicity_singular}. In the special case $\eta = 0$, we recover the non-integral Kazhdan-Lusztig conjecture for Verma modules.
The above formula follows from an algorithm (so called Kazhdan-Lusztig algorithm), namely Theorem \ref{thm:KL_alg}. The proof of the algorithm is done by studying (weakly) equivariant $\mathcal{D}$-modules.
We postpone the statement of the algorithm to the next subsection.
\subsection{The Kazhdan-Lusztig algorithms and an outline of proof}\label{subsec:geom_idea}
Before discussing the extension to non-integral infinitesimal characters, let us first discuss Romanov's work in the integral case. Her argument is in the same spirit as the algorithm for highest weight modules, which we now recall. For the background on localization theory relevant to Whittaker modules, we refer the reader to \textsection\ref{subsec:geom_prelim}.
For a fixed integral regular infinitesimal character $\chi_\theta$, the Grothendieck group $K\mathcal{N}_{\theta,0}$ of the highest weight category $\mathcal{N}_{\theta,0}$ is free abelian with two natural bases labeled by $W$: one is given by irreducible objects $L_w$, and the other given by Verma modules $M_w$. The original Kazhdan-Lusztig conjecture is a description of the change of basis matrix. The strategy of the proof of the conjecture is to study $\mathcal{D}$-modules on the flag variety $X$ corresponding to highest weight modules. In more detail, recall that any infinitesimal character $\chi_\theta: \mathcal{Z}(\mathfrak{g}) \to \mathbb{C}$ is determined by a Weyl group orbit $\theta$ in $\mathfrak{h}^*$ (we recall the definition of $\chi_\theta$ in \textsection\ref{sec:prelim}). Let $\lambda \in \theta$ be an antidominant element with respect to roots in $\Sigma^+$. Beilinson-Bernstein's localization theory \cite{Beilinson-Bernstein:Localization} gives us an equivalence of categories \begin{equation*}
\mathcal{N}_{\theta,0} \cong \Mod_{coh}(\mathcal{D}_\lambda,N). \end{equation*} Here, $\mathcal{D}_\lambda$ is a certain twisted sheaf of differential operators on the flag variety $X$ of $\mathfrak{g}$, and the category $\Mod_{coh}(\mathcal{D}_\lambda,N)$ is the category of $N$-equivariant coherent $\mathcal{D}_\lambda$-modules on $X$. The images $\mathcal{L}_w$ and $\mathcal{M}_w$ of $L_w$ and $M_w$ have natural geometric meanings. We call $\mathcal{M}_w$ the \textit{costandard modules}. If we write $[\mathcal{L}_w]$ and $[\mathcal{M}_w]$ for their classes in the Grothendieck group $K \Mod_{coh}(\mathcal{D}_\lambda,N)$, the Kazhdan-Lusztig conjecture is now the problem of expressing $[\mathcal{L}_w]$ as a linear combination of $[\mathcal{M}_v]$ in the Grothendieck group.
To relate this problem to the combinatorics in \cite{Kazhdan-Lusztig:Hecke_Alg}, one aims to build a comparison map $\nu'$ that fits into the commutative diagram \begin{equation*}
\begin{tikzcd}
\Mod_{coh}(\mathcal{D}_\lambda,N) \ar[d, "{[-]}"'] \ar[r, "\nu'"]
& \mathcal{H} \ar[d, "q=-1"]\\
K\Mod_{coh}(\mathcal{D}_\lambda,N) \ar[r, "\cong"]
& \mathbb{Z}[W]
\end{tikzcd}. \end{equation*}
In this diagram, the objects $\mathcal{H}$ and $\mathbb{Z}[W]$ are the Hecke algebra and the group algebra of $W$, respectively, and the bottom map sends $[\mathcal{M}_w]$ to the basis in $\mathbb{Z}[W]$ labeled by $w$. Moreover, the right regular action of $\mathcal{H}$ on the top right corner should lift to an $\mathcal{H}$ ``action'' on the $\mathcal{M}_w$ and $\mathcal{L}_w$ in $\Mod_{coh}(\mathcal{D}_\lambda,N)$. Once this diagram is constructed, then $[\mathcal{L}_w] = \nu'(\mathcal{L}_w)|_{q=-1}$ by commutativity of the diagram, and $\nu'(\mathcal{L}_w)$ can be computed by studying the $\mathcal{H}$-action.
In further detail, recall that the Hecke algebra $\mathcal{H}$ has an underlying free $\mathbb{Z}[q^{\pm1}]$-module structure with two bases labeled by $W$: the defining basis $\{\delta_w\}$ and the Kazhdan-Lusztig basis $\{C_w\}$ \cite{Kazhdan-Lusztig:Hecke_Alg}. The Kazhdan-Lusztig basis is characterized by three conditions: \begin{enumerate}[label=(KL.\arabic*)]
\item the expansion of $C_w$ in terms of the $\delta_v$ basis elements involves only those with $v \leqslant w$, the coefficient of $\delta_w$ is $1$, and the coefficient of $\delta_v$ for $v < w$ is a polynomial $P_{wv}(q)$ with no constant term; \label{enum:KL_basis_expansion}
\item the product $C_w C_s$, where $s$ is a simple reflection so that $ws > w$, is a $\mathbb{Z}$-linear combination of $C_v$ with $v \leqslant ws$; \label{enum:KL_basis_U}
\item $C_s = \delta_s + q$ \end{enumerate} (after some normalizations, the first two conditions are (1.1.b) and (2.3.b) of \cite{Kazhdan-Lusztig:Hecke_Alg}, respectively). Here $<$ and $\leqslant$ denote the Bruhat order on $W$. These conditions inductively determine the Kazhdan-Lusztig basis and provide a recursive algorithm for computing it. The coefficients $P_{wv}$ of $\delta_v$ are the famous \textit{Kazhdan-Lusztig polynomials}. The Kazhdan-Lusztig conjecture predicts that the coefficients of the classes of Verma modules in the expansion of the classes of irreducible modules in the Grothendieck group are given by Kazhdan-Lusztig polynomials evaluated at $-1$ (or at $1$, depending on the normalization). In view of the above diagram, proving the conjecture amounts to constructing $\nu'$ so that $\nu'(\mathcal{M}_w) = \delta_w$ and $\nu'(\mathcal{L}_w) = C_w$.
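To make these conditions concrete, here is a small computation in type $A_2$ (an illustrative aside; we use the normalization above). Let $s,t$ denote the two simple reflections. Since lengths add, $\delta_s \delta_t = \delta_{st}$, and hence
\begin{equation*}
C_s C_t = (\delta_s + q)(\delta_t + q) = \delta_{st} + q\,\delta_s + q\,\delta_t + q^2\,\delta_1.
\end{equation*}
The right-hand side has leading term $\delta_{st}$ and lower-order coefficients in $q\mathbb{Z}[q]$, so it satisfies \ref{enum:KL_basis_expansion}, and one checks that it is the Kazhdan-Lusztig basis element $C_{st}$; in particular $P_{st,s} = P_{st,t} = q$ and $P_{st,1} = q^2$ in this normalization (these correspond to classical Kazhdan-Lusztig polynomials equal to $1$).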
To this end, we define the map $\nu'$ by sending a $\mathcal{D}_\lambda$-module $\mathcal{F}$ to a linear combination of $\delta_v$ where the coefficient of $\delta_v$ is the generating function (in variable $q$) of the pullback of $\mathbb{D} \mathcal{F}$ to the Schubert cell $C(v)$: \begin{equation*}
\nu'(\mathcal{F}) = \sum_{w \in W} \big( \chi_q i_w^! \mathbb{D} \mathcal{F} \big) \delta_w \end{equation*} (the map $\chi_q$ is defined in (\ref{eqn:defn_of_chi_q})). Here $\mathbb{D}$ is the duality functor of holonomic $\mathcal{D}$-modules. With this definition, the map $\nu'$ sends $\mathcal{M}_v$ to $\delta_v$, and $\nu'(\mathcal{L}_w)$ automatically satisfies condition \ref{enum:KL_basis_expansion} for support reasons. Moreover, multiplication by $C_s$ on $\delta_w$ for a simple reflection $s$ lifts on $\mathcal{M}_w$ to the ``push-pull'' operation along the natural map $X \to X_s$ to the type-$s$ partial flag variety (we call this operation the \textit{$U$-functor} since it agrees with the functor $U$ defined by Vogan \cite[Definition 3.8]{Vogan:irred_char_I}). The condition \ref{enum:KL_basis_U} is proven by an induction on $\ell(w)$ by showing the same lifting for irreducibles, using the Decomposition Theorem of Beilinson-Bernstein-Deligne \cite{Beilinson-Bernstein-Deligne:Decomposition} for regular holonomic $\mathcal{D}$-modules (or perverse sheaves). This proves $\nu'(\mathcal{L}_w) = C_w$ and hence the Kazhdan-Lusztig conjecture. A detailed argument following these lines can be found in Mili\v ci\'c's unpublished notes \cite[Chapter 5]{Milicic:Localization}. Since the character map on highest weight modules factors through the Grothendieck group, one can write down characters of irreducible modules in terms of characters of Verma modules, and the latter can be easily computed.
The argument we just described naturally extends to parabolic highest weight categories corresponding to a subset $\Theta$ of simple roots and with regular integral infinitesimal characters. Two bases of the Grothendieck group are now given by parabolic Verma modules and their irreducible quotients, both labeled by right $W_\Theta$-cosets. The map $\nu'$ is now defined by pulling back to orbits of a parabolic subgroup $P_\Theta$ of type $\Theta$, and the image of the comparison map $\nu'$ is now replaced by a smaller $\mathcal{H}$-module. The Kazhdan-Lusztig polynomials are then replaced by \textit{parabolic Kazhdan-Lusztig polynomials}, which form a subset of the ordinary Kazhdan-Lusztig polynomials.
In the case of Whittaker modules with integral regular infinitesimal characters, we still have two bases of the Grothendieck group labeled by right $W_\Theta$-cosets: the standard Whittaker modules defined by McDowell and their irreducible quotients. Localizations of Whittaker modules now land in the category $\Mod_{coh}(\mathcal{D}_\lambda,N,\eta)$ of \textit{twisted Harish-Chandra sheaves} (defined in \textsection\ref{subsec:geom_prelim}). By the work of Mili\v ci\'c-Soergel \cite{Milicic-Soergel:Whittaker_algebraic}, the category of Whittaker modules is equivalent to the highest weight category with a singular infinitesimal character. The latter is known to be Koszul dual to a parabolic highest weight category with an integral regular infinitesimal character by the work of Beilinson-Ginzburg-Soergel \cite{Beilinson-Ginzburg-Soergel:Koszul_duality}. Therefore, the Kazhdan-Lusztig polynomials of Whittaker modules (what Romanov called \textit{Whittaker Kazhdan-Lusztig polynomials}) are expected to be dual to parabolic Kazhdan-Lusztig polynomials. More precisely, if we define $\Theta$ as in (\ref{eqn:defn_of_Theta_intro}), then the Whittaker category $\mathcal{N}_{\theta,\eta}$ is expected to be dual to the parabolic highest weight category determined by $\Theta$. A starting point towards proving this would be a Kazhdan-Lusztig algorithm for Whittaker modules. However, the $\mathcal{D}$-modules in this situation are no longer regular holonomic (merely holonomic). Therefore a decomposition theorem for general holonomic modules is needed in order for the same argument to work. This is proven by Mochizuki \cite{Mochizuki:Decomp}. Romanov then adapted the strategy for highest weight modules to the case of Whittaker modules in her thesis (later published as \cite{Romanov:Whittaker}) and obtained a Kazhdan-Lusztig algorithm. Together with the character theory she developed, her work implies a character formula for irreducible Whittaker modules. The comparison map $\nu'$ in the highest weight setting now becomes a map \begin{equation*}
\Mod_{coh}(\mathcal{D}_\lambda,N,\eta) \xrightarrow{\nu'} \mathcal{H}_\Theta \end{equation*} defined by pulling back $\mathbb{D}\mathcal{F}$ to Schubert cells of the form $C(w^C)$, where $\mathcal{H}_\Theta$ is an $\mathcal{H}$-module whose underlying $\mathbb{Z}[q^{\pm1}]$-module structure is free with basis elements labeled by $W_\Theta \backslash W$. This $\mathcal{H}$-module structure defines a Kazhdan-Lusztig basis of $\mathcal{H}_\Theta$, whose elements coincide with the images of irreducible $\mathcal{D}$-modules under $\nu'$.
The work of this paper generalizes Romanov's algorithm to arbitrary infinitesimal characters. There are two extra complications compared to Romanov's situation. First, although (co)standard and irreducible Whittaker modules are still parameterized by $W_\Theta \backslash W$, now our category is a direct sum of smaller blocks, and different blocks have different sizes. On the other hand, the parabolic highest weight category can have fewer blocks, so the duality mentioned in the preceding paragraph fails. Nevertheless, one can expect the blocks to be parameterized by Weyl group data involving both $W_\Theta$ and $W_\lambda$. Indeed, it turns out that blocks are parameterized by double cosets $W_\Theta \backslash W / W_\lambda$, and the polynomials for each block turn out to be the same as (integral) Whittaker Kazhdan-Lusztig polynomials related to the integral Weyl group $W_\lambda$.
The second complication is that the ``push-pull'' operation along $X \to X_s$ does not exist when $\lambda$ is non-integral to $s$ \--- there is no sheaf of twisted differential operators on $X_s$ that pulls back to $\mathcal{D}_\lambda$. As a result, induction on $\ell(w)$ cannot proceed as before. To remedy this, we use the \textit{intertwining functor} $I_s$ (defined in \textsection\ref{sec:geom}) for non-integral $s$ in place of the $U$-functor. It is an equivalence of categories between $\mathcal{D}_\lambda$-modules and $\mathcal{D}_{s\lambda}$-modules. This allows us to increase $\ell(w)$ and retain induction hypotheses. This idea of proof was suggested to the author by Mili\v ci\'c.
We can now state our algorithm. We refer to \textsection\ref{subsec:KL_poly} for the precise definitions of the Hecke-theoretic objects appearing below. We fix a character $\eta: \mathfrak{n} \to \mathbb{C}$ and define a subset $\Theta$ of simple roots from $\eta$ as in (\ref{eqn:defn_of_Theta_intro}). For each $\lambda$ (not necessarily antidominant) and each $C \in W_\Theta \backslash W$, we write $\mathcal{M}(w^C,\lambda,\eta)$ for the costandard $\mathcal{D}$-module and $\mathcal{L}(w^C,\lambda,\eta)$ for its irreducible quotient (these are defined in \textsection \ref{subsec:geom_prelim}). In the case where $\lambda$ is antidominant regular, they are localizations of the standard Whittaker module $M(w^C \lambda,\eta)$ and the irreducible Whittaker module $L(w^C \lambda,\eta)$, respectively. We let $\mathcal{H}_\Theta$ be the free $\mathbb{Z}[q^{\pm1}]$-module with basis $\{\delta_C\}_{C \in W_\Theta \backslash W}$ and define a map $\nu'$ similar to the highest weight case \begin{equation*}
\nu': \Mod(\mathcal{D}_\lambda,N,\eta) \to \mathcal{H}_\Theta,\quad
\mathcal{F} \mapsto \sum_{C \in W_\Theta \backslash W} \big( \chi_q i_{w^C}^! \mathbb{D} \mathcal{F} \big) \delta_C \end{equation*} ($\chi_q$ is defined in (\ref{eqn:defn_of_chi_q}); in the body of the paper we work with $\nu = \nu' \circ \mathbb{D}$ (\ref{eqn:defn_of_nu}) instead of $\nu'$ for technical simplicity). It fits into the commutative diagram \begin{equation*}
\begin{tikzcd}
\Mod_{coh}(\mathcal{D}_\lambda,N,\eta) \ar[d, "{[-]}"'] \ar[r, "\nu'"]
& \mathcal{H}_\Theta \ar[d, "q=-1"] \ar[r, "(-)|_\lambda"]
& {\displaystyle \bigoplus_{W_\Theta u W_\lambda} \mathcal{H}_{\Theta(u,\lambda)} } \ar[d, "q=-1"]\\
K\Mod_{coh}(\mathcal{D}_\lambda,N,\eta) \ar[r, "\cong"]
& \mathbb{Z}[W_\Theta \backslash W] \ar[r, "(-)|_\lambda"]
& {\displaystyle \bigoplus_{W_\Theta u W_\lambda} \mathbb{Z}[W_{\lambda,\Theta(u,\lambda)} \backslash W_\lambda] }
\end{tikzcd}. \end{equation*}
Here $\mathbb{Z}[W_\Theta \backslash W]$ is the $\mathbb{Z}$-module with basis $\{\delta_C\}_{C \in W_\Theta \backslash W}$, and the first horizontal map at the bottom sends $[\mathcal{M}(w^C,\lambda,\eta)]$ to $\delta_C$. The modules $\mathcal{H}_{\Theta(u,\lambda)}$ and $\mathbb{Z}[W_{\lambda,\Theta(u,\lambda)} \backslash W_\lambda]$ are defined similarly but their basis elements are instead labeled by $W_{\lambda,\Theta(u,\lambda)} \backslash W_\lambda$. The map $(-)|_\lambda$ is defined on basis elements analogous to (\ref{eqn:defn_of_(-)|_lambda_intro}). Each $\mathcal{H}_{\Theta(u,\lambda)}$ is a module over the Hecke algebra $\mathcal{H}_\lambda = \mathcal{H}(W_\lambda)$ of the integral Weyl group $W_\lambda$ (as in Romanov's work in the integral case). Thus each $\alpha \in \Pi_\lambda$ defines an operator $T_\alpha^{u,\lambda}$ on $\mathcal{H}_{\Theta(u,\lambda)}$ representing the multiplication of the Kazhdan-Lusztig basis element $C_{\lambda,s_\alpha} \in \mathcal{H}_\lambda$ corresponding to the simple reflection $s_\alpha$. Romanov's main result \cite[Theorem 11]{Romanov:Whittaker}, interpreted combinatorially and applied to $\mathcal{H}_{\Theta(u,\lambda)}$, says that the operators $T_\alpha^{u,\lambda}$ inductively define a Kazhdan-Lusztig basis of $\mathcal{H}_{\Theta(u,\lambda)}$ in a similar fashion as the conditions \ref{enum:KL_basis_expansion} and \ref{enum:KL_basis_U}. More precisely, the Kazhdan-Lusztig basis $\{\psi_{u,\lambda}(F)\}_{F \in W_{\lambda,\Theta(u,\lambda)} \backslash W_\lambda}$ of $\mathcal{H}_{\Theta(u,\lambda)}$ is the unique basis over $\mathbb{Z}[q^{\pm1}]$ such that \begin{enumerate}[label=(W.\arabic*)]
\item \label{enum:WKL_basis_expansion}
$\psi_{u,\lambda}(F) = \delta_F + \sum_{G <_{u,\lambda} F} P_{FG}^{u,\lambda} \; \delta_G$ for some $P_{FG}^{u,\lambda} \in q \mathbb{Z}[q]$; and
\item \label{enum:WKL_basis_U}
if $F$ is not the shortest right coset, there exist $\alpha \in \Pi_\lambda$ and $c_G \in \mathbb{Z}$ such that $F s_\alpha <_{u,\lambda} F$ and
\begin{equation*}
T_\alpha^{u,\lambda}(\psi_{u,\lambda}(F s_\alpha)) = \sum_{G \leqslant_{u,\lambda} F} c_G \; \psi_{u,\lambda}(G)
\end{equation*} \end{enumerate}
(Definition/Theorem \ref{def:parabolic_KL_poly}). We can still formally consider an $\mathcal{H}$-module structure on $\mathcal{H}_\Theta$ as in the integral case and define operators $T_\alpha: \mathcal{H}_\Theta \to \mathcal{H}_\Theta$ for simple roots $\alpha$. When $\alpha$ is integral to $\lambda$, $T_\alpha$ is the combinatorial incarnation of the $U$-functor. It preserves the decomposition $(-)|_\lambda$ and restricts to $T_\alpha^{u,\lambda}$ on each $\mathcal{H}_{\Theta(u,\lambda)}$. When a simple root $\beta$ is non-integral, we will instead consider the endomorphism $(-) \cdot s_\beta$ on $\mathcal{H}_\Theta$ given by $\delta_C \cdot s_\beta = \delta_{C s_\beta}$, which represents the action of the intertwining functor $I_{s_\beta}$.
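Two degenerate cases may help orient the reader. If $\Theta(u,\lambda) = \Pi_\lambda$, then $W_{\lambda,\Theta(u,\lambda)} = W_\lambda$, there is a single coset $F = W_\lambda$, and \ref{enum:WKL_basis_expansion} forces $\psi_{u,\lambda}(F) = \delta_F$. If instead $\Theta(u,\lambda) = \varnothing$, the cosets are singletons, the order $\leqslant_{u,\lambda}$ is the Bruhat order on $W_\lambda$, and the basis $\{\psi_{u,\lambda}(F)\}$ should reduce to the ordinary Kazhdan-Lusztig basis of $\mathcal{H}_\lambda$; this is consistent with the fact that the case $\eta = 0$ recovers the setting of Verma modules.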
Here is our (slightly rephrased) algorithm.
\begin{theorem}[Kazhdan-Lusztig algorithm]
Fix a character $\eta: \mathfrak{n} \to \mathbb{C}$. For any $\lambda$ and any $C \in W_\Theta \backslash W$, write $W_\Theta u W_\lambda$ for the double coset containing $C$, where $u$ is the unique shortest element in this double coset. Then
\begin{enumerate}[label=(A.\arabic*)]
\item \label{enum:WKL_expansion}
There exist polynomials $P_{CD}^{u,\lambda} \in q\mathbb{Z}[q]$ so that
\begin{equation*}
\nu'(\mathcal{L}(w^C,\lambda,\eta)) = \nu'(\mathcal{M}(w^C,\lambda,\eta))
+ \sum_{\substack{D \in W_\Theta \backslash W_\Theta u W_\lambda\\%
D <_{u,\lambda} C}}
P_{CD}^{u,\lambda} \; \nu' (\mathcal{M}(w^D, \lambda,\eta)).
\end{equation*}
\item \label{enum:WKL_U}
For any integral simple root $\alpha$ such that $C s_\alpha < C$, there exist integers $c_D$ depending on $C$, $D$, and $s_\alpha$, such that
\begin{equation*}
T_\alpha(\nu'(\mathcal{L}(w^{Cs_\alpha},\lambda,\eta))) =
\sum_{\substack{D \in W_\Theta \backslash W_\Theta u W_\lambda \\ D \leqslant_{u,\lambda} C}}
c_D \;\nu'(\mathcal{L}(w^D,\lambda,\eta)).
\end{equation*}
\item \label{enum:WKL_I}
For any non-integral simple root $\beta$ such that $C s_\beta < C$,
\begin{equation*}
\nu'(\mathcal{L}(w^C,\lambda,\eta)) \cdot s_\beta = \nu'(\mathcal{L}(w^{C s_\beta}, s_\beta \lambda, \eta)).
\end{equation*}
\item \label{enum:WKL_integral_model}
$\nu'(\mathcal{L}(w^C,\lambda,\eta))|_\lambda$ is a Kazhdan-Lusztig basis element of $\mathcal{H}_{\Theta(u,\lambda)}$.
\end{enumerate} \end{theorem}
This appears as Theorem \ref{thm:KL_alg} below. The character formula \ref{thm:multiplicity_intro} follows by taking \ref{enum:WKL_expansion} and \ref{enum:WKL_integral_model} for antidominant regular $\lambda$, descending to the Grothendieck group by specializing at $q=-1$, passing through Beilinson-Bernstein localization, and applying the character map.
The proof of the algorithm is an induction on the length $\ell(w^C)$. The proofs of \ref{enum:WKL_expansion} and \ref{enum:WKL_U} are analogous to the proofs of \ref{enum:KL_basis_expansion} and \ref{enum:KL_basis_U}, respectively. \ref{enum:WKL_I} reflects the action of non-integral intertwining functor $I_{s_\beta}$. In fact, the following diagram commutes \begin{equation}
\begin{tikzcd}
\Mod_{coh}(\mathcal{D}_\lambda,N,\eta) \ar[d, "I_{s_\beta}"'] \ar[r, "\nu'"]
& \mathcal{H}_\Theta \ar[d, "(-) \cdot s_\beta"'] \ar[r, "(-)|_\lambda"]
& {\displaystyle \bigoplus \mathcal{H}_{\Theta(u,\lambda)} } \ar[d, "s_\beta \cdot (-) \cdot s_\beta"]\\
\Mod_{coh}(\mathcal{D}_{s_\beta\lambda},N,\eta) \ar[r, "\nu'"]
& \mathcal{H}_\Theta \ar[r, "(-)|_\lambda"]
& {\displaystyle \bigoplus \mathcal{H}_{\Theta(r,s_\beta \lambda)} }
\end{tikzcd} \end{equation} (Proposition \ref{lem:Is_right_coset}, Corollary \ref{lem:I_on_std}, and Proposition \ref{thm:Is_pullback}; we only prove the commutativity of this diagram for irreducible $\mathcal{D}$-modules, but extension to other modules is straightforward). The push-pull operation together with non-integral intertwining functors allows the induction argument to run. In the actual proof, one prove \ref{enum:WKL_U} and \ref{enum:WKL_I} first at each inductive step and use them two prove the remaining statements.
The remaining technical difficulty lies in the proof of \ref{enum:WKL_integral_model}. It requires us to find $\alpha \in \Pi_\lambda$ so that $C s_\alpha <_{u,\lambda} C$ and \ref{enum:WKL_basis_U} holds. If $\alpha$ can be chosen to be also simple in $\Sigma^+$, then \ref{enum:WKL_basis_U} simply follows from \ref{enum:WKL_U}. But there are examples where this cannot be done. The strategy then is to apply non-integral intertwining functors so that $\alpha$ becomes simple in both the integral Weyl group and in $W$, and that $C$ is translated to a coset of smaller length so that \ref{enum:WKL_U} holds by induction assumption. \ref{enum:WKL_basis_U} is obtained by translating \ref{enum:WKL_U} back via inverse intertwining functors. The existence of such a chain of intertwining functors is guaranteed by Lemma \ref{lem:decrease_of_length}.
In the special case $\eta = 0$, our argument gives a new proof of the non-integral Kazhdan-Lusztig conjecture for Verma modules. The non-integral intertwining functors we use do not seem to have analogues in existing approaches using perverse sheaves (for example Lusztig's proof \cite[Chapter 1]{Lusztig:Char_finite_field}) and Soergel modules (\cite{Soergel:V}). Since these functors are equivalences on the category of quasi-coherent $\mathcal{D}_\lambda$-modules, we believe they should have applications outside of the current context. We will expand on this in Remark \ref{rmk:Verma_case_old_proofs}.
\subsection{Outline of the paper}
The paper is organized as follows. In \textsection \ref{sec:prelim} we present preliminaries on Whittaker modules and their localizations. The following section \textsection\ref{sec:db_coset} is devoted to studying the structure of left $W_\lambda$-cosets and double $(W_\Theta,W_\lambda)$-cosets in the Weyl group. In \textsection \ref{sec:geom} we study the effect of non-integral intertwining functors on irreducible $\mathcal{D}$-modules. In \textsection \ref{sec:KL} we state and prove the main algorithm. The character formula is established in \textsection \ref{sec:character_formula}. Lastly, in \textsection \ref{sec:examples}, we provide an example on the $A_3$ root system.
\section{Preliminaries}\label{sec:prelim}
In this section we fix some notations and present necessary facts on Whittaker modules and their localizations without proof.
Let us start by recalling some notations. We have fixed in \textsection\ref{subsec:main_results} a complex semisimple Lie algebra $\mathfrak{g}$, a maximal nilpotent subalgebra $\mathfrak{n}$, and a Cartan subalgebra $\mathfrak{h}$ normalizing $\mathfrak{n}$. The capital letters $G$, $N$, and $H$ denote the corresponding algebraic groups. The sets $\Sigma \supset \Sigma^+ \supseteq \Pi \supseteq \Theta$ denote the root system of $(\mathfrak{g},\mathfrak{h})$, the set of positive roots determined by $\mathfrak{n}$, the set of simple roots, and the subset of simple roots determined by $\eta$ (defined below), respectively. We let $\eta: \mathfrak{n} \to \mathbb{C}$ be a Lie algebra character and let \begin{equation*}
\Theta = \{ \alpha \in \Pi \mid \eta \text{ is nonzero on the $\alpha$-root space in } \mathfrak{n} \}. \end{equation*} The half sum of positive roots is denoted by $\rho$. The Weyl group is denoted by $W$. Let $\lambda \in \mathfrak{h}^*$ and let $\theta$ be the $W$-orbit of $\lambda$. A subscript $\Theta$ or $\lambda$ denotes the corresponding subobject defined by $\Theta$ or integral to $\lambda$, respectively. The capital letters $C,D,E,F$ will denote right $W_\Theta$-cosets in $W$ or in $W_\lambda$ (except $C(w)$ will denote a Schubert cell). We will write $w^C$ for the unique longest element in $C$.
Let $\mathcal{U}(\mathfrak{g})$ be the enveloping algebra of $\mathfrak{g}$ and write $\mathcal{Z}(\mathfrak{g})$ for the center of $\mathcal{U}(\mathfrak{g})$. We write $\xi: \mathcal{Z}(\mathfrak{g}) \to \Sym(\mathfrak{h})^W$ for the Harish-Chandra isomorphism (this is the map $\gamma \circ \varphi|_{\mathcal{Z}(\mathfrak{g})}$ in \cite[Theorem 7.4.5]{Dixmier:Enveloping_Alg}). We write $\chi_\lambda = \chi_\theta$ for the composition \begin{equation*}
\mathcal{Z}(\mathfrak{g}) \xrightarrow{\xi} \Sym(\mathfrak{h})^W \hookrightarrow \Sym(\mathfrak{h}) \xrightarrow{\lambda} \mathbb{C}. \end{equation*} This only depends on the Weyl group orbit $\theta$ of $\lambda$, and will be called an \textit{infinitesimal character}. We let $\mathcal{U}(\mathfrak{g})_\lambda = \mathcal{U}(\mathfrak{g})_\theta$ denote $\mathcal{U}(\mathfrak{g})/\langle \ker \chi_\theta \rangle$. When the Lie algebra is understood, we often write $\mathcal{U}_\theta$ for $\mathcal{U}(\mathfrak{g})_\theta$. The weight $\lambda$ is said to be \textit{regular} if $\alpha^\vee(\lambda) \neq 0$ for all roots $\alpha$, \textit{antidominant} if $\alpha^\vee(\lambda)$ is not a positive integer for all $\alpha \in \Sigma^+$, and \textit{integral} if $\alpha^\vee(\lambda) \in \mathbb{Z}$ for all $\alpha \in \Sigma$. We say $\chi_\theta$ is regular if $\lambda$ is.
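For example, for $\mathfrak{g} = \mathfrak{sl}_2$ with positive root $\alpha$ and $\lambda = c\rho$ (so that $\alpha^\vee(\lambda) = c$), the weight $\lambda$ is regular if and only if $c \neq 0$, antidominant if and only if $c$ is not a positive integer, and integral if and only if $c \in \mathbb{Z}$. Thus $\lambda = -\rho$ is integral, regular, and antidominant; $\lambda = 0$ is integral and antidominant but not regular; and $\lambda = \rho/2$ is antidominant and regular but not integral.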
\subsection{Preliminaries on Whittaker modules}\label{subsec:Wh_prelim}
The category of \textbf{Whittaker modules}, denoted by $\mathcal{N}$, is the full subcategory of all $\mathfrak{g}$-modules consisting of those that are finitely generated over $\mathfrak{g}$, locally finite over $\mathfrak{n}$, and locally finite over $\mathcal{Z}(\mathfrak{g})$. Here we say a module over a $\mathbb{C}$-algebra is locally finite if every element generates a finite dimensional subspace. We write $\mathcal{N}_\theta$ (resp. $\mathcal{N}_\eta$) for the full subcategory of $\mathcal{N}$ consisting of objects with infinitesimal character $\chi_\theta$ (resp. on which $\xi - \eta(\xi)$ acts locally nilpotently for all $\xi \in \mathfrak{n}$). Set $\mathcal{N}_{\theta,\eta} = \mathcal{N}_\theta \cap \mathcal{N}_\eta$. Every object of $\mathcal{N}$ has finite length (\cite[Theorem 2.7(c)]{McDowell:Whittaker}, \cite[Theorem 2.6(1)]{Milicic-Soergel:Whittaker_algebraic}). By local finiteness over $\mathcal{Z}(\mathfrak{g})$ and $\mathcal{U}(\mathfrak{n})$, $\mathcal{N}$ decomposes as a direct sum of various subcategories $\mathcal{N}_{\theta,\eta}$. So each irreducible object lands in a single $\mathcal{N}_{\theta,\eta}$.
To describe the category $\mathcal{N}_{\theta,\eta}$ in more detail, let us first consider the non-degenerate case. So assume $\eta$ is non-degenerate, i.e. $\Theta = \Pi$. Consider the cyclic module \begin{equation*}
Y_\mathfrak{g}(\lambda,\eta) := \mathcal{U}(\mathfrak{g})_\lambda \dotimes_{\mathcal{U}(\mathfrak{n})} \mathbb{C}_\eta, \end{equation*} i.e. a module generated by a single vector on which $\mathfrak{n}$ acts by $\eta$ and $\mathcal{Z}(\mathfrak{g})$ acts by $\chi_\lambda$. Kostant showed that $Y_\mathfrak{g}(\lambda,\eta)$ is irreducible, and is the unique irreducible object in the semisimple category $\mathcal{N}_{\theta,\eta}$ \cite[Theorem A]{Kostant:Whittaker} (see \cite[Theorem 5.6]{Milicic-Soergel:Whittaker_geometric} for a geometric proof). The category $\mathcal{N}_\eta$ is equivalent to the category of finite dimensional $\mathcal{Z}(\mathfrak{g})$-modules \cite[Theorem 5.9]{Milicic-Soergel:Whittaker_geometric}.
Now suppose $\eta$ is general. We can define $Y_\mathfrak{g}(\lambda,\eta)$ in the same way as above, but it will be potentially reducible. Instead, we look at the parabolic subalgebra $\mathfrak{p}_\Theta \supset \mathfrak{h} + \mathfrak{n}$ defined by $\Theta$. We take its $\ad \mathfrak{h}$-stable Levi decomposition $\mathfrak{p}_\Theta = \mathfrak{l}_\Theta + \mathfrak{u}_\Theta$, write $\mathfrak{n}_\Theta = \mathfrak{l}_\Theta \cap \mathfrak{n}$ (so that $\mathfrak{n} = \mathfrak{n}_\Theta + \mathfrak{u}_\Theta$), and write $\rho_\Theta$ for the half sum of $\mathfrak{h}$-roots in $\mathfrak{n}_\Theta$. Then the restriction $\eta|_{\mathfrak{n}_\Theta}$ is non-degenerate by construction, so the cyclic $\mathfrak{l}_\Theta$-module \begin{equation*}
Y_\mathfrak{l}(\lambda-\rho+\rho_\Theta,\eta) = \mathcal{U}(\mathfrak{l}_\Theta)_{\lambda-\rho+\rho_\Theta} \dotimes_{\mathcal{U}(\mathfrak{n}_\Theta)} \mathbb{C}_\eta \end{equation*} is irreducible. The following definition is due to McDowell \cite[Proposition 2.4]{McDowell:Whittaker} (see also \cite[\textsection 2]{Milicic-Soergel:Whittaker_algebraic}; our notation is closest to the one in \cite[Definition 2]{Romanov:Whittaker}).
\begin{definition}\label{def:std_Wh_mods}
The \textbf{standard Whittaker module} is the module parabolically induced from $Y_\mathfrak{l}(\lambda-\rho+\rho_\Theta,\eta)$:
\begin{equation*}
M(\lambda,\eta) = \mathcal{U}(\mathfrak{g}) \dotimes_{\mathcal{U}(\mathfrak{p}_\Theta)} Y_\mathfrak{l}(\lambda-\rho+\rho_\Theta,\eta).
\end{equation*} \end{definition} When $\eta$ is non-degenerate, $M(\lambda,\eta) = Y_\mathfrak{g}(\lambda,\eta)$. When $\eta = 0$, these are just Verma modules.
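For a simple intermediate example (included only for orientation, with the conventions above), take $\mathfrak{g} = \mathfrak{sl}_3$ and $\Theta = \{\alpha\}$ for one simple root $\alpha$. Then $\mathfrak{p}_\Theta$ is a maximal parabolic subalgebra, the semisimple part of $\mathfrak{l}_\Theta$ is isomorphic to $\mathfrak{sl}_2$, and $M(\lambda,\eta)$ is obtained by inducing Kostant's irreducible Whittaker module for $\mathfrak{l}_\Theta$ (with respect to the non-degenerate restriction $\eta|_{\mathfrak{n}_\Theta}$) from $\mathfrak{p}_\Theta$ to $\mathfrak{g}$; in general it is neither a Verma module nor of the form $Y_\mathfrak{g}(\lambda,\eta)$.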
The standard Whittaker modules share similar properties with Verma modules. McDowell showed that each $M(\lambda,\eta)$ is in $\mathcal{N}_{\theta,\eta}$ and admits a unique irreducible quotient $L(\lambda,\eta)$. Moreover, $M(\lambda,\eta) = M(\lambda',\eta)$ if and only if $W_\Theta \lambda = W_\Theta \lambda'$, and the same holds for irreducibles. These facts are contained in \cite{McDowell:Whittaker} Proposition 2.4, Theorem 2.5, and Theorem 2.9 and are reproved in \cite[\textsection 2]{Milicic-Soergel:Whittaker_algebraic}. In particular, if we fix an antidominant $\lambda$ and write $W^\lambda$ for the stabilizer of $\lambda$ in $W$, then standard objects and irreducible objects in $\mathcal{N}_{\theta,\eta}$ are parameterized by double cosets $W_\Theta \backslash W / W^\lambda$ where $W_\Theta z W^\lambda$ corresponds to $M(z \lambda,\eta)$ and $L(z \lambda,\eta)$. If $\lambda$ is regular, then standards and irreducibles are parameterized by $W_\Theta \backslash W$. In accordance with the geometric setup in \textsection\ref{subsec:geom_prelim}, we will write $M(w^C \lambda,\eta)$ and $L(w^C \lambda,\eta)$ for the modules parameterized by $C \in W_\Theta \backslash W$, where $w^C$ is the unique longest element in $C$.
Using the standard modules, McDowell showed that each cyclic module $Y_\mathfrak{g}$ also lands inside a single $\mathcal{N}_{\theta,\eta}$ \cite[Theorem 2.5]{McDowell:Whittaker}. Later Mili\v ci\'c-Soergel showed that in fact cyclic modules are filtered by standard modules (and are direct sums of standard modules if the infinitesimal character is regular) \cite[Corollary 2.5]{Milicic-Soergel:Whittaker_algebraic}.
By mimicking the construction for Verma modules, Romanov developed in \cite[\textsection 2.2]{Romanov:Whittaker} a character theory for $\mathcal{N}_{\theta,\eta}$. She defines a map $\ch$ on objects of $\mathcal{N}_{\theta,\eta}$ that factors through and is injective on the Grothendieck group $K \mathcal{N}_{\theta,\eta}$. The characters of standard Whittaker modules are computed explicitly in \cite[Equation (2)]{Romanov:Whittaker}. In Romanov's paper, the character theory is mainly used to match global sections of costandard $\mathcal{D}$-modules (which will be defined in \textsection\ref{subsec:geom_prelim}) with standard Whittaker modules. Although our main results are stated in terms of the character map, they are in fact statements about the Grothendieck group, and we will not use any other property of the character map.
Nevertheless, let us briefly describe the shape of this character theory. Let $\mathfrak{h}^\Theta$ be the center of $\mathfrak{l}_\Theta$, let $\mathfrak{s}_\Theta = [\mathfrak{l}_\Theta,\mathfrak{l}_\Theta]$ be the semisimple part of $\mathfrak{l}_\Theta$, and let $\mathfrak{h}_\Theta = \mathfrak{s}_\Theta \cap \mathfrak{h}$ be a Cartan subalgebra of $\mathfrak{s}_\Theta$, so that $\mathfrak{h} = \mathfrak{h}_\Theta \oplus \mathfrak{h}^\Theta$. Since $\eta$ is non-degenerate on $\mathfrak{n}_\Theta = \mathfrak{s}_\Theta \cap \mathfrak{n}$, the category $\mathcal{N}(\mathfrak{s}_\Theta)_\eta$ of Whittaker modules of $\mathfrak{s}_\Theta$ with generalized $\mathfrak{n}_\Theta$-character $\eta$ is equivalent to the category of finite dimensional $\mathcal{Z}(\mathfrak{s}_\Theta)$-modules. The Grothendieck group $K \mathcal{N}(\mathfrak{s}_\Theta)_\eta$ is therefore free abelian with a basis given by maximal ideals in $\mathcal{Z}(\mathfrak{s}_\Theta)$, which is in bijection with the set of $\mathfrak{n}_\Theta$-regular dominant integral weights of $\mathfrak{h}_\Theta$.
Any object $V$ in $\mathcal{N}_\eta$ is necessarily locally $\mathfrak{h}^\Theta$-finite. Hence $V$ can be decomposed into a direct sum of generalized $\mathfrak{h}^\Theta$-weight spaces $V^{{\mu}}$, $ \mu \in (\mathfrak{h}^\Theta)^*$. It can be shown that each one of these is an $\mathfrak{s}_\Theta$-module living in $\mathcal{N}(\mathfrak{s}_\Theta)_\eta$ \cite[Theorem 4]{Romanov:Whittaker}. The character map is defined by \begin{equation}
\ch: \operatorname{Obj} \mathcal{N}_{\theta,\eta} \xrightarrow{\;\;\;\;\;} K \mathcal{N}(\mathfrak{s}_\Theta)_\eta \dotimes_\mathbb{Z} \mathbb{Z}[[(\mathfrak{h}^\Theta)^*]],\quad
V \mapsto \sum_{ \mu \in (\mathfrak{h}^\Theta)^*} [V^{ \mu}|_{\mathfrak{s}_\Theta}] e^{ \mu}, \end{equation} where $\mathbb{Z}[[(\mathfrak{h}^\Theta)^*]]$ is the group of power series in $e^{ \mu}$, $ \mu \in (\mathfrak{h}^\Theta)^*$. The characters of standard modules are easily computed, and are linear combinations with partition functions as coefficients, similar to the case of Verma modules. See \cite[Equation (2)]{Romanov:Whittaker} for details.
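In the two extreme cases the character map takes a familiar form. If $\eta = 0$, then $\mathfrak{s}_\Theta = 0$ and $\mathfrak{h}^\Theta = \mathfrak{h}$, and $\ch V$ is the usual formal character $\sum_{\mu} \dim V^{\mu}\, e^{\mu}$ recording generalized $\mathfrak{h}$-weight multiplicities. If $\eta$ is non-degenerate, then $\mathfrak{h}^\Theta = 0$ and $\ch V$ is simply the class $[V]$ in $K \mathcal{N}(\mathfrak{g})_\eta$.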
\subsection{Localization of Whittaker modules}\label{subsec:geom_prelim}
In this subsection we describe the localization framework related to Whittaker modules. References for facts below include \cite{Beilinson-Bernstein:Localization}, \cite{Jantzen_Conj}, \cite{Milicic-Soergel:Whittaker_geometric}, \cite{Milicic:Localization}, \cite{Romanov:Whittaker}.
Let $X$ be the flag variety of $\mathfrak{g}$, the variety of Borel subalgebras of $\mathfrak{g}$, with the natural $G$-action. The sheaf of ordinary (algebraic) differential operators $\mathcal{D}_X$ is the subsheaf of $\mathcal{H}om_\mathbb{C}(\mathcal{O}_X,\mathcal{O}_X)$ generated by multiplications of functions and actions of vector fields. The natural action of $G$ on the space of functions on $X$ can be differentiated, which assigns each element in $\mathfrak{g}$ a vector field on $X$, whence a map $\mathfrak{g} \to \mathcal{D}_X$.
More generally, for each $\lambda \in \mathfrak{h}^*$, Beilinson-Bernstein constructed in \cite{Beilinson-Bernstein:Localization} a twisted sheaf of differential operators $\mathcal{D}_\lambda$ on $X$ together with a map $\mathfrak{g} \to \mathcal{D}_\lambda$ that induces an isomorphism $\mathcal{U}_\theta \cong \Gamma(X,\mathcal{D}_\lambda)$ (recall that $\theta$ is the Weyl group orbit of $\lambda$). Here $\mathcal{D}_\lambda$ is a sheaf of $\mathbb{C}$-algebras that is locally isomorphic to $\mathcal{D}_X$. We use the parametrization of these sheaves as in \cite[Chapter 2 \textsection 1]{Milicic:Localization}, under which $\mathcal{D}_X = \mathcal{D}_{-\rho}$. If $\lambda$ is antidominant and regular, Beilinson and Bernstein showed that taking global sections on $X$ is an equivalence of categories \begin{equation}
\Gamma(X,-): \Mod_{qc}(\mathcal{D}_\lambda) \cong \Mod(\mathcal{U}_\theta) \end{equation} between the category of quasi-coherent $\mathcal{D}_\lambda$-modules and the category of $\mathcal{U}_\theta$-modules, and a quasi-inverse is given by the \textit{localization} functor $\mathcal{D}_\lambda \dotimes_{\mathcal{U}_\theta} -$ \cite{Beilinson-Bernstein:Localization}. If $\lambda$ is only antidominant but not regular, $\Gamma(X,-)$ is still exact, but some $\mathcal{D}_\lambda$-modules can have zero global section, and $\Gamma(X,-)$ factors through an equivalence between $\Mod(\mathcal{U}_\theta)$ and a quotient of $\Mod_{qc}(\mathcal{D}_\lambda)$. The subcategory $\mathcal{N}_{\theta,\eta}$ of $\Mod(\mathcal{U}_\theta)$ corresponds, under the above equivalence of categories, to the subcategory $\Mod_{coh}(\mathcal{D}_\lambda,N,\eta)$ consisting of \textbf{$\eta$-twisted Harish-Chandra sheaves} (or $\eta$-twisted sheaves for short). This is the full subcategory of all coherent $\mathcal{D}_\lambda$-modules consisting of those $\mathcal{V}$ such that \begin{itemize}
\item $\mathcal{V}$ is an $N$-equivariant $\mathcal{O}_X$-module,
\item the action map $\mathcal{D}_\lambda \dotimes \mathcal{V} \to \mathcal{V}$ of $\mathcal{D}_\lambda$ on $\mathcal{V}$ is $N$-equivariant, and
\item for all $\xi \in \mathfrak{n}$, the equation $\pi(\xi) = \mu(\xi) + \eta(\xi)$ holds in $\End_\mathbb{C}(\mathcal{V})$, where $\pi$ is the action of $\mathfrak{n}$ induced by $\mathfrak{n} \subset \mathfrak{g} \to \mathcal{D}_\lambda \mathrel{\reflectbox{$\righttoleftarrow$}} \mathcal{V}$, and $\mu$ is the action given by the differential of the $N$-equivariant structure on $\mathcal{V}$. \end{itemize}
Any $\eta$-twisted Harish-Chandra sheaf is automatically holonomic \cite[Lemma 1.1]{Milicic-Soergel:Whittaker_geometric}. Holonomic modules share very nice properties (see \cite[Definition 2.3.6]{HTT} for the definition of holonomicity and \cite[Chapter 3]{HTT} for its properties). They have finite length. They are preserved by direct images and inverse images along morphisms of smooth algebraic varieties. They admit a duality operation \begin{equation*}
\mathbb{D}: \Mod_{hol}(\mathcal{D}_\lambda) \cong \Mod_{hol}(\mathcal{D}_\lambda{}^{op}) \cong \Mod_{hol}(\mathcal{D}_{-\lambda}) \end{equation*} where $\mathcal{D}_\lambda{}^{op}$ denotes the opposite algebra of $\mathcal{D}_\lambda$ (the last equality follows from the identification $\mathcal{D}_\lambda{}^{op} \cong \mathcal{D}_{-\lambda}$, see \cite[\textsection A.2 pp.325]{Hecht-Milicic-Schmid-Wolf:Localization1} or \cite[pp.44, No.9 Example 3]{Beilinson-Bernstein:Subrep}). For a morphism $f$ between smooth varieties, we denote direct images of holonomic $\mathcal{D}$-modules by $f_+$, $f_!$ and inverse images by $f^+$, $f^!$. Here $f_+$ agrees with the definition in \cite[VI.5]{Borel:D-mods} and with $\int_f$ in \cite[\textsection 1.5]{HTT}. The functor $f_!$ is obtained by conjugating $f_+$ by holonomic duality $\mathbb{D}$, which agrees with $\int_{f!}$ in \cite[Definition 3.2.3]{HTT}. The pullback $f^!$ agrees with the one defined in \cite[VI.4]{Borel:D-mods} and with $f^\dagger$ in \cite[\textsection 1.5]{HTT}. When $f$ is a closed immersion of a smooth subvariety, $H^0 f^! \mathcal{V}$ consists of sections of $\mathcal{V}$ supported in the subvariety. The functor $Lf^+$ is a shift of $f^!$ by the relative dimension ($Lf^*$ in \cite[\textsection 1.5]{HTT}); forgetting the $\mathcal{D}$-module structures, $f^+ := H^0 Lf^+$ agrees with the usual $\mathcal{O}$-module inverse image $f^*$. All $\eta$-twisted Harish-Chandra sheaves are functorial with respect to all these operations.
Let $C(w)$, $w \in W$ be the Schubert cells (i.e. $N$-orbits) on $X$, with inclusion maps $i_w: C(w) \to X$. There exist nonzero $\eta$-twisted Harish-Chandra sheaves on $C(w)$ if and only if $w = w^C$ is the longest element in the right $W_\Theta$-coset that contains it. If this is the case, the category $\Mod_{coh}(\mathcal{D}_{C(w^C)},N,\eta)$ is semisimple, in which the unique irreducible object, denoted by $\mathcal{O}_{C(w^C)}^\eta$, has $\mathcal{O}_{C(w^C)}$ as the underlying structure of an $N$-equivariant $\mathcal{O}_{C(w^C)}$-module, but with an $\eta$-twisted $\mathcal{D}_{C(w^C)}$-action (these results are contained in \cite[\textsection 3 and \textsection 4]{Milicic-Soergel:Whittaker_geometric}). We call the $\mathcal{D}$-module direct images \begin{equation*}
\mathcal{I}(w^C,\lambda,\eta) = i_{w^C+} \mathcal{O}_{C(w^C)}^\eta,\qquad
\mathcal{M}(w^C,\lambda,\eta) = i_{w^C!} \mathcal{O}_{C(w^C)}^\eta \end{equation*} the \textbf{standard module} and the \textbf{costandard module}, respectively. The standard module $\mathcal{I}(w^C,\lambda,\eta)$ contains a unique irreducible submodule \begin{equation*}
\mathcal{L}(w^C,\lambda,\eta), \end{equation*} and $\mathcal{L}(w^C,\lambda,\eta)$ is the unique irreducible quotient of $\mathcal{M}(w^C,\lambda,\eta)$. These exhaust all irreducible objects in $\Mod_{coh}(\mathcal{D}_\lambda,N,\eta)$ (\cite[\textsection 3.4]{HTT}, \cite[\textsection 4]{Milicic-Soergel:Whittaker_geometric}). Romanov \cite[Theorem 9]{Romanov:Whittaker} showed (using the character theory she developed) that if $\lambda$ is antidominant, Beilinson-Bernstein's equivalence sends $\mathcal{M}(w^C,\lambda,\eta)$ to $M(w^C\lambda,\eta)$ and $\mathcal{L}(w^C,\lambda,\eta)$ to either $L(w^C\lambda,\eta)$ or $0$. If $\lambda$ is furthermore regular, $\mathcal{L}(w^C,\lambda,\eta)$ is always sent to $L(w^C\lambda,\eta)$. This allows us to study Whittaker modules using geometry on $X$.
In practice, we work with $\mathcal{I}(w^C,\lambda,\eta)$ rather than $\mathcal{M}(w^C,\lambda,\eta)$ because $f_+$ is more natural in the $\mathcal{D}$-module theory than $f_!$. The holonomic duality $\mathbb{D}$ sends $\mathcal{M}(w^C,\lambda,\eta)$ to $\mathcal{I}(w^C,-\lambda,\eta)$ (because $f_! = \mathbb{D} \circ f_+ \circ \mathbb{D}$) and hence sends the unique irreducible quotient $\mathcal{L}(w^C,\lambda,\eta)$ of $\mathcal{M}(w^C,\lambda,\eta)$ to the unique irreducible submodule $\mathcal{L}(w^C,-\lambda,\eta)$ of $\mathcal{I}(w^C,-\lambda,\eta)$ \cite[Proposition 3.4.3]{HTT}. So we have the following flowchart \begin{equation*}
\mathcal{N}_{\theta,\eta} \xrightarrow{\mathcal{D}_\lambda\dotimes_{\mathcal{U}_\theta}-} \Mod_{coh}(\mathcal{D}_\lambda,N,\eta) \xrightarrow{\mathbb{D}} \Mod_{coh}(\mathcal{D}_{-\lambda},N,\eta), \end{equation*} \begin{equation*}
L(w^C\lambda,\eta) \mapsto \mathcal{L}(w^C,\lambda,\eta) \mapsto \mathcal{L}(w^C,-\lambda,\eta), \end{equation*} \begin{equation*}
M(w^C\lambda,\eta) \mapsto \mathcal{M}(w^C,\lambda,\eta) \mapsto \mathcal{I}(w^C,-\lambda,\eta). \end{equation*} Because of the finite length property, the set of irreducible objects forms a basis for the Grothendieck group $K \Mod_{coh}(\mathcal{D}_{-\lambda},N,\eta)$. A standard argument using pullback-pushforward adjunctions shows that the set of standard modules also forms a basis for $K \Mod_{coh}(\mathcal{D}_{-\lambda},N,\eta)$. Therefore, our goal of finding coefficients of $\ch M(w^D\lambda,\eta)$ in $\ch L(w^C\lambda,\eta)$ is the same as finding the change of basis matrix from the $\mathcal{L}$ basis to the $\mathcal{I}$ basis. Of course, the special case of $\eta = 0$ corresponds to Verma modules and has already been treated by the proof of the ordinary Kazhdan-Lusztig conjecture.
\section{Double cosets in the Weyl group}\label{sec:db_coset}
In this section, we collect some known results on the integral root subsystem, and examine the structure of double $(W_\Theta, W_\lambda)$-cosets in $W$. Most results here are either known or not hard to prove. We include most of the proofs for completeness. We will refer to the example in \textsection\ref{sec:examples} when combinatorial objects are introduced.
In \textsection\ref{subsec:Bruhat_W/Wlambda} we define a cross-section of $W/W_\lambda$ and examine the restriction of Bruhat order to each coset. The next subsection \textsection\ref{subsec:WTheta_prelim} sets notations and collects some known facts on $W_\Theta \backslash W$. In \textsection\ref{subsec:db_coset_xsec}, we construct a cross-section $A_{\Theta,\lambda}$ of $W_\Theta \backslash W / W_\lambda$ consisting of the unique shortest elements in each double coset (Corollary \ref{thm:cross-section_db_coset}). Next, we show in \textsection\ref{subsec:int_model} that, if one looks at the partition of $W_\Theta \backslash W$ given by double cosets $W_\Theta \backslash W / W_\lambda$, then each block in this partition corresponds to a right coset in $W_\lambda$ of a parabolic subgroup of $W_\lambda$, called the ``integral model'' for this double coset. As mentioned in \textsection\ref{sec:intro}, the Whittaker Kazhdan-Lusztig polynomials for $(W_\lambda,\Pi_\lambda)$ with respect to this parabolic subgroup describe the multiplicities of Whittaker modules indexed by right $W_\Theta$-cosets in this double coset. Lastly, in \textsection\ref{subsec:lem_induction}, we prove a lemma which enables a key induction step in \textsection\ref{subsec:(4)}.
We refer the reader to \textsection\ref{sec:prelim} for the definitions of the root system theoretic objects $\Sigma$, $\Sigma^+$, $\Pi$, $W$, and their variants defined by $\Theta$ and $\lambda$.
\subsection{Left $W_\lambda$-cosets and Bruhat order}\label{subsec:Bruhat_W/Wlambda}
For any $u \in W$, define the set \begin{equation*}
\Sigma_u^+ = \{ \alpha \in \Sigma^+ \mid u \alpha \in - \Sigma^+\} = \Sigma^+ \cap (-u^{-1} \Sigma^+), \end{equation*} i.e. the set of positive roots $\alpha$ so that $u\alpha$ is not positive. Write \begin{equation*}
A_\lambda = \{ u \in W \mid \Sigma_u^+ \cap \Sigma_\lambda = \varnothing\}. \end{equation*} The following is well-known.
\begin{lemma}
The set $A_\lambda$ is a cross-section of $W/W_\lambda$. \end{lemma}
\begin{proof}
This proof is copied verbatim from Mili\v ci\'c's unpublished notes \cite{Milicic:Localization}. Observe that
\begin{align*}
\Sigma_u^+ \cap \Sigma_\lambda
&= \Sigma^+ \cap (-u^{-1} \Sigma^+) \cap \Sigma_\lambda && (\text{by definition of } \Sigma_u^+)\\
&= (\Sigma^+ \cap \Sigma_\lambda) \cap (-u^{-1} \Sigma^+) && (\text{rearranging terms})\\
&= \Sigma_\lambda^+ \cap (-u^{-1} \Sigma^+) && (\text{by definition of } \Sigma_\lambda^+).
\end{align*}
Hence $\Sigma_u^+ \cap \Sigma_\lambda = \varnothing \iff \Sigma_\lambda^+ \subseteq u^{-1} \Sigma^+$, and
\begin{equation}\label{eqn:A_lambda_alt}
A_\lambda = \{u \in W \mid \Sigma_\lambda^+ \subseteq u^{-1} \Sigma^+\}.
\end{equation}
We first show that any left $W_\lambda$-coset has a representative in $A_\lambda$. Take any $w \in W$. Then $w^{-1} \Sigma^+$ is a set of positive roots in $\Sigma$. Hence $\Sigma_\lambda \cap w^{-1} \Sigma^+$ is a set of positive roots in $\Sigma_\lambda$. So there is an element $v \in W_\lambda$ such that $v(\Sigma_\lambda \cap w^{-1} \Sigma^+) = \Sigma_\lambda^+$, or equivalently $\Sigma_\lambda \cap vw^{-1} \Sigma^+ = \Sigma_\lambda^+$ (because $v \Sigma_\lambda = \Sigma_\lambda$). In particular $\Sigma_\lambda^+ \subseteq vw^{-1} \Sigma^+$, and hence $(v w^{-1})^{-1} = wv^{-1} \in A_\lambda$ by the above alternative description of $A_\lambda$. As a result $w \in A_\lambda v \subseteq A_\lambda W_\lambda$. This shows $W = A_\lambda W_\lambda$, and any left $W_\lambda$-coset has a representative in $A_\lambda$.
Now suppose $u_1,u_2 \in A_\lambda$ are in the same left $W_\lambda$-coset, i.e. there is $v \in W_\lambda$ with $u_1 = u_2 v$. This implies
\begin{align*}
\Sigma_\lambda^+
&= \Sigma_\lambda \cap u_1^{-1} \Sigma^+ && (\text{since } u_1 \in A_\lambda \text{ and because of (\ref{eqn:A_lambda_alt})})\\
&= \Sigma_\lambda \cap v^{-1} u_2^{-1} \Sigma^+ && (\text{since } u_1 = u_2 v)\\
&= v^{-1} ( \Sigma_\lambda \cap u_2^{-1} \Sigma^+) && (\text{using } v^{-1} \Sigma_\lambda = \Sigma_\lambda \text{ and factoring $v^{-1}$ out})\\
&= v^{-1} \Sigma_\lambda^+ && (\text{since } u_2 \in A_\lambda \text{ and because of (\ref{eqn:A_lambda_alt})}).
\end{align*}
Since $W_\lambda$ acts simply transitively on the set of sets of positive roots of $\Sigma_\lambda$, we have $v = 1$ and $u_1 = u_2$. Thus $A_\lambda$ is a cross-section of $W/W_\lambda$. \end{proof}
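For example (a toy configuration), let $\Sigma$ be of type $A_2$ with $\Pi = \{\alpha,\beta\}$ and suppose $\Sigma_\lambda = \{\pm\beta\}$, so that $W_\lambda = \{1, s_\beta\}$. The condition $\Sigma_\lambda^+ \subseteq u^{-1}\Sigma^+$ amounts to $u\beta \in \Sigma^+$, and one checks directly that
\begin{equation*}
A_\lambda = \{1,\; s_\alpha,\; s_\beta s_\alpha\},
\end{equation*}
one element from each of the three left $W_\lambda$-cosets $\{1, s_\beta\}$, $\{s_\alpha, s_\alpha s_\beta\}$, and $\{s_\beta s_\alpha, s_\alpha s_\beta s_\alpha\}$.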
The sets $\Sigma_\lambda$, $W_\lambda$, and $A_\lambda$ satisfy the following elementary properties.
\begin{lemma}\label{lem:non-int_refl_subsys}
Let $\beta$ be a simple root and let $u \in W$. Write $s_\beta$ for the reflection of $\beta$. Then
\begin{enumerate}[label=(\alph*)]
\item $u \Sigma_\lambda = \Sigma_{u \lambda}$;
\item if $u \in A_\lambda$, $u \Sigma_\lambda^+ = \Sigma_{u \lambda}^+$;
\item if $u \in A_\lambda$, $u \Pi_\lambda = \Pi_{u \lambda}$;
\item $u W_\lambda u^{-1} = W_{u \lambda}$;
\item if $s_\beta \in A_\lambda$ and $u \in A_\lambda$, $us_\beta \in A_{s_\beta\lambda}$.
\item $s_\beta \in A_\lambda$ if and only if $\beta \in \Pi-\Pi_\lambda$.
\end{enumerate} \end{lemma}
\begin{proof}
(a): for any $\alpha \in \Sigma_\lambda$, $(u \alpha)^\vee (u\lambda) = \alpha^\vee( u^{-1} u \lambda) = \alpha^\vee(\lambda) \in \mathbb{Z}$. Hence $u \alpha \in \Sigma_{u\lambda}$ and $u \Sigma_\lambda \subseteq \Sigma_{u\lambda}$ by the definition of $\Sigma_{u\lambda}$. Since both sets have the same size, equality holds.
(b): from (\ref{eqn:A_lambda_alt}), we know $u \Sigma_\lambda^+ \subseteq \Sigma^+$. Hence
\begin{equation*}
u \Sigma_\lambda^+
= u \Sigma_\lambda \cap \Sigma^+
= \Sigma_{u \lambda} \cap \Sigma^+
= \Sigma_{u \lambda}^+.
\end{equation*}
(c): elements in $\Pi_\lambda$ and $\Pi_{u\lambda}$ can be characterized by not being sums of other elements of $\Sigma_\lambda^+$ and $\Sigma_{u \lambda}^+$, respectively. Since $u: \Sigma_\lambda^+ \to \Sigma_{u \lambda}^+$ commutes with sums, it must send $\Pi_\lambda$ to $\Pi_{u \lambda}$.
(d): for any $w \in W_\lambda$,
\begin{equation*}
(u w u^{-1}) u \lambda - u \lambda
= u (w \lambda - \lambda)
\in u (\mathbb{Z} \cdot \Sigma) = \mathbb{Z} \cdot \Sigma.
\end{equation*}
Hence $u W_\lambda u^{-1} \subseteq W_{u \lambda}$ by definition of $W_{u\lambda}$. Since both sides have the same size, equality holds.
(e): observe
\begin{align*}
\Sigma_{us_\beta}^+ \cap \Sigma_{s_\beta\lambda}
&= (\Sigma^+ \cap (-(us_\beta)^{-1} \Sigma^+)) \cap \Sigma_{s_\beta\lambda} &&(\text{by definition of } \Sigma_{us_\beta}^+)\\
&= (\Sigma^+ \cap \Sigma_{s_\beta \lambda}) \cap (-(us_\beta)^{-1} \Sigma^+) && (\text{rearranging terms})\\
&= \Sigma_{s_\beta\lambda}^+ \cap (-(us_\beta)^{-1} \Sigma^+) && (\text{by definition of } \Sigma_{s_\beta \lambda}^+)\\
&= s_\beta \Sigma_\lambda^+ \cap (-(us_\beta)^{-1} \Sigma^+) && (\text{by part (b)}).
\end{align*}
Hence
\begin{align*}
us_\beta \in A_{s_\beta \lambda}
&\iff \Sigma_{us_\beta}^+ \cap \Sigma_{s_\beta\lambda} = \varnothing && (\text{by definition of } A_{s_\beta\lambda})\\
&\iff s_\beta \Sigma_\lambda^+ \cap (-(us_\beta)^{-1} \Sigma^+) = \varnothing && (\text{by the above observation})\\
&\iff s_\beta \Sigma_\lambda^+ \subseteq (us_\beta)^{-1} \Sigma^+ = s_\beta u^{-1} \Sigma^+\\
&\iff u\Sigma_\lambda^+ \subseteq \Sigma^+ && (\text{multiplying both sides by } us_\beta)\\
&\iff u \in A_\lambda && (\text{by (\ref{eqn:A_lambda_alt})})
\end{align*}
which is true by assumption.
(f): if $s_\beta \in A_\lambda$, then since $A_\lambda$ is a cross-section of $W/W_\lambda$ and $1$ is already in $W_\lambda$, we must have $s_\beta \not\in W_\lambda$. Hence
\begin{equation*}
-\beta^\vee(\lambda)\beta = (\lambda - \beta^\vee(\lambda)\beta) - \lambda = s_\beta \lambda - \lambda \not\in \mathbb{Z} \cdot \Sigma,
\end{equation*}
and $\beta^\vee(\lambda) \not\in \mathbb{Z}$. Therefore $\beta \not\in \Pi_\lambda$. In the other direction, suppose $\beta \not\in \Pi_\lambda$. Then $\beta \not\in \Sigma_\lambda$: being simple in $\Sigma$, $\beta$ would be simple in $\Sigma_\lambda$ if it belonged to $\Sigma_\lambda$. Since the only positive root moved out of $\Sigma^+$ by $s_\beta$ is $\beta$, and $\beta$ is not in $\Sigma_\lambda$, we see that $s_\beta \Sigma_\lambda^+ \subseteq \Sigma^+$. This implies $s_\beta \in A_\lambda$ by (\ref{eqn:A_lambda_alt}). \end{proof}
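\begin{example}
As a consistency check of (f) in the example in \textsection\ref{sec:examples}: there $\Pi_\lambda = \{\alpha+\beta,\gamma\}$, so $\Pi - \Pi_\lambda = \{\alpha,\beta\}$, and indeed $A_\lambda = \{1,s_\alpha,s_\beta,s_\gamma s_\beta\}$ contains $s_\alpha$ and $s_\beta$ but not $s_\gamma$. \end{example}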
In particular, (c) and (d) imply that conjugation by $u \in A_\lambda$ sends simple reflections in $W_\lambda$ to simple reflections in $W_{u \lambda}$. This implies:
\begin{corollary}\label{lem:Bruhat_order_conj}
Let $u \in A_\lambda$. Then conjugation by $u$ is an isomorphism of posets
\begin{equation*}
(W_\lambda,\leqslant_\lambda) \xrightarrow{\,\raisebox{-1.2ex}[0ex][1ex]{$\widesim[1]{}$\,}} (W_{u\lambda},\leqslant_{u\lambda}).
\end{equation*} \end{corollary}
We want to show that $A_\lambda$ consists of the unique shortest elements of the left $W_\lambda$-cosets. In fact we have a stronger statement: left multiplication by an element of $A_\lambda$ is a map from $W_\lambda$ to $W$ that preserves the Bruhat orders.
\begin{lemma}\label{lem:Bruhat_order_pre}
Let $w$, $s_\alpha \in W_\lambda$ with $\alpha \in \Sigma_\lambda^+$, and let $u \in A_\lambda$. Suppose $s_\alpha w <_\lambda w$. Then $u s_\alpha w < u w$. \end{lemma}
\begin{proof}
Consider the projection $\mathfrak{h}^* \twoheadrightarrow \operatorname{span} \Sigma_\lambda$ along the subspace $\bigcap_{\alpha \in \Sigma_\lambda} \ker \alpha$. For an element $\mu \in \mathfrak{h}^*$, we write $\bar \mu$ for its image under this projection.
An inequality in $W$ with respect to Bruhat order can be checked by a regular antidominant integral weight. That is, if $\mu$ is such a weight in $\mathfrak{h}^*$, then $u s_\alpha w < u w$ if and only if $u s_\alpha w \mu < u w \mu$, where the second inequality means that $u w \mu - u s_\alpha w \mu$ is nonzero and is a non-negative sum of simple roots. Similarly, $s_\alpha w <_\lambda w$ if and only if $s_\alpha w \bar \mu <_\lambda w \bar \mu$.
Therefore, if we write $\nu = \mu-\bar\mu$,
\begin{align*}
s_\alpha w <_\lambda w
&\iff s_\alpha w \bar\mu <_\lambda w \bar\mu\\
&\iff s_\alpha w \bar\mu + \sum_{\alpha_i \in \Pi_\lambda} a_i \alpha_i = w \bar\mu \text{ for some } a_i \in \mathbb{Z}_{\geqslant 0} \text{ not all zero}\\
&\iff s_\alpha w \bar\mu + \nu + \sum_{\alpha_i \in \Pi_\lambda} a_i \alpha_i = w \bar\mu + \nu \text{ for some } a_i \in \mathbb{Z}_{\geqslant 0} \text{ not all zero}\\
&\iff s_\alpha w \mu + \sum_{\alpha_i \in \Pi_\lambda} a_i \alpha_i = w \mu \text{ for some } a_i \in \mathbb{Z}_{\geqslant 0} \text{ not all zero}
\end{align*}
where the last step is because $\nu$ is annihilated by all coroots in $\Sigma_\lambda^\vee$.
Applying $u$ to both sides we get
\begin{equation*}
u s_\alpha w \mu + \sum_{\alpha_i \in \Pi_\lambda} a_i u \alpha_i = u w \mu \text{ for some } a_i \geqslant 0 \text{ not all zero}
\end{equation*}
which implies $u s_\alpha w \mu < u w \mu$, by the fact that $u \alpha_i \in u \Sigma_\lambda^+ \subseteq \Sigma^+$. Thus $u s_\alpha w < uw$ as desired. \end{proof}
\begin{corollary}\label{lem:Bruhat_order}
Let $v,w \in W_\lambda$ and $v \leqslant_\lambda w$. Then for any $u \in A_\lambda$, $uv \leqslant uw$. \end{corollary}
\begin{proof}
If equality holds, then the statement is trivial. Otherwise, by the definition of Bruhat order, there exist $\alpha_1,\ldots,\alpha_k \in \Sigma_\lambda^+$ such that
\begin{equation*}
v = s_{\alpha_k} \cdots s_{\alpha_1} w
<_\lambda \cdots <_\lambda
s_{\alpha_1} w <_\lambda w.
\end{equation*}
Applying \ref{lem:Bruhat_order_pre} to each inequality, we see
\begin{equation*}
uv = u s_{\alpha_k} \cdots s_{\alpha_1} w
< \cdots <
u s_{\alpha_1} w < u w
\end{equation*}
as desired. \end{proof}
\begin{corollary}\label{lem:u_smallest}
For any $u \in A_\lambda$, $u$ is the unique shortest element in $u W_\lambda$ with respect to the restriction of the Bruhat order to $u W_\lambda$. \end{corollary}
\begin{example}
In the example in \textsection\ref{sec:examples}, $A_\lambda = \{1, s_\alpha, s_\beta, s_\gamma s_\beta\}$. The left $W_\lambda$-cosets are $W_\lambda$ (whose elements are crossed-out in (\ref{diag:W(A3)})), $s_\alpha W_\lambda$, $s_\beta W_\lambda$, and $s_\gamma s_\beta W_\lambda$ (whose elements are underlined in (\ref{diag:W(A3)})). \end{example}
The next lemma is analogous to a similar statement for parabolic subgroups (Lemma \ref{lem:Theta_prelim}), which we will need on a few occasions. The proof is a standard argument using the lifting property \cite[2.2.7]{Bjorner-Brenti:Coxeter}.
\begin{lemma}\label{lem:u_in_db_coset}
Let $\alpha \in \Pi$, and $u \in A_\lambda$. Then either $s_\alpha u \in A_\lambda$, or $s_\alpha u \in u W_\lambda$. \end{lemma}
\begin{proof}
Suppose $s_\alpha u \not\in u W_\lambda$. Then $s_\alpha u$ lies in a different left $W_\lambda$-coset, i.e. $s_\alpha u = rv \in r W_\lambda$ for some $r \in A_\lambda$ with $r \neq u$ and some $v \in W_\lambda$. We need to show that $v = 1$; then $s_\alpha u = r \in A_\lambda$.
Write $w_1 \vartriangleleft w_2$ when $w_1 < w_2$ and $\ell(w_1) = \ell(w_2) -1$. From the relation $s_\alpha u = r v$, either $rv \vartriangleleft u$ or $rv \vartriangleright u$. Also $s_\alpha uv^{-1} = r$, so either $r \vartriangleleft uv^{-1}$ or $r \vartriangleright uv^{-1}$. From \ref{lem:u_smallest}, we also know $r \leqslant rv$ and $u \leqslant uv^{-1}$. We have the following four possibilities.
\begin{enumerate}[label=(\alph*)]
\item
$\begin{tikzcd}[row sep = small, column sep = small]
r \ar[r, phantom, "\vartriangleright" description] & uv^{-1} \ar[d, phantom, "\rotatebox{-90}{$\geqslant$}" description]\\
rv \ar[u, phantom, "\rotatebox{90}{$\geqslant$}" description] & u \ar[l, phantom, "\vartriangleleft" description]
\end{tikzcd}$
is impossible since it implies $u > u$.
\item
$\begin{tikzcd}[row sep = small, column sep = small]
r \ar[r, phantom, "\vartriangleright" description] & uv^{-1} \ar[d, phantom, "\rotatebox{-90}{$\geqslant$}" description]\\
rv \ar[u, phantom, "\rotatebox{90}{$\geqslant$}" description] & u \ar[l, phantom, "\vartriangleright" description]
\end{tikzcd}$.
If $rv > r$, then from $rv > r \vartriangleright uv^{-1} \geqslant u$ we see that $\ell(rv) \geqslant \ell(u) + 2$, which violates $rv \vartriangleright u$. Therefore we must have $rv = r$ and hence $v = 1$.
\item
$\begin{tikzcd}[row sep = small, column sep = small]
r \ar[r, phantom, "\vartriangleleft" description] & uv^{-1} \ar[d, phantom, "\rotatebox{-90}{$\geqslant$}" description]\\
rv \ar[u, phantom, "\rotatebox{90}{$\geqslant$}" description] & u \ar[l, phantom, "\vartriangleleft" description]
\end{tikzcd}$.
Same argument as in (b) shows that $v = 1$.
\item
$\begin{tikzcd}[row sep = small, column sep = small]
r \ar[r, phantom, "\vartriangleleft" description] & uv^{-1} \ar[d, phantom, "\rotatebox{-90}{$\geqslant$}" description]\\
rv \ar[u, phantom, "\rotatebox{90}{$\geqslant$}" description] & u \ar[l, phantom, "\vartriangleright" description]
\end{tikzcd}$.
Let $k = \ell(uv^{-1}) - \ell(u)$. Then
\begin{align*}
\ell(rv)
&\geqslant\ell(r) \\
&= \ell(uv^{-1}) -1\\
&= \ell(u) + k-1\\
&= \ell(rv) -1 + k-1\\
&= \ell(rv) + k-2
\end{align*}
and $0 \leqslant k \leqslant 2$. If $k = 2$, then $\ell(r) = \ell(rv)$ and $v = 1$. If $k = 0$, then $\ell(u) = \ell(uv^{-1})$ and $v = 1$. Suppose $k = 1$. Applying the lifting property twice, we see $r \leqslant u$ and $u \leqslant r$. So $r = u$, contradicting our assumption for $r$. Therefore we must have $v = 1$.
\end{enumerate}
Thus $v = 1$ in all cases, as desired. \end{proof}
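\begin{example}
To illustrate Lemma \ref{lem:u_in_db_coset} in the example in \textsection\ref{sec:examples}, take $u = s_\beta \in A_\lambda$. For the simple root $\gamma$ we are in the first case: $s_\gamma s_\beta \in A_\lambda$. For the simple root $\alpha$ we are in the second case: $s_\alpha s_\beta \not\in A_\lambda$, and indeed $s_\alpha s_\beta = s_\beta s_{\alpha+\beta} \in s_\beta W_\lambda$, since $s_\beta s_\alpha s_\beta = s_{s_\beta \alpha} = s_{\alpha+\beta}$ and $\alpha+\beta \in \Pi_\lambda$. \end{example}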
\subsection{Notations and preliminaries on $W_\Theta \backslash W$}\label{subsec:WTheta_prelim}
We recall some well-known facts about right $W_\Theta$-cosets and their partial orders. Details of these facts can be found in \cite[Chapter 6]{Milicic:Localization}. See also \cite[\textsection 2.4 and \textsection 2.5]{Bjorner-Brenti:Coxeter} for proofs of analogous results for left $W_\Theta$-cosets.
Write $w_\Theta \in W_\Theta$ for the longest element. The set \begin{equation*}
{}^\Theta W = \{ w \in W \mid w^{-1} \Theta \subseteq - \Sigma^+\} \end{equation*} is a cross-section of $W_\Theta \backslash W$ consisting of the longest elements in each coset, and \begin{equation}\label{eqn:w_Theta_Theta_W}
w_\Theta {}^\Theta W = \{w \in W \mid w^{-1} \Theta \subseteq \Sigma^+\} \end{equation} is a cross-section consisting of the shortest elements in each coset. For a right $W_\Theta$-coset $C$, we write $w^C$ for the corresponding element in ${}^\Theta W$. The restriction of Bruhat order on the set ${}^\Theta W$ defines a partial order $\leqslant$ on $W_\Theta \backslash W$. We will use the phrase ``the length of $C$'' to refer to the length of the element $w^C$. If ${\Theta(u,\lambda)}$ is a subset of $\Pi_\lambda$ defining the parabolic subgroup $W_{\lambda,\Theta(u,\lambda)} \subseteq W_\lambda$, we write ``$\leqslant_{u,\lambda}$'' for the partial order on $W_{\lambda,{\Theta(u,\lambda)}} \backslash W_\lambda$.
\begin{example}
In the example in \textsection\ref{sec:examples} (specifically in (\ref{diag:W(A3)})), Weyl group elements are grouped together based on the partition given by $W_\Theta \backslash W$. Elements that are contained in ${}^\Theta W$ are surrounded by a shape. \end{example}
The following facts will be used throughout this section.
\begin{lemma}\label{lem:W_Theta_permutes_roots}
Any element in $W_\Theta$ permutes positive roots outside $\Sigma_\Theta^+$, that is, it permutes the set $\Sigma^+ - \Sigma_\Theta^+$. \end{lemma}
\begin{proof}
It suffices to prove the statement for a simple reflection $s_\beta \in W_\Theta$. Since $s_\beta$ permutes $\Sigma^+ - \{\pm\beta\}$ and also permutes $\Sigma - \Sigma_\Theta$, it permutes $(\Sigma^+ - \{\pm\beta\}) \cap (\Sigma - \Sigma_\Theta)$ which equals $\Sigma^+ - \Sigma_\Theta^+$. \end{proof}
\begin{lemma}\label{lem:Theta_prelim}
Let $C$ be a right $W_\Theta$-coset and $\alpha \in \Pi$. Then exactly one of the following happens.
\begin{enumerate}[label=(\alph*)]
\item $C s_\alpha > C$. In this case $w^{C s_\alpha} = w^C s_\alpha$, and for any $w \in C$, $w s_\alpha > w$.
\item $C s_\alpha = C$.
\item $C s_\alpha < C$. In this case $w^{C s_\alpha} = w^C s_\alpha$, and for any $w \in C$, $w s_\alpha < w$.
\end{enumerate}
Moreover, the identity coset $W_\Theta$ is the only right $W_\Theta$-coset $C$ such that $C s_\alpha \geqslant C$ for all $\alpha \in \Pi$. \end{lemma}
\begin{proof}
With minor modifications, the results in \textsection\ref{subsec:Bruhat_W/Wlambda} can be translated to the case where we replace $\Sigma_\lambda$ by $\Sigma_\Theta$, left $W_\lambda$-cosets by right $W_\Theta$-cosets, and $A_\lambda$ by ${}^\Theta W$. Under these replacements, Lemmas \ref{lem:Bruhat_order} and \ref{lem:u_in_db_coset} say the following:
\begin{itemize}
\item[(i)] Let $v$, $w \in W_\Theta$ and $v \leqslant w$. Then for any $C \in W_\Theta \backslash W$, $v w^C \geqslant w w^C$.
\item[(ii)] Let $\alpha \in \Pi$ and $C \in W_\Theta \backslash W$. Then either $w^C s_\alpha = w^{C s_\alpha}$ or $w^C s_\alpha \in C$.
\end{itemize}
The second case of (ii) ($w^C s_\alpha \in C$) corresponds to (b). Suppose we are in the first case of (ii), that is $w^C s_\alpha = w^{C s_\alpha}$. Then $C s_\alpha \neq C$: otherwise we would have $w^C s_\alpha = w^{C s_\alpha} = w^C$, which is impossible. Suppose $Cs_\alpha > C$, i.e. $w^{C s_\alpha} > w^C$. We want to show that $w s_\alpha > w$ for any $w \in C$.
Since $C = W_\Theta w^C$, we can write $w = v w^C$ for some unique $v \in W_\Theta$. We will do induction on $\ell(v)$. For the base case $\ell(v) = 1$, $v = s_\beta$ is simple. (i) implies $s_\beta w^{C s_\alpha} < w^{C s_\alpha}$ and $s_\beta w^C < w^C$. Combined with the assumption $w^{Cs_\alpha} > w^C$, we obtain
\begin{equation*}
\begin{tikzcd}[row sep = small, column sep = small]
s_\beta w^{C s_\alpha} \ar[r, phantom, "<" description] & w^{C s_\alpha} \ar[d, phantom, "\rotatebox{-90}{$>$}" description]\\
s_\beta w^C & w^C \ar[l, phantom, "<" description]
\end{tikzcd}.
\end{equation*}
If $s_\beta w^C s_\alpha < s_\beta w^C$, then the chain of inequalities
\begin{equation*}
\begin{tikzcd}[row sep = small, column sep = small]
s_\beta w^{C s_\alpha} & w^{C s_\alpha} \ar[d, phantom, "\rotatebox{-90}{$>$}" description]\\
s_\beta w^C \ar[u, phantom, "\rotatebox{90}{$>$}" description] & w^C \ar[l, phantom, "<" description]
\end{tikzcd}
\end{equation*}
would imply that $s_\beta w^{Cs_\alpha}$ and $w^{Cs_\alpha}$ have length difference $\geqslant 3$, which is impossible since their lengths only differ by $1$. Therefore $w s_\alpha = s_\beta w^C s_\alpha > s_\beta w^C = w$. This establishes the base case.
Now suppose $v = s_\beta r>r$ for some $s_\beta,r \in W_\Theta$, with $s_\beta$ simple. Induction hypothesis says $r w^{Cs_\alpha} > r w^C$. Invoking (i) again, we obtain the following inequalities
\begin{equation*}
\begin{tikzcd}[row sep = small, column sep = small]
s_\beta r w^{C s_\alpha} \ar[r, phantom, "<" description] & r w^{C s_\alpha} \ar[d, phantom, "\rotatebox{-90}{$>$}" description]\\
s_\beta r w^C & r w^C \ar[l, phantom, "<" description]
\end{tikzcd}.
\end{equation*}
Arguing similarly as in the base case, it is impossible to have $s_\beta r w^{C s_\alpha} < s_\beta r w^C$. Therefore $w s_\alpha = s_\beta r w^{C s_\alpha} > s_\beta r w^C = w$. This proves the additional claim in case (a). An identical argument establishes the claim in case (c).
It remains to prove the last statement. Suppose $C$ satisfies $C s_\alpha \geqslant C$ for all $\alpha \in \Pi$. Let $w \in C$ be an element of minimal length. If $w \neq 1$, then $w>1$, and there is a simple reflection $s_\alpha$ with $w s_\alpha < w$. By minimality of $w$, $w s_\alpha$ is in a different coset. This forces us to be in case (c), which contradicts $C s_\alpha \geqslant C$. So $w = 1$ and $C = W_\Theta$. \end{proof}
This immediately implies
\begin{corollary}\label{lem:Bruhat_order_rcoset}
Let $C$, $D$ be two right $W_\Theta$-cosets. Let $v \in D$, $w \in C$. If $v \leqslant w$, then $D \leqslant C$. \end{corollary}
\begin{proof}
We can choose simple roots $\alpha_i$ so that
\begin{equation*}
w = v s_{\alpha_k} \cdots s_{\alpha_2} s_{\alpha_1} \geqslant
v s_{\alpha_k} \cdots s_{\alpha_2} \geqslant \cdots \geqslant v.
\end{equation*}
By the lemma, this implies
\begin{equation*}
C \geqslant C s_{\alpha_1} \geqslant \cdots \geqslant C s_{\alpha_1} \cdots s_{\alpha_k} = D. \qedhere
\end{equation*} \end{proof}
\subsection{A cross-section of $W_\Theta \backslash W / W_\lambda$}\label{subsec:db_coset_xsec}
Define the set \begin{equation*}
A_{\Theta,\lambda} = A_\lambda \cap (w_\Theta {}^\Theta W). \end{equation*} We will show (in Corollary \ref{thm:cross-section_db_coset}) that this is a cross-section of $W_\Theta \backslash W / W_\lambda$ consisting of the unique shortest elements in each double coset. Later results, as well as the main theorem of the paper, will often be formulated using this set.
\begin{lemma}\label{lem:u_in_same_right_coset}
Let $u$, $r \in A_\lambda$. Suppose $u$ and $r$ are in the same $(W_\Theta,W_\lambda)$-coset. Then $u$ and $r$ are contained in the same right $W_\Theta$-coset. \end{lemma}
\begin{proof}
The case $u = r$ is trivial. Assume $u \neq r$. By assumption, $r = wuv^{-1}$ for some $w \in W_\Theta$ and $v \in W_\lambda$. We will do induction on $\ell(w)$.
Consider the case $\ell(w) = 1$. Then $w = s_\alpha$ for some $\alpha \in \Theta$, and $s_\alpha u = r v$. Since $u \neq r$, we have $s_\alpha u \not\in u W_\lambda$, so \ref{lem:u_in_db_coset} gives $s_\alpha u \in A_\lambda$; as $s_\alpha u \in r W_\lambda$ and $A_\lambda$ is a cross-section, $v = 1$. Hence $r = s_\alpha u \in W_\Theta u$, which is in the same right $W_\Theta$-coset as $u$.
Consider $\ell(w) = k > 1$. Write $w = s_\alpha w'$ for some $\alpha \in \Theta$ and some $w' \in W_\Theta$ so that $\ell(w) = \ell(w') + 1$. Then $w' u = (s_\alpha r) v$. We have two possibilities.
\begin{enumerate}[label=(\alph*)]
\item $s_\alpha r \in r W_\lambda$. Then there exists $v' \in W_\lambda$ such that $s_\alpha r = r v'$, so $w' u (v' v)^{-1} = r$. Since $\ell(w') \leqslant k-1$, by the induction assumption, $u$ and $r$ are in the same right $W_\Theta$-coset.
\item $s_\alpha r \not\in r W_\lambda$. Then by \ref{lem:u_in_db_coset}, $s_\alpha r \in A_\lambda$. From the equation $w' u v^{-1} = s_\alpha r$, $\ell(w') \leqslant k-1$ and the induction assumption, we see that $u$ and $s_\alpha r$ are in the same right $W_\Theta$-coset. Since $s_\alpha r$ and $r$ are in the same right $W_\Theta$-coset, so are $u$ and $r$. \qedhere
\end{enumerate} \end{proof}
\begin{proposition}\label{lem:unique_smallest_right_coset}
Consider any double coset $W_\Theta w W_\lambda$ in $W$.
\begin{enumerate}[label=(\alph*)]
\item $W_\Theta w W_\lambda$ contains a unique smallest right $W_\Theta$-coset $C$, in the sense that $C \leqslant C'$ for any $C' \in W_\Theta \backslash W_\Theta w W_\lambda$.
\item $A_\lambda \cap (W_\Theta w W_\lambda) \subseteq C$.
\item The unique shortest element in $C$ is in $A_\lambda$.
\end{enumerate} \end{proposition}
\begin{proof}
By \ref{lem:u_in_same_right_coset}, there exists a right $W_\Theta$-coset $C$, contained in $W_\Theta w W_\lambda$, such that $A_\lambda \cap (W_\Theta w W_\lambda) \subseteq C$. Let $y$ be the unique shortest element in $C$. Say $y \in u W_\lambda$ for some $u \in A_\lambda$. Then $u \leqslant y$ by \ref{lem:u_smallest}. If $y \neq u$, we will have $u < y$, and hence by minimality of $y$, $u$ is in a different right $W_\Theta$-coset than $y$, contradicting the construction of $C$. Hence we must have $y = u$, i.e. the unique shortest element in $C$ is in $A_\lambda$. Lastly, for any other right $W_\Theta$-coset $C'$ in $W_\Theta w W_\lambda$, let $y' $ be its unique shortest element. $y'$ is contained in one of the left $W_\lambda$-cosets, say $y' \in u' W_\lambda$ for some $u' \in A_\lambda$. Then $u' \leqslant y'$ by \ref{lem:u_smallest}. Also $u' \neq y'$ (otherwise $C' \ni y' = u' \in C$ which would imply $C = C'$). Hence $u' < y'$. Therefore $C < C'$ by \ref{lem:Bruhat_order_rcoset}. Thus $C$ is the unique smallest right $W_\Theta$-coset in $W_\Theta w W_\lambda$. \end{proof}
\begin{corollary}\label{thm:cross-section_db_coset}
The set $A_{\Theta,\lambda} = A_\lambda \cap (w_\Theta {}^\Theta W)$ is a cross-section of $W_\Theta \backslash W / W_\lambda$ consisting of the unique shortest elements in each double coset. For each $u \in A_{\Theta,\lambda}$, $W_\Theta u$ is the unique smallest right $W_\Theta$-coset in $W_\Theta u W_\lambda$. \end{corollary}
\begin{proof}
Take any double coset $W_\Theta w W_\lambda$. By \ref{lem:unique_smallest_right_coset}(c), if we take the shortest element $u$ in the smallest right $W_\Theta$-coset in this double coset, then $u \in A_\lambda$. Hence $u \in A_\lambda \cap (w_\Theta {}^\Theta W)$. On the other hand, by \ref{lem:unique_smallest_right_coset}(b), any other right $W_\Theta$-coset in $W_\Theta w W_\lambda$ has empty intersection with $A_\lambda$. Therefore $A_\lambda \cap (w_\Theta {}^\Theta W) \cap W_\Theta w W_\lambda = \{u\}$. This shows that $A_\lambda \cap (w_\Theta {}^\Theta W)$ is a cross-section. \end{proof}
\begin{example}
In the example in \textsection\ref{sec:examples}, $A_{\Theta,\lambda} = \{1, s_\gamma s_\beta\}$. There are two $(W_\Theta,W_\lambda)$-cosets: $W_\Theta W_\lambda$ and $W_\Theta s_\gamma s_\beta W_\lambda$. Elements in $W_\Theta s_\gamma s_\beta W_\lambda$ are underlined in (\ref{diag:W(A3)}). \end{example}
\subsection{Integral models}\label{subsec:int_model}
By the results of the previous subsection, for each double coset $W_\Theta u W_\lambda$ one can choose the representative $u$ to lie in $A_{\Theta,\lambda}$. Then $u W_\lambda$ is contained in $W_\Theta u W_\lambda$ and it meets several right $W_\Theta$-cosets. It will turn out that these intersections are translates of the right cosets of a parabolic subgroup of $W_\lambda$, and the Whittaker Kazhdan-Lusztig polynomials attached to this parabolic determine the coefficients in the character formula.
\begin{lemma}
Let $u \in A_{\Theta,\lambda}$. Then $\Sigma_\Theta \cap \Pi_{u\lambda}$ is a set of simple roots for the root system $\Sigma_\Theta \cap \Sigma_{u\lambda}$. \end{lemma}
The intersection $\Sigma_\Theta \cap \Sigma_{u\lambda}$ is a root system because both $\Sigma_\Theta$ and $\Sigma_{u\lambda}$ are root systems.
\begin{proof}
Let $\beta \in \Sigma_\Theta \cap \Sigma_{u\lambda}$; replacing $\beta$ by $-\beta$ if necessary, we may assume $\beta$ is positive. Write $\beta$ as a $\mathbb{Z}_{\geqslant 0}$-linear combination of roots in $\Pi_{u\lambda}$. If one of the summands is from $\Pi_{u\lambda} - \Sigma_\Theta$, then expanding further in terms of roots in $\Pi$ (all coefficients remain non-negative, so no cancellation occurs), there is a summand that comes from $\Pi - \Theta$. This implies $\beta \not\in \Sigma_\Theta$, a contradiction. Hence $\beta$ is a $\mathbb{Z}_{\geqslant 0}$-linear combination of roots from $\Sigma_\Theta \cap \Pi_{u\lambda}$. Therefore $\Sigma_\Theta \cap \Pi_{u \lambda}$ spans $\Sigma_\Theta \cap \Sigma_{u\lambda}$. Since $\Sigma_\Theta \cap \Pi_{u\lambda}$ is a subset of simple roots in $\Sigma_{u\lambda}$, roots contained in $\Sigma_\Theta \cap \Pi_{u \lambda}$ remain simple in $\Sigma_\Theta \cap \Sigma_{u\lambda}$. Thus $\Sigma_\Theta \cap \Pi_{u \lambda}$ is a set of simple roots for $\Sigma_\Theta \cap \Sigma_{u\lambda}$. \end{proof}
\sloppy Write $W_{u\lambda, \Sigma_\Theta \cap \Pi_{u \lambda}}$ for the parabolic subgroup of $W_{u\lambda}$ corresponding to $\Sigma_\Theta \cap \Pi_{u \lambda}$. Then $W_{u\lambda, \Sigma_\Theta \cap \Pi_{u \lambda}}$ is the Weyl group of $\Sigma_\Theta \cap \Sigma_{u\lambda}$ and is a subgroup of $W_\Theta \cap W_{u\lambda}$.
\begin{proposition}\label{lem:int_Whittaker_model}
For any $u \in A_{\Theta,\lambda}$, $W_\Theta \cap W_{u\lambda} = W_{u \lambda, \Sigma_\Theta \cap \Pi_{u \lambda}}$. In particular, $W_\Theta \cap W_{u\lambda}$ is a parabolic subgroup of $W_{u \lambda}$. \end{proposition}
\begin{proof}
The subgroup $W_{u \lambda, \Sigma_\Theta \cap \Pi_{u \lambda}}$ is certainly contained in $W_\Theta \cap W_{u\lambda}$. Let $w \in W_\Theta \cap W_{u \lambda}$. Being in $W_\Theta$, $w$ permutes roots in $\Sigma_\Theta$; being in $W_{u\lambda}$, $w$ permutes roots in $\Sigma_{u\lambda}$. Hence $w$ permutes roots in $\Sigma_\Theta \cap \Sigma_{u\lambda}$, and it sends the set $\Sigma^+ \cap (\Sigma_\Theta \cap \Sigma_{u \lambda})$ of positive roots in $\Sigma_\Theta \cap \Sigma_{u \lambda}$ to another set of positive roots $w \Sigma^+ \cap (\Sigma_\Theta \cap \Sigma_{u \lambda})$. Since $W_{u\lambda, \Sigma_\Theta \cap \Pi_{u \lambda}}$ is the Weyl group of $\Sigma_\Theta \cap \Sigma_{u\lambda}$, there exists a unique element $v \in W_{u\lambda, \Sigma_\Theta \cap \Pi_{u \lambda}}$ that sends $w \Sigma^+ \cap (\Sigma_\Theta \cap \Sigma_{u \lambda})$ back to $\Sigma^+ \cap (\Sigma_\Theta \cap \Sigma_{u \lambda})$. Hence $vw$ permutes $\Sigma^+ \cap (\Sigma_\Theta \cap \Sigma_{u \lambda}) = \Sigma_{u\lambda}^+ \cap \Sigma_\Theta^+$. On the other hand, since $vw \in W_\Theta$, by \ref{lem:W_Theta_permutes_roots} it permutes $\Sigma^+ - \Sigma_\Theta^+$; $vw$ is also in $W_{u \lambda}$, so it permutes $\Sigma_{u\lambda}$. Hence, it permutes $(\Sigma^+ - \Sigma_\Theta^+) \cap \Sigma_{u\lambda} = \Sigma_{u\lambda}^+ - \Sigma_\Theta^+$. As a result, $vw$ permutes
\begin{equation*}
\big( \Sigma_{u\lambda}^+ \cap \Sigma_\Theta^+ \big) \cup \big( \Sigma_{u\lambda}^+ - \Sigma_\Theta^+ \big)
= \Sigma_{u\lambda}^+.
\end{equation*}
Since $W_{u\lambda}$ acts simply transitively on the set of sets of positive roots in $\Sigma_{u\lambda}$, we must have $vw = 1$. Therefore $w = v^{-1} \in W_{u\lambda,\Sigma_\Theta \cap \Pi_{u \lambda}}$. Thus $W_\Theta \cap W_{u\lambda} = W_{u \lambda, \Sigma_\Theta \cap \Pi_{u \lambda}}$, as desired. \end{proof}
For $u \in A_{\Theta,\lambda}$, write \begin{equation*}
\Theta(u,\lambda) = u^{-1} (\Sigma_\Theta \cap \Pi_{u\lambda}) = u^{-1} \Sigma_\Theta \cap \Pi_\lambda. \end{equation*} Since $\Sigma_\Theta \cap \Pi_{u\lambda}$ is a subset of simple roots in $\Sigma_{u\lambda}$, ${\Theta(u,\lambda)}$ is a subset of simple roots in $u^{-1} \Sigma_{u \lambda} = \Sigma_\lambda$. Write $W_{\lambda,{\Theta(u,\lambda)}}$ for the parabolic subgroup of $W_\lambda$ corresponding to ${\Theta(u,\lambda)}$.
\begin{example}
Recall that $A_{\Theta,\lambda} = \{1,s_\gamma s_\beta\}$ in the example in \textsection\ref{sec:examples}. There,
\begin{align*}
\Theta(1,\lambda) &= \{\alpha+\beta\}, & W_{\lambda,\Theta(1,\lambda)} &= \{1, s_{\alpha+\beta}\},\\
\Theta(s_\gamma s_\beta,\lambda) &= \Pi_\lambda,&
W_{\lambda,\Theta(s_\gamma s_\beta,\lambda)} &= W_\lambda,
\end{align*}
where $W_\lambda$ is the type $A_2$ Weyl group generated by $s_{\alpha+\beta}$ and $s_\gamma$. \end{example}
\begin{proposition}\label{thm:int_Whittaker_model}
Let $u \in A_{\Theta,\lambda}$. The left-multiplication-by-$u$ map
\begin{equation*}
W_\lambda \xrightarrow{\,\raisebox{-1.2ex}[0ex][1ex]{$\widesim[1]{}$\,}} u W_\lambda
\end{equation*}
induces bijections
\begin{equation*}
\begin{array}{ccccc}
W_{\lambda,\Theta(u,\lambda)} \backslash W_\lambda &\xrightarrow{\,\raisebox{-1.2ex}[0ex][1ex]{$\widesim[1]{}$\,}}& \big\{C \cap u W_\lambda \mid C \in W_\Theta \backslash W_\Theta u W_\lambda \big\} &\xrightarrow{\,\raisebox{-1.2ex}[0ex][1ex]{$\widesim[1]{}$\,}}& W_\Theta \backslash W_\Theta u W_\lambda\\
W_{\lambda,\Theta(u,\lambda)} v &\mapsto& u W_{\lambda,\Theta(u,\lambda)} v = W_\Theta uv \cap u W_\lambda &\mapsto& W_\Theta uv.
\end{array}
\end{equation*}
Moreover, this map preserves the partial orders on cosets: if $C',D' \in W_{\lambda,{\Theta(u,\lambda)}} \backslash W_\lambda$ are sent to $C \cap u W_\lambda$ and $D \cap u W_\lambda$, respectively, then $D' \leqslant_{u,\lambda} C'$ implies $D \leqslant C$. \end{proposition}
\begin{proof}
Consider the smallest right $W_\Theta$-coset $W_\Theta u$ of $W_\Theta u W_\lambda$.
\begin{align*}
W_\Theta u \cap u W_\lambda
&= (W_\Theta \cap u W_\lambda u^{-1}) u\\
&= (W_\Theta \cap W_{u\lambda} )u\\
&= W_{u\lambda, \Sigma_\Theta \cap \Pi_{u \lambda}} u\\
&= W_{u\lambda, u{\Theta(u,\lambda)}} u\\
&= (u W_{\lambda,{\Theta(u,\lambda)}} u^{-1}) u\\
&= u W_{\lambda ,{\Theta(u,\lambda)}}.
\end{align*}
Hence left multiplication by $u$ sends $W_{\lambda ,{\Theta(u,\lambda)}}$ to $W_\Theta u \cap u W_\lambda$. Since left multiplication by $u$ commutes with right multiplication by elements of $W_\lambda$, it sends right $W_{\lambda,{\Theta(u,\lambda)}}$-cosets in $W_\lambda$ to right $W_\lambda$-translates of $W_\Theta u \cap u W_\lambda$, which gives us $C \cap u W_\lambda$ for various right $W_\Theta$-cosets $C$ in $W_\Theta u W_\lambda$. Moreover, any right $W_\Theta$-coset $C$ in $W_\Theta u W_\lambda$ is obtained as a right $W_\lambda$-translation of $W_\Theta u$, hence the intersection $C \cap u W_\lambda$ is necessarily the image of a right $W_{\lambda,{\Theta(u,\lambda)}}$-coset.
To show that this map is order preserving, take two right $W_{\lambda,{\Theta(u,\lambda)}}$-cosets $C'$ and $D'$ such that $D' \leqslant_{u,\lambda} C'$. This means that the $\leqslant_\lambda$-longest elements $v^{D'}$, $v^{C'}$ of $D'$ and $C'$ satisfy $v^{D'} \leqslant_\lambda v^{C'}$. Since left multiplication by $u$ preserves Bruhat orders (Corollary \ref{lem:Bruhat_order}), $u v^{D'} \leqslant u v^{C'}$. Therefore $D \leqslant C$ by \ref{lem:Bruhat_order_rcoset}. \end{proof}
If we take the union of the maps in Proposition \ref{thm:int_Whittaker_model} for each $u$, we obtain the following bijection.
\begin{corollary}\label{lem:right_coset_partition}
As $u$ ranges over $A_{\Theta,\lambda}$, left multiplication by $W_\Theta u$ defines a bijection
\begin{equation*}
\ind_\lambda: \bigcup_{u \in A_{\Theta,\lambda}} W_{\lambda, {\Theta(u,\lambda)}} \backslash W_\lambda \xrightarrow{\,\raisebox{-1.2ex}[0ex][1ex]{$\widesim[1]{}$\,}} W_\Theta \backslash W,\quad
W_{\lambda, {\Theta(u,\lambda)}} v \mapsto W_\Theta uv
\end{equation*}
which is order-preserving when restricted to each $W_{\lambda, {\Theta(u,\lambda)}} \backslash W_\lambda$ and commutes with right multiplication by $W_\lambda$. The image of $W_{\lambda, {\Theta(u,\lambda)}} \backslash W_\lambda$ is equal to $W_\Theta \backslash W_\Theta u W_\lambda$. \end{corollary}
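\begin{example}
As a quick count in the example in \textsection\ref{sec:examples}: $u$ ranges over $A_{\Theta,\lambda} = \{1, s_\gamma s_\beta\}$, the quotient $W_{\lambda,\Theta(1,\lambda)} \backslash W_\lambda$ has $6/2 = 3$ elements, and $W_{\lambda,\Theta(s_\gamma s_\beta,\lambda)} \backslash W_\lambda = W_\lambda \backslash W_\lambda$ has one element, in agreement with $|W_\Theta \backslash W| = 24/6 = 4$. \end{example}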
\begin{notation}\label{not:right_coset_partition}
We will write
\begin{equation*}
(-)|_\lambda: W_\Theta \backslash W \to \bigcup_{u \in A_{\Theta,\lambda}} W_{\lambda, {\Theta(u,\lambda)}} \backslash W_\lambda
\end{equation*}
for the inverse map. If $C$ and $D$ are both sent to $W_{\lambda,\Theta(u,\lambda)} \backslash W_\lambda$, we will write $C \leqslant_{u,\lambda} D$ for $C|_\lambda \leqslant_{u,\lambda} D|_\lambda$. \end{notation}
The map $(-)|_\lambda$ plays an important role towards our goal. As explained in the introduction, standard and irreducible Whittaker modules in $\mathcal{N}_{\theta,\eta}$ are parameterized by $W_\Theta \backslash W$, but compared to the integral case, $\mathcal{N}_{\theta,\eta}$ is divided into smaller blocks. The map $(-)|_\lambda$ reflects this division: on the level of standard/irreducible modules, modules that correspond to various cosets $C$ in the same $(W_\Theta,W_\lambda)$-coset are in the same block, and each block looks like an integral Whittaker category (at least on the level of standard and irreducible modules) modeled by $W_{\lambda, {\Theta(u,\lambda)}} \backslash W_\lambda$.
\begin{example}
Let us look at the double coset $W_\Theta W_\lambda$ in the example in \textsection\ref{sec:examples}. There are three right $W_\Theta$-cosets in $W_\Theta W_\lambda$. The longest elements in these cosets are $s_\alpha s_\beta s_\alpha$, $s_\alpha s_\beta s_\alpha s_\gamma$, and $w_0$, respectively. In $W_\lambda$, there are also three right $W_{\lambda,\Theta(1,\lambda)}$-cosets whose longest elements are $s_{\alpha+\beta}$, $s_{\alpha+\beta}s_\gamma$, and $s_{\alpha+\beta+\gamma}$, respectively. The map
\begin{equation*}
(-)|_\lambda: W_\Theta \backslash W_\Theta W_\lambda \xrightarrow{\,\raisebox{-1.2ex}[0ex][1ex]{$\widesim[1]{}$\,}} W_{\lambda,\Theta(1,\lambda)} \backslash W_\lambda
\end{equation*}
is visualized in (\ref{diag:(-)|_lambda_A3}). \end{example}
We also need to understand how $(-)|_\lambda$ behaves under right multiplication by a non-integral simple reflection. This reflects the effect of non-integral intertwining functors which will be defined in \textsection\ref{sec:geom} and will be used in the algorithm. Roughly speaking, right multiplication by a non-integral simple reflection translates $(W_\Theta,W_\lambda)$-coset structures to $(W_\Theta,W_{s_\beta\lambda})$-coset structures, while conjugation by the same reflection translates right $W_{\lambda,\Theta(u,\lambda)}$-coset structures in $W_\lambda$ to right $W_{s_\beta \lambda, \Theta(r,s_\beta \lambda)}$-coset structures in $W_{s_\beta \lambda}$.
\begin{lemma}\label{lem:Is_pres_lowest_db_coset}
Let $u \in A_{\Theta,\lambda}$, $\beta \in \Pi - \Pi_\lambda$. Then $W_\Theta u s_\beta$ is the smallest right $W_\Theta$-coset in $W_\Theta u s_\beta W_{s_\beta \lambda} = W_\Theta u W_\lambda s_\beta$. \end{lemma}
\begin{proof}
By Lemma \ref{lem:non-int_refl_subsys}(e)(f), $u s_\beta \in A_{s_\beta \lambda}$. Proposition \ref{lem:unique_smallest_right_coset} says that elements of $A_{s_\beta\lambda}$ lie in the smallest right $W_\Theta$-coset of the double coset containing them. So the right coset $W_\Theta u s_\beta$ containing $u s_\beta$ must be the smallest in the double coset $W_\Theta u s_\beta W_{s_\beta \lambda}$. Finally, the identification $W_\Theta u s_\beta W_{s_\beta \lambda} = W_\Theta u W_\lambda s_\beta$ follows from $s_\beta W_{s_\beta \lambda} s_\beta = W_\lambda$, which holds by \ref{lem:non-int_refl_subsys}(d). This proves the lemma. \end{proof}
Rephrasing slightly and using \ref{lem:unique_smallest_right_coset} again, we get
\begin{corollary}
Let $u \in A_{\Theta,\lambda}$, $\beta \in \Pi - \Pi_\lambda$. If $r$ denotes the unique element in $ A_{\Theta,s_\beta \lambda} \cap W_\Theta us_\beta W_{s_\beta \lambda}$, then $W_\Theta r = W_\Theta u s_\beta$. \end{corollary}
\begin{proposition}\label{lem:Is_right_coset}
Let $u \in A_{\Theta,\lambda}$, $\beta \in \Pi - \Pi_\lambda$. Let $r$ be the unique element in $ A_{\Theta,s_\beta \lambda} \cap W_\Theta u s_\beta W_{s_\beta \lambda}$. Then conjugation by $s_\beta$ is a bijection
\begin{equation*}
s_\beta (-) s_\beta : W_{s_\beta \lambda, \Theta(r,s_\beta \lambda)} \backslash W_{s_\beta \lambda} \xrightarrow{\,\raisebox{-1.2ex}[0ex][1ex]{$\widesim[1]{}$\,}}
W_{\lambda,\Theta(u,\lambda)} \backslash W_\lambda
\end{equation*}
that preserves the partial orders on right cosets. Moreover, the following diagram commutes
\begin{equation}\label{eqn:Is_right_coset}
\begin{tikzcd}
W_{s_\beta \lambda, \Theta(r,s_\beta \lambda)} \backslash W_{s_\beta \lambda} \ar[d,"\ind_{s_\beta\lambda}"'] \ar[r,"s_\beta (-) s_\beta"] &
W_{\lambda,\Theta(u,\lambda)} \backslash W_\lambda \ar[d,"\ind_\lambda"]\\
W_\Theta \backslash W \ar[r," (-) s_\beta"] & W_\Theta \backslash W
\end{tikzcd}.
\end{equation}
In particular, for any $C$, $D \in W_\Theta \backslash W_\Theta r W_{s_\beta \lambda}$,
\begin{equation*}
D \leqslant_{r,s_\beta \lambda} C \iff D s_\beta \leqslant_{u,\lambda} C s_\beta.
\end{equation*} \end{proposition}
\begin{proof}
By \ref{lem:Bruhat_order_conj}, conjugation by $s_\beta$ is an isomorphism of groups and posets between $(W_{s_\beta \lambda},\leqslant_{s_\beta\lambda})$ and $(W_\lambda,\leqslant_\lambda)$. By the preceding corollary, there exists $w \in W_\Theta$ such that $w r = u s_\beta$. Therefore
\begin{align*}
s_\beta \Theta(u,\lambda)
&= s_\beta (u^{-1} \Sigma_\Theta \cap \Pi_\lambda)\\
&= (us_\beta)^{-1} \Sigma_\Theta \cap s_\beta \Pi_\lambda\\
&= (wr)^{-1} \Sigma_\Theta \cap \Pi_{s_\beta \lambda}\\
&= r{}^{-1} (w^{-1} \Sigma_\Theta) \cap \Pi_{s_\beta \lambda}\\
&= r{}^{-1} \Sigma_\Theta \cap \Pi_{s_\beta \lambda}\\
&= \Theta(r,s_\beta \lambda).
\end{align*}
Hence conjugation by $s_\beta$ sends $W_{s_\beta \lambda, \Theta(r,s_\beta \lambda)}$ to $W_{\lambda,\Theta(u,\lambda)}$ and therefore induces a bijection from $W_{s_\beta \lambda,\Theta(r,s_\beta \lambda)} \backslash W_{s_\beta \lambda}$ to $W_{\lambda,\Theta(u,\lambda)} \backslash W_\lambda$. Furthermore, since conjugation by $s_\beta$ preserves Bruhat orders, it also preserves the partial orders on right cosets.
To check that the diagram commutes, take any $D' \in W_{s_\beta \lambda, \Theta(r,s_\beta \lambda)} \backslash W_{s_\beta \lambda}$. Along the top-right path, $D'$ is sent to
\begin{equation*}
W_\Theta u \cdot s_\beta D' s_\beta = W_\Theta w r D' s_\beta = W_\Theta r D' s_\beta,
\end{equation*}
which agrees with the image along the bottom-left path. \end{proof}
\subsection{A technical lemma}\label{subsec:lem_induction}
In the last part of this section, we prove a technical lemma that will be used in the induction argument of \textsection\ref{subsec:(4)}.
\begin{proposition}\label{lem:decrease_of_length}
Let $u \in A_{\Theta,\lambda}$ and $C \in W_\Theta \backslash W_\Theta u W_\lambda$. Suppose $C \neq W_\Theta u$. Then there exist $\alpha \in \Pi_\lambda$, $s \geqslant 0$ and $\beta_1,\ldots,\beta_s \in \Pi$ such that, writing $z_0 = 1$, $z_i = s_{\beta_1} \cdots s_{\beta_i}$ and $z = z_s$, the following conditions hold:
\begin{enumerate}[label=(\alph*)]
\item for any $0 \leqslant i \leqslant s-1$, $\beta_{i+1}$ is non-integral to $z_i^{-1} \lambda$;
\item $z^{-1} \alpha \in \Pi \cap \Pi_{z^{-1} \lambda}$;
\item $C s_\alpha <_{u,\lambda} C$;
\item if $s > 0$, $Cz < C$;
\item $C s_\alpha z = C z s_{z^{-1} \alpha} < C z$.
\end{enumerate} \end{proposition}
This proposition is used in showing that the $q$-polynomials defined geometrically (by taking higher inverse images to Schubert cells; see \textsection\ref{subsec:geom_idea} for an explanation of the idea) agree with the Whittaker Kazhdan-Lusztig polynomials for the triple $(W_\lambda, \Pi_\lambda, \Theta(u,\lambda))$. This is a proof by induction on the length of $C$. As mentioned in \textsection\ref{subsec:geom_idea} (specifically the condition \ref{enum:WKL_basis_U}), the Kazhdan-Lusztig basis elements $C_C = \psi_{u,\lambda}(C|_\lambda)$ of $\mathcal{H}_{\Theta(u,\lambda)}$ are partly characterized by properties of the product $C_C C_s = T_\alpha^{u,\lambda}(\psi_{u,\lambda}(C|_\lambda))$. If the simple reflection $s \in W_\lambda$ happens to be simple in $W$, then multiplication by $C_s$ on $C_C$ lifts to the geometric $U$-functor (push-pull along $X \to X_s$). However, if $s$ is not simple in $W$, no such $U$-functor exists. The strategy in this situation is to use non-integral intertwining functors to translate everything so that $s$ becomes simple in both the integral Weyl group and in $W$. On the $W_\lambda$ level, these non-integral intertwining functors correspond to applying conjugations $s_{\beta_i}(-)s_{\beta_i}$ by non-integral simple reflections so that $s \in W_\lambda$ is translated to $(s_{\beta_1} \cdots s_{\beta_s})^{-1} s (s_{\beta_1} \cdots s_{\beta_s})$ which is simple in $W_{s_{\beta_s} \cdots s_{\beta_1} \lambda}$. On the $W$ level, they correspond to right multiplication on $C$ by $s_{\beta_1} \cdots s_{\beta_s}$. Also, one needs to ensure that the length of $C$ decreases after these non-integral reflections in order to apply the induction hypothesis on $C$. The existence of such a chain of non-integral reflections is guaranteed by the proposition.
\begin{proof}
Since $C \neq W_\Theta u$, in particular $C \neq W_\Theta$, there exists a simple reflection $s_\gamma$ such that $C s_\gamma < C$.
If there exists $\alpha \in \Pi \cap \Pi_\lambda$ such that $C s_\alpha < C$, then this $\alpha$ together with $s = 0$ satisfies the requirement: (a) and (d) are void, while (b) and (e) are true by construction. We need to verify (c). Since $s_\alpha$ is simple in $(W_\lambda, \Pi_\lambda)$, we have three mutually exclusive possibilities: $C s_\alpha <_{u,\lambda} C$, $C s_\alpha = C$, or $C s_\alpha >_{u,\lambda} C$. Since the map $\ind_\lambda$ preserves the partial order, they imply $C s_\alpha < C$, $C s_\alpha = C$ and $C s_\alpha > C$, respectively. By our choice of $\alpha$, the last two possibilities cannot happen. Hence we must have $C s_\alpha <_{u,\lambda} C$ and (c) holds.
Suppose such $\alpha$ does not exist. Then any simple reflection that decreases the length of $C$ via right multiplication must be non-integral to $\lambda$. Let $s_{\beta_1}$, $\beta_1 \in \Pi - \Pi_\lambda$, be one of those. If there exists $\alpha' \in \Pi \cap \Pi_{s_{\beta_1} \lambda}$ with $C s_{\beta_1} s_{\alpha'} < C s_{\beta_1}$, we claim that $\alpha := s_{\beta_1} \alpha' \in s_{\beta_1} \Pi_{s_{\beta_1} \lambda} = \Pi_\lambda$, $s = 1$ and $\beta_1$ satisfy our requirements. (a) and (d) follow by our choice of $s_{\beta_1}$, and (e) follows from the conditions on $\alpha'$. For (b),
\begin{equation*}
z^{-1} \alpha = s_{\beta_1} s_{\beta_1} \alpha' = \alpha' \in \Pi \cap \Pi_{s_{\beta_1} \lambda}
\end{equation*}
by definition of $z$ and $\alpha'$. For (c), arguing in the same way, we only need to rule out $C s_\alpha \geqslant C$, which would imply $\ell(C) - \ell(C s_\alpha s_{\beta_1}) \in \{-2,-1,0,1\}$. On the other hand,
\begin{align*}
C > C s_{\beta_1} > C s_{\beta_1} s_{\alpha'}
= C s_{\beta_1} s_{(s_{\beta_1} \alpha)}
= C s_{\beta_1} s_{\beta_1} s_\alpha s_{\beta_1}
= C s_\alpha s_{\beta_1}.
\end{align*}
So $\ell(C) - \ell(C s_\alpha s_{\beta_1}) \geqslant 2$ and (c) holds.
If such $\alpha'$ does not exist, then we can find $\beta_2,\ldots,\beta_s \in \Pi$, with each $\beta_{i+1}$ non-integral to $z_i^{-1}\lambda$, such that $C z_{i+1} < C z_i$ for all $1 \leqslant i \leqslant s-1$, until we get to a point where there exists $\alpha'' \in \Pi \cap \Pi_{z^{-1} \lambda}$ with $C z s_{\alpha''} < C z$ (termination of this process is proven in the next paragraph). We claim that $\alpha := z\alpha'' \in z \Pi_{z^{-1} \lambda} = \Pi_\lambda$, $s$ and $\beta_1,\ldots,\beta_s$ satisfy our requirements. The verification is essentially the same as in the previous case. (a), (b), (d) and (e) are satisfied by our choice of $\beta_i$ and $\alpha''$. For (c), we have an inequality
\begin{equation}\label{eqn:decrease_of_length_step_a}
C z > C z s_{\alpha''} = C z s_{z^{-1} \alpha} = C z z^{-1} s_\alpha z = C s_\alpha z
\end{equation}
where $\ell(w^{Cz}) = \ell(w^C z) = \ell(w^C) - s$. Also $w^{C z s_{\alpha''}} = w^C z s_{\alpha''} = w^C s_\alpha z$. Hence
\begin{equation*}
\ell(w^C s_\alpha)
= \ell(w^C s_\alpha z z^{-1})
= \ell(w^{C s_\alpha z} z^{-1})
\leqslant \ell(w^{C s_\alpha z}) +s
= \ell(w^{Cz}) -1 +s
= \ell(w^C) -1 < \ell(w^C).
\end{equation*}
This rules out $C s_\alpha \geqslant C$ and (c) is thus verified.
Lastly, let us show that this process of finding $\alpha''$ must terminate no later than when we get to $\ell(w^{Cz}) = \ell(w_\Theta)+1$. That is, we show that when $\ell(w^{Cz}) = \ell(w_\Theta)+1$, such an $\alpha''$ must exist. The condition $\ell(w^{Cz}) = \ell(w_\Theta)+1$ implies $C z = W_\Theta s_\gamma > W_\Theta$ for some simple reflection $s_\gamma$. If $\gamma \in \Pi - \Pi_{z^{-1} \lambda}$, then $s_\gamma \in A_{z^{-1} \lambda}$. Also, since $W_\Theta s_\gamma > W_\Theta$, any element of $W_\Theta s_\gamma$ must have length $\geqslant 1$. Hence $s_\gamma$ is the shortest element of $W_\Theta s_\gamma$, i.e. $s_\gamma \in w_\Theta {}^\Theta W$. Therefore $s_\gamma \in A_{z^{-1} \lambda} \cap (w_\Theta {}^\Theta W) = A_{\Theta,z^{-1} \lambda}$. Since $C = W_\Theta s_\gamma z^{-1}$, by (repeatedly applying) \ref{lem:Is_pres_lowest_db_coset}, we see that $C$ is the smallest right $W_\Theta$-coset in the $(W_\Theta,W_\lambda)$-coset containing it, that is, $C = W_\Theta u$. This contradicts our assumption on $C$. Therefore $\gamma \in \Pi \cap \Pi_{z^{-1} \lambda}$, and $\alpha'' = \gamma$ satisfies our requirement for $\alpha''$. Thus the process terminates. \end{proof}
\section{Non-integral intertwining functors}\label{sec:geom}
In this section, we give the definition of non-integral intertwining functors and show that they translate the Kazhdan-Lusztig polynomials for our Whittaker modules. Readers can review \textsection\ref{subsec:geom_prelim} for the basic geometric setup and related notations. In the rest of the paper, we will use facts about $\mathcal{D}$-modules without citing references, including the distinguished triangle for immersions of a smooth closed subvariety and its complement (also known as the distinguished triangle for local cohomology), the base change theorem for $\mathcal{D}$-modules, and Kashiwara's equivalence of categories for closed immersions. These facts are contained in \cite{Borel:D-mods}, IV.8.3, 8.4 and 7.11, respectively.
For any $w \in W$, let $Z_w$ denote the subset of $X \times X$ consisting of pairs $(x,y)$ such that $\mathfrak{b}_x$ and $\mathfrak{b}_y$ are in relative position $w$. This means that for any common Cartan subalgebra $\mathfrak{c}$ of $\mathfrak{b}_x$ and $\mathfrak{b}_y$ and any representative of $w$ in $N_G(\mathfrak{c})$ (also denoted by $w$), $\mathfrak{b}_x = \Ad w(\mathfrak{b}_y)$. If $w$ is fixed, we write \begin{equation*}
X \xleftarrow{p_1} Z_w \xrightarrow{p_2} X \end{equation*} for the two projections. For an integral weight $\mu \in \mathfrak{h}^*$, write $\mathcal{O}_X(\mu)$ for the $G$-equivariant line bundle on $X$ where the $\mathfrak{b}$-action on the geometric fiber at $x_\mathfrak{b} \in X$ (the point on $X$ that corresponds to $\mathfrak{b}$) is given by $\mu$.
\begin{definition}
For $w \in W$ and $\lambda \in \mathfrak{h}^*$, the \textbf{intertwining functor} $LI_w$ is defined to be
\begin{align*}
LI_w : D^b(\mathcal{D}_\lambda) &\to D^b(\mathcal{D}_{w\lambda}),\notag\\
\mathcal{F}^\bullet &\mapsto p_{1+} \big( p_1^* \mathcal{O}_X(\rho-w\rho) \dotimes_{\mathcal{O}_{Z_w}} p_2^+ \mathcal{F}^\bullet \big)\notag\\
&\qquad \cong \mathcal{O}_X(\rho-w\rho) \dotimes_{\mathcal{O}_X} p_{1+} p_2^+ \mathcal{F}^\bullet.
\end{align*}
Write $I_w$ for $H^0 LI_w$. It is shown in \cite[L.3]{Milicic:Localization} that $LI_w$ is the left derived functor of $I_w$. \end{definition}
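For orientation: when $\mathfrak{g} = \mathfrak{sl}_2(\mathbb{C})$ and $\Pi = \{\alpha\}$ (a case we mention only as an illustration), $X \cong \mathbb{P}^1$ and $Z_{s_\alpha}$ is the complement of the diagonal in $X \times X$, so $LI_{s_\alpha}$ is computed by pulling back along $p_2$ to pairs of distinct points, pushing forward along $p_1$, and twisting by $\mathcal{O}_X(\rho - s_\alpha\rho) = \mathcal{O}_X(\alpha)$.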
For properties of intertwining functors readers can refer to \textit{loc. cit.} The main property we will use is
\begin{theorem}[{\cite[Chapter 3 Corollary 3.22]{Milicic:Localization}}]\label{lem:non-int_I}
If $\beta \in \Pi - \Pi_\lambda$, then $I_{s_\beta}$ is an equivalence of categories
\begin{equation*}
I_{s_\beta} : \Mod_{qc}(\mathcal{D}_\lambda) \cong \Mod_{qc}(\mathcal{D}_{s_\beta\lambda})
\end{equation*}
whose quasi-inverse is $I_{s_\beta}$. \end{theorem}
To use these functors for our purpose, we need to compute the action of intertwining functors on standard and irreducible modules. Romanov proved the following result for $Cs_\beta > C$. The main ingredients of the proof there are the base change and projection formulas for $\mathcal{D}$-modules.
\begin{proposition}[{\cite[\textsection 3.4 Proposition 5]{Romanov:Whittaker}}]\label{lem:I_moves_std_general}
Let $\beta \in \Pi$ and $C \in W_\Theta \backslash W$ such that $C s_\beta > C$. Then for any $\lambda \in \mathfrak{h}^*$,
\begin{equation*}
LI_{s_\beta} \mathcal{I}(w^C,\lambda,\eta) = \mathcal{I}(w^C s_\beta, s_\beta \lambda,\eta).
\end{equation*} \end{proposition}
Combined with \ref{lem:non-int_I} we get
\begin{corollary}\label{lem:I_moves_std}
Let $\beta \in \Pi - \Pi_\lambda$ and $C \in W_\Theta \backslash W$ such that $C s_\beta \neq C$. Then
\begin{equation*}
I_{s_\beta} \mathcal{I}(w^C,\lambda,\eta) = \mathcal{I}(w^C s_\beta, s_\beta \lambda, \eta).
\end{equation*} \end{corollary}
\begin{proof}
If $C s_\beta > C$, the statement follows from \ref{lem:I_moves_std_general}: since $I_{s_\beta}$ is an equivalence of abelian categories by \ref{lem:non-int_I}, it is exact and agrees with $LI_{s_\beta}$ on $\Mod_{qc}(\mathcal{D}_\lambda)$. If $C s_\beta < C$, apply the previous case to the coset $C s_\beta$ and the weight $s_\beta \lambda$ to obtain
\begin{equation*}
\mathcal{I}(w^C,\lambda,\eta) = I_{s_\beta} \mathcal{I}(w^C s_\beta, s_\beta \lambda, \eta).
\end{equation*}
Since $I_{s_\beta}$ is an equivalence of categories with quasi-inverse $I_{s_\beta}$, applying $I_{s_\beta}$ to both sides gives the claim. \end{proof}
It remains to consider the case $C s_\beta = C$.
For a simple root $\beta$, write $X_\beta$ for the partial flag variety of type $\beta$, and write $p_\beta: X \to X_\beta$ for the natural projection. This is a Zariski-locally trivial $\mathbb{P}^1$-fibration. Two points $x$ and $y$ are contained in the same $p_\beta$-fiber (i.e. $p_\beta(x) = p_\beta(y)$) if and only if $\mathfrak{b}_x$ and $\mathfrak{b}_y$ are in relative position $1$ or $s_\beta$.
\begin{lemma}\label{lem:S}
Let $C \in W_\Theta \backslash W$ and $\beta \in \Pi$ such that $C s_\beta = C$. Set
\begin{equation*}
S = \{ (x,y) \in C(w^C) \times C(w^C) \mid \mathfrak{b}_x \text{ and } \mathfrak{b}_y \text{ are in relative position } s_\beta\} \subset Z_{s_\beta}.
\end{equation*}
Write $C(w^C) \xleftarrow{p_1|_S} S \xrightarrow{p_2|_S} C(w^C)$ for the projections. Then
\begin{equation*}
(p_1|_S)_+ (p_2|_S)^+ \mathcal{O}_{C(w^C)}^\eta = \mathcal{O}_{C(w^C)}^\eta.
\end{equation*} \end{lemma}
\begin{proof}
For convenience, write $w = w^C$, $p_1 = p_1|_S$ and $p_2 = p_2|_S$. Set
\begin{equation*}
S' = C(w) \times_{p_\beta(C(w))} C(w) = \{ (x,y) \in C(w) \times C(w) \mid p_\beta(x) = p_\beta(y) \}.
\end{equation*}
Then $S \subset S \cup \Delta_{C(w)} = S'$, where $\Delta_{C(w)}$ denotes the diagonal; note that $S = S' \cap Z_{s_\beta}$. Write $C(w) \xleftarrow{q_1} S' \xrightarrow{q_2} C(w)$ for the projections, and $\Delta_{C(w)} \xrightarrow{i_\Delta} S' \xleftarrow{i_S} S$ for the inclusions. Then $i_\Delta$ is a closed immersion of codimension $1$, and $i_S$ is an open immersion. We have the following diagram
\begin{equation}\label{diag:S}
\begin{tikzcd}
S \ar[dr, "i_S", crossing over] \ar[ddr, bend right, "p_1"'] \ar[rrd, bend left, "p_2"] &[-5ex] \\[-2ex]
& S' \ar[d, "q_1"] \ar[r, "q_2"] & C(w) \ar[d, "p_\beta"]\\
& C(w) \ar[r, "p_\beta"] & p_\beta(C(w))
\end{tikzcd}
\end{equation}
where the bottom-right square is Cartesian.
Applying the triangle for local cohomology to $q_2^+ \mathcal{O}_{C(w)}^\eta$, we get
\begin{equation*}
\dtri{ i_{\Delta+} i_\Delta^! q_2^+ \mathcal{O}_{C(w)}^\eta }
{ q_2^+ \mathcal{O}_{C(w)}^\eta }
{ i_{S+} i_S^+ q_2^+ \mathcal{O}_{C(w)}^\eta}.
\end{equation*}
Applying $q_{1+}$, we get
\begin{equation*}
\dtri{ q_{1+} i_{\Delta+} i_\Delta^! q_2^+ \mathcal{O}_{C(w)}^\eta }
{ q_{1+} q_2^+ \mathcal{O}_{C(w)}^\eta }
{ q_{1+} i_{S+} i_S^+ q_2^+ \mathcal{O}_{C(w)}^\eta}.
\end{equation*}
Applying base change to the bottom-right square in (\ref{diag:S}), $q_{1+} q_2^+ \mathcal{O}_{C(w)}^\eta \cong p_\beta^+ p_{\beta+} \mathcal{O}_{C(w)}^\eta$. Here $p_{\beta+} \mathcal{O}_{C(w)}^\eta$ is an $\eta$-twisted Harish-Chandra sheaf on $p_\beta(C(w))$. But $p_\beta(C(w))$ is isomorphic to $C(w s_\beta)$ as an $N$-variety (via the restriction of $p_\beta$ to $C(w s_\beta)$), and since $w s_\beta$ is not the longest element in $W_\Theta w s_\beta = W_\Theta w$, we know there is no $\eta$-twisted Harish-Chandra sheaf on $C(w s_\beta)$ except $0$. Hence $p_{\beta+} \mathcal{O}_{C(w)}^\eta = 0$ and thus $q_{1+} q_2^+ \mathcal{O}_{C(w)}^\eta = 0$. As a result,
\begin{equation*}
q_{1+} i_{S+} i_S^+ q_2^+ \mathcal{O}_{C(w)}^\eta = q_{1+} i_{\Delta+} i_\Delta^! q_2^+ \mathcal{O}_{C(w)}^\eta [1].
\end{equation*}
The left side simplifies to $p_{1+} p_2^+ \mathcal{O}_{C(w)}^\eta$. For the right side, $q_{1+} i_{\Delta+} = (q_1 \circ i_\Delta)_+$ and $q_1 \circ i_\Delta$ is the projection $\Delta_{C(w)} \to C(w)$ along the first coordinate which is an $N$-equivariant isomorphism. Moreover,
\begin{align*}
i_\Delta^! q_2^+ \mathcal{O}_{C(w)}^\eta [1]
&= i_\Delta^+ q_2^+ \mathcal{O}_{C(w)}^\eta [1] [-1]\\
&= (q_2 \circ i_\Delta)^+ \mathcal{O}_{C(w)}^\eta,
\end{align*}
and $q_2 \circ i_\Delta$ is the projection $\Delta_{C(w)} \to C(w)$ along the second coordinate, also an $N$-equivariant isomorphism. Thus
\begin{equation*}
p_{1+} p_2^+ \mathcal{O}_{C(w)}^\eta
= (q_1 \circ i_\Delta)_+ (q_2 \circ i_\Delta)^+ \mathcal{O}_{C(w)}^\eta
= \mathcal{O}_{C(w)}^\eta. \qedhere
\end{equation*} \end{proof}
\begin{lemma}\label{lem:two_cells}
Let $s_\beta \in \Pi$ and $C \in W_\Theta \backslash W$ such that $C s_\beta = C$. Write $\iota: C(w^C) \hookrightarrow C(w^C) \cup C(w^C s_\beta)$ for the inclusion. Then for any $\mathcal{F} \in \Mod_{coh}(\mathcal{D}_{C(w^C) \cup C(w^C s_\beta)},N,\eta)$,
\begin{equation*}
\mathcal{F} = \iota_+ \iota^! \mathcal{F} = ( \iota_{+} \mathcal{O}_{C(w^C)}^\eta )^{\oplus \rank \iota^! \mathcal{F}}
\end{equation*}
where $\rank$ stands for the rank as a free $\mathcal{O}$-module. \end{lemma}
\begin{proof}
Write $w = w^C$. The assumption implies that $w s_\beta \in C$, $w s_\beta < w$, and that $C(w)$ and $C(w s_\beta)$ are open and closed in $C(w) \cup C(w s_\beta)$, respectively.
Since the category of $\eta$-twisted Harish-Chandra sheaves on $C(w)$ is semisimple, $\iota^! \mathcal{F}$ is a direct sum of copies of $\mathcal{O}_{C(w)}^\eta$. This implies the second equality. For the first equality, adjunction gives a map
\begin{equation}\label{eqn:iota_w_adjunction}
\mathcal{F} \to \iota_{+} \iota^! \mathcal{F}
\end{equation}
whose kernel and cokernel are supported on $C(w s_\beta)$, which are equal to direct images of $\eta$-twisted Harish-Chandra sheaves on $C(w s_\beta)$ by Kashiwara's equivalence. But $w s_\beta$ is not the longest element in $C$, so there is no such module on $C(w s_\beta)$ except zero. Hence (\ref{eqn:iota_w_adjunction}) is an isomorphism, which establishes the first equality. \end{proof}
\begin{proposition}\label{lem:I_fixes_std}
Let $C \in W_\Theta \backslash W$, $\beta \in \Pi$ such that $C s_\beta = C$. Then
\begin{equation*}
LI_{s_\beta} \mathcal{I}(w^C,\lambda,\eta) = \mathcal{I}(w^C, s_\beta \lambda,\eta).
\end{equation*} \end{proposition}
\begin{proof}
Write $w = w^C$. Let
\begin{equation*}
F = Z_{s_\beta} \times_{p_2,X,i_w} C(w) = \{ (x,y) \in X \times C(w) \mid \mathfrak{b}_x \text{ and } \mathfrak{b}_y \text{ are in relative position } s_\beta \}.
\end{equation*}
and let $S$ be as in \ref{lem:S}. Then $S$ is a subvariety of $F$. It is easy to see that
\begin{equation*}
p_1(F) = \{x \in X \mid \,\exists\, y \in C(w) \text{ such that } \mathfrak{b}_x \text{ and } \mathfrak{b}_y \text{ are in relative position } s_\beta \} = C(w) \cup C(w s_\beta).
\end{equation*}
So we have the following diagram
\begin{equation}\label{diag:SF}
\begin{tikzcd}
&[-5ex] S \ar[dl,"p_1|_S"'] \ar[dr, "a_S"] & [-4ex]\\
C(w) \ar[dr, "\iota_w"] \ar[ddrr, bend right, distance=3cm, "i_w"'] & & F \ar[dl, "p_1|_F"'] \ar[dr, "i_F"'] \ar[rr, equal] & & F \ar[dl, "i_F"] \ar[dr, "p_2|_F"]\\
& C(w) \cup C(w s_\beta) \ar[dr, "j_w"] && Z_{s_\beta} \ar[dl, "p_1"'] \ar[dr, "p_2"] && C(w) \ar[dl, "i_w"']\\
&& X && X
\end{tikzcd}
\end{equation}
The right-most square is Cartesian by definition of $F$. The top-left square is also Cartesian, i.e. $S$ is the preimage of $C(w)$ along $p_1|_F: F \to C(w) \cup C(w s_\beta)$. By definition of intertwining functors and base change,
\begin{align}
LI_{s_\beta} \mathcal{I}(w,\lambda,\eta)
&= \mathcal{O}_X(\rho - s_\beta \rho) \dotimes_{\mathcal{O}_X} p_{1+} p_2^+ i_{w+} \mathcal{O}_{C(w)}^\eta \nonumber\\
&= \mathcal{O}_X(\rho - s_\beta \rho) \dotimes_{\mathcal{O}_X} p_{1+} i_{F+} (p_2|_F)^+ \mathcal{O}_{C(w)}^\eta \nonumber\\
&= \mathcal{O}_X(\rho - s_\beta \rho) \dotimes_{\mathcal{O}_X} j_{w+} (p_1|_F)_+ (p_2|_F)^+ \mathcal{O}_{C(w)}^\eta. \label{eqn:LI(std)_step_a}
\end{align}
We claim that $(p_1|_F)_+ (p_2|_F)^+ \mathcal{O}_{C(w)}^\eta = \iota_{w+} \mathcal{O}_{C(w)}^\eta$.
By \ref{lem:two_cells},
\begin{equation*}
(p_1|_F)_+ (p_2|_F)^+ \mathcal{O}_{C(w)}^\eta = \iota_{w+} \iota_w^! (p_1|_F)_+ (p_2|_F)^+ \mathcal{O}_{C(w)}^\eta.
\end{equation*}
Apply base change using the top-left square in (\ref{diag:SF}),
\begin{equation*}
\iota_w^! (p_1|_F)_+ (p_2|_F)^+ \mathcal{O}_{C(w)}^\eta
= (p_1|_S)_+ a_S^! (p_2|_F)^+ \mathcal{O}_{C(w)}^\eta
= (p_1|_S)_+ a_S^+ (p_2|_F)^+ \mathcal{O}_{C(w)}^\eta
\end{equation*}
Note that $p_2|_F \circ a_S = p_2|_S$. Hence, by \ref{lem:S}, the sheaf in the above equation is isomorphic to $\mathcal{O}_{C(w)}^\eta$. Therefore
\begin{equation*}
(p_1|_F)_+ (p_2|_F)^+ \mathcal{O}_{C(w)}^\eta = \iota_{w+} \iota_w^! (p_1|_F)_+ (p_2|_F)^+ \mathcal{O}_{C(w)}^\eta = \iota_{w+} \mathcal{O}_{C(w)}^\eta
\end{equation*}
as claimed. As a result,
\begin{align*}
(\ref{eqn:LI(std)_step_a})
&= \mathcal{O}_X(\rho - s_\beta \rho) \dotimes_{\mathcal{O}_X} j_{w+} \iota_{w+} \mathcal{O}_{C(w)}^\eta\\
&= \mathcal{O}_X(\rho - s_\beta \rho) \dotimes_{\mathcal{O}_X} i_{w+} \mathcal{O}_{C(w)}^\eta\\
&= \mathcal{I}(w, s_\beta \lambda, \eta)
\end{align*}
which proves the proposition. \end{proof}
\begin{corollary}\label{lem:I_on_std}
Let $\beta \in \Pi - \Pi_\lambda$. Let $C \in W_\Theta \backslash W$. Then
\begin{align*}
I_{s_\beta} \mathcal{I}(w^C,\lambda,\eta) &= \mathcal{I}(w^{C s_\beta}, s_\beta\lambda, \eta),\\
I_{s_\beta} \mathcal{L}(w^C,\lambda,\eta) &= \mathcal{L}(w^{C s_\beta}, s_\beta\lambda, \eta)
\end{align*}
(note that we have $w^{C s_\beta}$ instead of $w^C s_\beta$ on the right hand sides). \end{corollary}
\begin{proof}
The statement about standard modules is the combination of \ref{lem:I_moves_std} and \ref{lem:I_fixes_std}. Since $I_{s_\beta}$ is an equivalence of categories, it must send the unique irreducible submodule of $\mathcal{I}(w^C,\lambda,\eta)$ to the unique irreducible submodule of $\mathcal{I}(w^{C s_\beta}, s_\beta\lambda, \eta)$, i.e. it must send $\mathcal{L}(w^C,\lambda,\eta)$ to $\mathcal{L}(w^{C s_\beta}, s_\beta\lambda, \eta)$. \end{proof}
Next, we show that non-integral intertwining functors also preserve pullbacks of irreducible modules to strata.
\begin{proposition}\label{thm:Is_pullback}
Let $\beta \in \Pi - \Pi_\lambda$, $C,D \in W_\Theta \backslash W$ and $p \in \mathbb{Z}$. Then
\begin{equation*}
\rank H^p i_{w^D}^! \mathcal{L}(w^C, \lambda,\eta) = \rank H^p i_{w^{D s_\beta}}^! \mathcal{L}(w^{C s_\beta}, s_\beta \lambda, \eta).
\end{equation*} \end{proposition}
The proof we give below uses the same tools as in the previous proposition. There is an alternative proof which we briefly mention. One shows that $\rank H^p i_{w^D}^! \mathcal{L}(w^C, \lambda,\eta)$ is the same as the dimension of the $p$-th $\mathcal{D}_\lambda$-module $\Ext$ group of $\mathcal{M}(w^D,\lambda,\eta)$ and $\mathcal{L}(w^C,\lambda,\eta)$ using facts on derived categories of highest weight categories (Brown-Romanov \cite[Theorem 7.2]{Brown-Romanov:Whittaker-Verma-pairing} showed that $\Mod_{coh}(\mathcal{D}_\lambda,N,\eta)$ is a highest weight category). The proposition follows from the fact that $I_{s_\beta}$ is an equivalence of categories and induces an isomorphism on $\Ext$-groups.
\begin{proof}
Write $w = w^D$.
There are two cases: $D s_\beta \neq D$ or $D s_\beta = D$. Consider the first case. Since the asserted equality of ranks is symmetric under exchanging $(\lambda, C, D)$ with $(s_\beta\lambda, C s_\beta, D s_\beta)$, we may assume $D s_\beta < D$. Then $w^{D s_\beta} = w^D s_\beta = w s_\beta$. Let
\begin{equation*}
F = C(w s_\beta) \times_{i_{w s_\beta}, X, p_1} Z_{s_\beta} = \{ (x,y) \in C(w s_\beta) \times X \mid \mathfrak{b}_x \text{ and } \mathfrak{b}_y \text{ are in relative position } s_\beta \}.
\end{equation*}
Then the second projection $p_2|_F: F \to X$ induces an isomorphism of $F$ onto $C(w)$, and we have the following commuting diagram
\begin{equation*}
\begin{tikzcd}
&[-1ex] F \ar[dl, "p_1|_F"'] \ar[dr, "i_F"'] \ar[rr, equal] & & F \ar[dl,"i_F"] \ar[dr, "p_2|_F", "\cong"']\\
C(w s_\beta) \ar[dr, "i_{w s_\beta}"'] && Z_{s_\beta} \ar[dl, "p_1"'] \ar[dr, "p_2"] && C(w) \ar[dl, "i_w"]\\
& X && X
\end{tikzcd}
\end{equation*}
where the left square is Cartesian. Using base change,
\begin{align}
\rank H^p i_{w^{D s_\beta}}^! \mathcal{L}(w^{C s_\beta} ,s_\beta \lambda, \eta)
&= \rank H^p i_{w s_\beta}^! I_{s_\beta} \mathcal{L}(w^C, \lambda,\eta) \nonumber\\
&= \rank H^p i_{w s_\beta}^! p_{1+} p_2^+ \mathcal{L}(w^C, \lambda,\eta) \label{eqn:rk_pullback_step_a}\\
&= \rank H^p (p_1|_F)_+ (p_2|_F)^! i_w^! \mathcal{L}(w^C, \lambda,\eta) [-1].\label{eqn:rk_pullback_step_b}
\end{align}
\sloppy Here in (\ref{eqn:rk_pullback_step_a}) we did not write the twist by the line bundle $\mathcal{O}_X(\rho - s_\beta \rho)$ because there is no twist on $C(w s_\beta)$. Since $\Mod_{coh}(\mathcal{D}_{C(w)},N,\eta)$ is semisimple, $i_w^! \mathcal{L}(w^C, \lambda,\eta)$ is a direct sum of $\mathcal{O}_{C(w)}^\eta$ at different degrees. So $(p_2|_F)^! i_w^! \mathcal{L}(w^C, \lambda,\eta)$ is a direct sum of $\mathcal{O}_F^\eta$ at different degrees, since $p_2|_F$ is an isomorphism, with the rank at degree $p$ equal to $\rank H^p i_w^! \mathcal{L}(w^C, \lambda,\eta)$. Hence it is enough to compute $(p_1|_F)_+ \mathcal{O}_F^\eta$, for which we use the fact that a map of homogeneous spaces of a unipotent group is isomorphic to a coordinate projection of affine spaces, that is, we have the following commutative diagram where all maps are $N$-equivariant, for some $N$-actions on $\mathbb{A}^1 \times \mathbb{A}^{\ell(w s_\beta)}$ and $\mathbb{A}^{\ell(w s_\beta)}$:
\begin{equation*}
\begin{tikzcd}
F \ar[d,"\cong"'] \ar[r, "p_1|_F"] & C(w s_\beta) \ar[d,"\cong"]\\
\mathbb{A}^1 \times \mathbb{A}^{\ell(w s_\beta)} \ar[r,"pr_1"] & \mathbb{A}^{\ell(w s_\beta)}
\end{tikzcd}.
\end{equation*}
So it suffices to compute $pr_{1+} \mathcal{O}_{\mathbb{A}^1 \times \mathbb{A}^{\ell(w s_\beta)}}^\eta$. Since $pr_1$ is a coordinate projection, $pr_1^+ \mathcal{O}_{\mathbb{A}^{\ell(w s_\beta)}}^\eta = \mathcal{O}_{\mathbb{A}^1} \boxtimes \mathcal{O}_{\mathbb{A}^{\ell(w s_\beta)}}^\eta$ (we remark that, without the assumption of $D s_\beta \neq D$, $w s_\beta$ and $w$ can be in the same right $W_\Theta$-coset, in which case $\mathcal{O}_{\mathbb{A}^{\ell(w s_\beta)}}^\eta$ does not exist). On the other hand, by functoriality of $\eta$-twisted Harish-Chandra sheaves, we must have $pr_1^+ \mathcal{O}_{\mathbb{A}^{\ell(w s_\beta)}}^\eta = \mathcal{O}_{\mathbb{A}^1 \times \mathbb{A}^{\ell(w s_\beta)}}^\eta$. We conclude that
\begin{equation*}
\mathcal{O}_{\mathbb{A}^1 \times \mathbb{A}^{\ell(w s_\beta)}}^\eta = \mathcal{O}_{\mathbb{A}^1} \boxtimes \mathcal{O}_{\mathbb{A}^{\ell(w s_\beta)}}^\eta.
\end{equation*}
As a result, writing $p: \mathbb{A}^1 \to \{*\}$ for the unique morphism to a point,
\begin{align*}
pr_{1+} \mathcal{O}_{\mathbb{A}^1 \times \mathbb{A}^{\ell(w s_\beta)}}^\eta
&= (p_+ \mathcal{O}_{\mathbb{A}^1}) \boxtimes \big( (\operatorname{Id}_{\mathbb{A}^{\ell(w s_\beta)}})_+ \mathcal{O}_{\mathbb{A}^{\ell(w s_\beta)}}^\eta \big) \\
&= \mathbb{C}[1] \boxtimes \mathcal{O}_{\mathbb{A}^{\ell(w s_\beta)}}^\eta\\
&= \mathcal{O}_{\mathbb{A}^{\ell(w s_\beta)}}^\eta[1].
\end{align*}
Therefore $(p_1|_F)_+ \mathcal{O}_F^\eta = \mathcal{O}_{C(w s_\beta)}^\eta [1]$ and hence
\begin{align*}
\rank H^p i_{w^{D s_\beta}}^! \mathcal{L}(w^{C s_\beta} ,s_\beta \lambda, \eta)
= (\ref{eqn:rk_pullback_step_b})
= \rank H^p i_w^! \mathcal{L}(w^C, \lambda,\eta).
\end{align*}
Now consider the case $D s_\beta = D$. In this case $w^{D s_\beta} = w^D = w$. Set
\begin{equation*}
F = C(w) \times_{i_w, X, p_1} Z_{s_\beta}
= \{ (x,y) \in C(w) \times X \mid \mathfrak{b}_x \text{ and } \mathfrak{b}_y \text{ are in relative position } s_\beta \}
\end{equation*}
and set $S$ as in \ref{lem:S}, viewed as a subvariety of $F$. Then the following diagram commutes
\begin{equation*}
\begin{tikzcd}
&&&&[-4ex] S \ar[dl, "b_S"'] \ar[dr, "p_2|_S"] &[-5ex]\\
& F \ar[dl, "p_1|_F"'] \ar[dr, "i_F"'] \ar[rr,equal] && F \ar[dl, "i_F"] \ar[dr, "p_2|_F"] && C(w) \ar[dl, "\iota_w"'] \ar[ddll, bend left, distance=3cm,"i_w"]\\
C(w) \ar[dr, "i_w"'] && Z_{s_\beta} \ar[dl, "p_1"'] \ar[dr, "p_2"] && C(w) \cup C(w s_\beta) \ar[dl, "j_w"]\\
& X && X
\end{tikzcd}
\end{equation*}
where the left-most square and the top-right square are Cartesian. Using base change,
\begin{align}
i_{w^{D s_\beta}}^! \mathcal{L}(w^{C s_\beta} ,s_\beta \lambda, \eta)
&= i_w^! I_{s_\beta} \mathcal{L}(w^C, \lambda,\eta) \nonumber\\
&= (p_1|_F)_+ (p_2|_F)^! j_w^! \mathcal{L}(w^C, \lambda,\eta) [-1].\label{eqn:rk_pullback_step_d}
\end{align}
By \ref{lem:two_cells}, $j_w^! \mathcal{L}(w^C, \lambda,\eta) = \iota_{w+} \iota_w^! j_w^! \mathcal{L}(w^C, \lambda,\eta)$. Hence
\begin{align}
(\ref{eqn:rk_pullback_step_d})
&= (p_1|_F)_+ (p_2|_F)^! \iota_{w+} \iota_w^! j_w^! \mathcal{L}(w^C, \lambda,\eta) [-1] \nonumber\\
&= (p_1|_F)_+ b_{S+} (p_2|_S)^+ i_w^! \mathcal{L}(w^C, \lambda,\eta). \label{eqn:rk_pullback_step_e}
\end{align}
Here $p_1|_F \circ b_S = p_1|_S$. Also $i_w^! \mathcal{L}(w^C, \lambda,\eta)$ is a direct sum of $\mathcal{O}_{C(w)}^\eta$ in various degrees. Hence by \ref{lem:S},
\begin{equation*}
\rank H^p i_{w^{D s_\beta}}^! \mathcal{L}(w^{C s_\beta} ,s_\beta \lambda, \eta)
= \rank H^p (\ref{eqn:rk_pullback_step_e})
= \rank H^p i_w^! \mathcal{L}(w^C, \lambda,\eta). \qedhere
\end{equation*} \end{proof}
\section{Main algorithm}\label{sec:KL}
In this section, we formulate an algorithm for computing a set of polynomials in $q$ indexed by pairs of right $W_\Theta$-cosets whose evaluation at $q = -1$ leads to the character formula for irreducible modules. This is in the same spirit as ordinary Kazhdan-Lusztig polynomials for the highest weight category. The algorithm we will prove was suggested by Mili\v ci\'c.
In \textsection\ref{subsec:KL_poly}, we define the Whittaker Kazhdan-Lusztig polynomials, the module $\mathcal{H}_\Theta$, and related notations. The statement of the algorithm is contained in \textsection\ref{subsec:algorithm}. The proof of the algorithm is divided into the subsections that follow.
\subsection{Whittaker Kazhdan-Lusztig polynomials}\label{subsec:KL_poly}
Recall the sets $A_{\Theta,\lambda} = A_\lambda \cap (w_\Theta {}^\Theta W)$ and ${\Theta(u,\lambda)} \subseteq \Pi_\lambda$ defined in \textsection\ref{subsec:db_coset_xsec} and \textsection\ref{subsec:int_model}. Recall also that we have a partial order on $W_\Theta \backslash W$ inherited from the Bruhat order on ${}^\Theta W$, denoted by $\leqslant$ (\textsection \ref{subsec:WTheta_prelim}). Similarly, we have a partial order on $W_{\lambda,{\Theta(u,\lambda)}} \backslash W_\lambda$ which we denote by $\leqslant_{u,\lambda}$.
Let $\mathcal{H}_\Theta$ be the free $\mathbb{Z}[q,q^{-1}]$-module with basis $\delta_C$, $C \in W_\Theta \backslash W$. For any $\alpha \in \Pi$, define a $\mathbb{Z}[q,q^{-1}]$-linear operator on $\mathcal{H}_\Theta$ by \begin{equation*}
T_\alpha (\delta_{C}) =
\begin{cases}
q \delta_{C} + \delta_{C s_\alpha} & \text{if } C s_\alpha > C;\\
0 & \text{if } C s_\alpha = C;\\
q^{-1} \delta_{C} + \delta_{C s_\alpha} & \text{if } C s_\alpha < C.
\end{cases} \end{equation*} The operators turn $\mathcal{H}_\Theta$ into a (right) module of the Hecke algebra $\mathcal{H}$, isomorphic to the \textit{antispherical module}, where $T_\alpha$ encodes the action of the Kazhdan-Lusztig basis element $C_{s_\alpha} \in \mathcal{H}$ \cite[\textsection 6.2]{Romanov:Whittaker}. This interpretation is not needed for us. We refer the reader to \cite[\textsection 6.1]{Romanov:Whittaker} for a precise definition of $\mathcal{H}$, and to the introduction of this paper \textsection\ref{subsec:geom_idea} for an explanation of the role of $\mathcal{H}$ in the Kazhdan-Lusztig algorithm.
For an element $u$ in $A_{\Theta,\lambda}$, let $\mathcal{H}_{\Theta(u,\lambda)}$ be the free $\mathbb{Z}[q,q^{-1}]$-module with basis $\delta_{E}$, $E \in W_{\lambda,{\Theta(u,\lambda)}} \backslash W_\lambda$. For any $\alpha \in \Pi_\lambda$ we define the operator $T_\alpha^{u,\lambda}$ in the same way as $T_\alpha$: \begin{equation*}
T_\alpha^{u,\lambda} (\delta_E) =
\begin{cases}
q \delta_E + \delta_{E s_\alpha} & \text{if } E s_\alpha >_{u,\lambda} E;\\
0 & \text{if } E s_\alpha = E;\\
q^{-1} \delta_E + \delta_{E s_\alpha} & \text{if } E s_\alpha <_{u,\lambda} E.
\end{cases} \end{equation*} Then $\mathcal{H}_{\Theta(u,\lambda)}$ becomes a right module of the Hecke algebra $\mathcal{H}_\lambda = \mathcal{H}(W_\lambda)$ of the integral Weyl group.
We will use a left action of $W$ on $\mathcal{H}_\Theta$ defined by $w \cdot \delta_C = \delta_{wC}$. Similarly, a right action of $W$ on $\mathcal{H}_\Theta$ is defined by $\delta_C \cdot w = \delta_{C w}$. We will simply write $w \delta_C$, $\delta_C w$ for the actions, omitting the dots. $w(-)w^{-1}$ then denotes the simultaneous action of $w$ on the left and $w^{-1}$ on the right. By \ref{lem:Is_right_coset}, $s_\beta (-) s_\beta$ defines a bijection \begin{equation*}
s_\beta(-) s_\beta : W_{\lambda,\Theta(u,\lambda)} \backslash W_\lambda \xrightarrow{\,\raisebox{-1.2ex}[0ex][1ex]{$\widesim[1]{}$\,}} W_{s_\beta \lambda, \Theta(r, s_\beta\lambda)} \backslash W_{s_\beta \lambda} \end{equation*} where $r$ is the unique element in $A_{\Theta,s_\beta \lambda} \cap W_\Theta u s_\beta W_{s_\beta \lambda}$. We extend this to an isomorphism \begin{equation*}
s_\beta (-) s_\beta : \mathcal{H}_{\Theta(u,\lambda)} \xrightarrow{\,\raisebox{-1.2ex}[0ex][1ex]{$\widesim[1]{}$\,}} \mathcal{H}_{\Theta(r, s_\beta\lambda)},\quad
\delta_{E} \mapsto \delta_{s_\beta E s_\beta}. \end{equation*}
Recall that we have a bijection \begin{equation*}
(-)|_\lambda : W_\Theta \backslash W \to \bigcup_{u \in A_{\Theta,\lambda}} W_{\lambda, {\Theta(u,\lambda)}} \backslash W_\lambda \end{equation*}
defined in \ref{not:right_coset_partition}. We extend $(-)|_\lambda$ to a map \begin{equation*}
(-)|_\lambda: \mathcal{H}_\Theta \xrightarrow{\,\raisebox{-1.2ex}[0ex][1ex]{$\widesim[1]{}$\,}} \bigoplus_{u \in A_{\Theta,\lambda}} \mathcal{H}_{\Theta(u,\lambda)},\quad
\delta_C \mapsto \delta_{C|_\lambda}. \end{equation*}
The following theorem, proven in \cite[Theorem 11]{Romanov:Whittaker}, defines a set of polynomials indexed by pairs of right cosets called \textit{Whittaker Kazhdan-Lusztig polynomials}. For a right coset $E \in W_\Theta \backslash W$, we write $(W_\Theta \backslash W)_{\leqslant E}$ for the set of those cosets $F$ such that $F \leqslant E$.
\begin{deftheorem}[Whittaker Kazhdan-Lusztig polynomials for $(W,\Pi,\Theta)$]\label{def:parabolic_KL_poly_Theta}
For any $E \in W_\Theta \backslash W$, there exists a unique set of polynomials $\{P_{CD}\} \subset q \mathbb{Z}[q]$ indexed by
\begin{equation*}
\{(C,D) \mid C,D \in (W_\Theta \backslash W)_{\leqslant E} ; D < C\}
\end{equation*}
such that the function
\begin{equation*}
\psi: (W_\Theta \backslash W)_{\leqslant E} \xrightarrow{\;\;\;\;\;} \mathcal{H}_\Theta, \quad
C \mapsto \delta_{C} + \sum_{D < C} P_{CD} \delta_{D}
\end{equation*}
satisfies the following property: for any $C \in W_\Theta \backslash W$ with $C \neq W_\Theta$, there exist $\alpha \in \Pi$ and $c_{D} \in \mathbb{Z}$ such that $C s_\alpha < C$ and
\begin{equation*}
T_\alpha (\psi(C s_\alpha)) = \sum_{D \leqslant C} c_{D} \psi(D).
\end{equation*}
Moreover, the polynomials $P_{CD}$ do not depend on the choice of $E$. The polynomials $P_{CD}$ are called \textbf{Whittaker Kazhdan-Lusztig polynomials}, and the elements $\psi(F)$ are called \textbf{Kazhdan-Lusztig basis elements} of $\mathcal{H}_\Theta$.\footnote{Romanov actually denotes the map by $\varphi$. We reserve the notation $\varphi$ to be used in the main algorithm \ref{thm:KL_alg}.} \end{deftheorem}
It is verified in \cite[\textsection 6.3 Remark 4]{Romanov:Whittaker} that the Whittaker Kazhdan-Lusztig polynomials $P_{CD}$ agree with the parabolic Kazhdan-Lusztig polynomials $n_{y,x}$ in \cite[\textsection 3 Remark 3.2]{Soergel:KL_Tilting} for $x = w_\Theta w^C$ and $y = w_\Theta w^D$ (recall that $w_\Theta$ is the longest element in $W_\Theta$). \cite[\textsection 6]{Romanov:Whittaker} also contains comparisons of $P_{CD}$ with various polynomials defined in other sources.
We apply the same definition to $(W_\lambda,\Pi_\lambda,\Theta(u,\lambda))$:
\begin{deftheorem}[Whittaker Kazhdan-Lusztig polynomials for $(W_\lambda,\Pi_\lambda,\Theta(u,\lambda))$]\label{def:parabolic_KL_poly}
For any $E \in W_{\lambda,{\Theta(u,\lambda)}} \backslash W_\lambda$, there exists a unique set of polynomials $\{P_{FG}^{u,\lambda}\} \subset q \mathbb{Z}[q]$ indexed by
\begin{equation*}
\{(F,G) \mid F,G \in (W_{\lambda,\Theta(u,\lambda)} \backslash W_\lambda)_{\leqslant_{u,\lambda} E} ; G <_{u,\lambda} F\}
\end{equation*}
such that the function
\begin{equation*}
\psi_{u,\lambda}: (W_{\lambda,{\Theta(u,\lambda)}} \backslash W_\lambda)_{\leqslant_{u,\lambda} E} \xrightarrow{\;\;\;\;\;} \mathcal{H}_{\Theta(u,\lambda)},\quad
F \mapsto \delta_{F} + \sum_{G <_{u,\lambda} F} P_{FG}^{u,\lambda} \delta_{G}
\end{equation*}
satisfies the following property: for any $F \in W_{\lambda,{\Theta(u,\lambda)}} \backslash W_\lambda$ with $F \neq W_{\lambda,\Theta(u,\lambda)}$, there exist $\alpha \in \Pi_\lambda$ and $c_{G} \in \mathbb{Z}$ such that $F s_\alpha <_{u,\lambda} F$ and
\begin{equation}\label{eqn:parabolic_KL_poly}
T_\alpha^{u,\lambda} (\psi_{u,\lambda}(F s_\alpha)) = \sum_{G \leqslant_{u,\lambda} F} c_{G} \psi_{u,\lambda}(G).
\end{equation}
Moreover, the polynomials $P_{FG}^{u,\lambda}$ do not depend on the choice of $E$. The polynomials $P_{FG}^{u,\lambda}$ are called \textbf{Whittaker Kazhdan-Lusztig polynomials}, and the elements $\psi_{u,\lambda}(F)$ are called \textbf{Kazhdan-Lusztig basis elements} of $\mathcal{H}_{\Theta(u,\lambda)}$. \end{deftheorem}
We will write $P_{CD}^{u,\lambda}$ instead of $P_{C|_\lambda, D|_\lambda}^{u,\lambda}$ for convenience. We set $P_{EE}^{u,\lambda} = 1$ for all $E$.
If we apply the above definitions to the special case $\eta = 0$, we recover the ordinary Kazhdan-Lusztig polynomials $P_{wv}$ for $W$ and $P_{wv}^\lambda$ for $W_\lambda$, respectively. They are related to the polynomials $P_{v,w}$ in \cite{Kazhdan-Lusztig:Hecke_Alg} by $P_{wv}(q) = q^{\ell(w)-\ell(v)} P_{v,w}(q^{-2})$, and our Kazhdan-Lusztig basis element $\psi(w)$ is the same as $\overline{C_w}$ in \textit{op. cit.}
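For example, if $v < w$ with $\ell(w) - \ell(v) = 1$, then $P_{v,w} = 1$ in the normalization of \cite{Kazhdan-Lusztig:Hecke_Alg}, so in our normalization
\begin{equation*}
P_{wv}(q) = q^{\ell(w)-\ell(v)} P_{v,w}(q^{-2}) = q,
\end{equation*}
which indeed lies in $q \mathbb{Z}[q]$ and evaluates to $-1$ at $q = -1$.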
\subsection{Main algorithm}\label{subsec:algorithm}
Recall from \textsection \ref{subsec:geom_prelim} that $i_{w^D}: C(w^D) \to X$ is the inclusion map of the Schubert cell $C(w^D)$. Recall also that the category $\Mod_{coh}(\mathcal{D}_{C(w^D)},N,\eta)$ is semisimple and $\mathcal{O}_{C(w^D)}^\eta$ is the unique irreducible object. Therefore, any complex $\mathcal{V}^\bullet$ of modules in this category is a direct sum of $\mathcal{O}_{C(w^D)}^\eta$ at various degrees. We write $\chi_q \mathcal{V}^\bullet$ for its generating function in variable $q$, i.e. \begin{equation}\label{eqn:defn_of_chi_q}
\chi_q \mathcal{V}^\bullet = \sum_{p \in \mathbb{Z}} \big( \rank H^p \mathcal{V}^\bullet \big) q^p. \end{equation} Define the map \begin{equation}\label{eqn:defn_of_nu}
\nu: \operatorname{Obj} \Mod_{coh}(\mathcal{D}_\lambda,N,\eta) \xrightarrow{\;\;\;\;\;} \mathcal{H}_\Theta,\quad
\mathcal{F} \mapsto \sum_{D \in W_\Theta \backslash W} \big( \chi_q i_{w^D}^! \mathcal{F} \big) \delta_D. \end{equation} Clearly, this map can be defined for any complexes of $\mathcal{D}_\lambda$-modules with $\eta$-twisted $N$-equivariant cohomologies. The following easy property of $\nu$ is immediate:
\begin{lemma}\label{lem:nu_std}
For any $C \in W_\Theta \backslash W$,
\begin{equation*}
\nu(\mathcal{I}(w^C,\lambda,\eta)) = \delta_C.
\end{equation*} \end{lemma}
\begin{proof}
Let $D \in W_\Theta \backslash W$. Then $i_{w^D}^! \mathcal{I}(w^C,\lambda,\eta) = i_{w^D}^! i_{w^C+} \mathcal{O}_{C(w^C)}^\eta$. By the base change theorem, and since the cells $C(w^C)$ and $C(w^D)$ are disjoint unless $C = D$, this is $\mathcal{O}_{C(w^C)}^\eta$ if $C = D$ and is $0$ otherwise. Hence the claim follows from the definition of $\nu$. \end{proof}
\begin{theorem}[Kazhdan-Lusztig Algorithm for Whittaker modules]\label{thm:KL_alg}
Fix a character $\eta: \mathfrak{n} \to \mathbb{C}$. For any $\lambda \in \mathfrak{h}^*$, there exists a unique map
\begin{equation*}
\varphi_\lambda: W_\Theta \backslash W \xrightarrow{\;\;\;\;\;} \mathcal{H}_\Theta
\end{equation*}
such that for any $C \in W_\Theta \backslash W$, if we write $u$ for the unique element in $A_{\Theta,\lambda}$ such that $C$ is contained in $W_\Theta u W_\lambda$, the following conditions hold:
\begin{enumerate}
\item for some $P_{CD}^{u,\lambda} \in q \mathbb{Z}[q]$,
\begin{equation*}
\varphi_\lambda(C) = \delta_C +
\sum_{\substack{D \in W_\Theta \backslash W_\Theta u W_\lambda\\%
D <_{u,\lambda} C}} P_{CD}^{u,\lambda} \delta_D.
\end{equation*}
\item for any $\alpha \in \Pi \cap \Pi_\lambda$ with $C s_\alpha < C$, there exist $c_D \in \mathbb{Z}$ such that
\begin{equation*}
T_\alpha( \varphi_\lambda (C s_\alpha) ) =
\sum_{\substack{D \in W_\Theta \backslash W_\Theta u W_\lambda \\ D \leqslant_{u,\lambda} C}} c_D \varphi_\lambda( D)
\end{equation*}
\item for any $\beta \in \Pi - \Pi_\lambda$ such that $C s_\beta < C$,
\begin{equation*}
\varphi_{s_\beta \lambda} (C s_\beta) = \varphi_\lambda (C) s_\beta
\end{equation*}
(recall that the action $\mathcal{H}_\Theta \righttoleftarrow W$ is given by $\delta_C \cdot w = \delta_{C w}$).
\item The polynomials $P_{CD}^{u,\lambda}$ are parabolic Kazhdan-Lusztig polynomials for $(W_\lambda,\Pi_\lambda, {\Theta(u,\lambda)})$.
\end{enumerate}
Moreover, the map $\varphi_\lambda$ is given by
\begin{equation*}
\varphi_\lambda(C) = \nu(\mathcal{L}(w^C,\lambda,\eta)).
\end{equation*} \end{theorem}
If $\lambda$ is integral, this reduces to the main theorem of Romanov \cite[Theorem 11]{Romanov:Whittaker}. Readers can go back to \textsection\ref{subsec:geom_idea} for an explanation of the meaning of the algorithm and an outline of the proof.
Let us begin the proof of the theorem. Uniqueness follows from (1), (4), and the uniqueness of Whittaker Kazhdan-Lusztig polynomials. For existence, we will show that $\varphi_\lambda(C) = \nu(\mathcal{L}(w^C,\lambda,\eta))$ satisfies the requirements (1)-(4) by induction on $\ell(w^C)$.
Consider the base case $\ell(w^C) = \ell(w_\Theta)$, that is, $C = W_\Theta$, $w^C = w_\Theta$. The argument for this case is the same as in \cite{Romanov:Whittaker}. Any composition factor of the standard module $\mathcal{I}(w_\Theta,\lambda,\eta)$ is supported on cells $C(w)$ in the closure of $C(w_\Theta)$. But any such $w$ lies in $W_\Theta$ with $w \leqslant w_\Theta$. In particular, $w$ is not the longest element in its right $W_\Theta$-coset unless $w = w_\Theta$. So there is no module supported on $C(w)$ unless $w = w_\Theta$. Hence the only composition factors are supported on $C(w_\Theta)$. By pulling back to $C(w_\Theta)$, we see that there is only one such factor, namely $\mathcal{L}(w_\Theta,\lambda,\eta)$. Thus $\mathcal{I}(w_\Theta,\lambda,\eta) = \mathcal{L}(w_\Theta,\lambda,\eta)$. As a result, \begin{equation*}
\nu(\mathcal{L}(w_\Theta,\lambda,\eta))
= \nu(\mathcal{I}(w_\Theta,\lambda,\eta))
= \delta_{W_\Theta} \end{equation*} by \ref{lem:nu_std}. Therefore, the function $\varphi_\lambda(C)$ satisfies (1) for $C = W_\Theta$. The conditions (2) and (3) are vacuous, and (4) is trivially true. This completes the base case.
Now consider the case $\ell(w^C) = k > \ell(w_\Theta)$. The verification of (1)-(4) for $C$ is divided into subsections.
\subsection{Verification of \ref{thm:KL_alg}(3) for $\ell(w^C) = k$}\label{subsec:(3)}
Assume $\beta \in \Pi - \Pi_\lambda$ is such that $C s_\beta < C$. By definition, \begin{align*}
\varphi_\lambda(C) s_\beta
&= \left( \sum_{D \in W_\Theta \backslash W} \big( \chi_q i_{w^D}^! \mathcal{L}(w^C,\lambda,\eta) \big) \delta_D \right) s_\beta\\
&= \sum_{D \in W_\Theta \backslash W} \big( \chi_q i_{w^D}^! \mathcal{L}(w^C,\lambda,\eta) \big) \delta_{D s_\beta} \end{align*} and \begin{align*}
\varphi_{s_\beta \lambda} (C s_\beta)
&= \sum_{D \in W_\Theta \backslash W} \big( \chi_q i_{w^D}^! \mathcal{L}(w^{Cs_\beta}, s_\beta\lambda,\eta) \big) \delta_D\\
&= \sum_{D \in W_\Theta \backslash W} \big( \chi_q i_{w^{Ds_\beta}}^! \mathcal{L}(w^{Cs_\beta}, s_\beta\lambda,\eta) \big) \delta_{D s_\beta} \end{align*} where in the last equality we rearranged the sum by the bijection $W_\Theta \backslash W \xrightarrow{\raisebox{-1.2ex}[0ex][1ex]{$\sim$}} W_\Theta \backslash W$, $D \mapsto D s_\beta$. Hence it suffices to show \begin{equation*}
\chi_q i_{w^D}^! \mathcal{L}(w^C,\lambda,\eta) = \chi_q i_{w^{Ds_\beta}}^! \mathcal{L}(w^{Cs_\beta}, s_\beta\lambda,\eta) \end{equation*} for any $D \in W_\Theta \backslash W$, or equivalently \begin{equation*}
\rank H^p i_{w^D}^! \mathcal{L}(w^C,\lambda,\eta)
= \rank H^p i_{w^{D s_\beta}}^! \mathcal{L}(w^{C s_\beta},s_\beta\lambda, \eta) \end{equation*} for any $D$ and any $p \in \mathbb{Z}$. This follows from Proposition \ref{thm:Is_pullback}.
\subsection{Verification of \ref{thm:KL_alg}(2) for $\ell(w^C) = k$}\label{subsec:alg_(2)}
This part of the argument is adapted from \cite[V.2]{Milicic:Localization} and is almost identical to \cite[\textsection 5]{Romanov:Whittaker}. Instead of reproving all the details, we briefly review Romanov's argument and point out the main change for our situation.
Let us assume for the moment that $\lambda$ is integral. Suppose $\alpha \in \Pi$ and $C s_\alpha < C$. Writing $\varphi_\lambda(C) = \nu(\mathcal{L}(w^C,\lambda,\eta))$, \ref{thm:KL_alg}(2) reads \begin{equation}\label{eqn:T_condition_rephrased_geom}
(T_\alpha \circ \nu)(\mathcal{L}(w^C s_\alpha,\lambda,\eta)) = \sum_{D \leqslant C} c_D \; \nu(\mathcal{L}(w^D,\lambda,\eta)). \end{equation} In order to prove this, let $X_\alpha$ be the partial flag variety with natural projection $p_\alpha: X \to X_\alpha$. Define the functor \begin{equation*}
U_\alpha := p_\alpha^+ p_{\alpha_+}. \end{equation*} By the decomposition theorem \cite{Mochizuki:Decomp} and a careful study of (integral) intertwining functors, one can show that, for some integers $c_D$, \begin{equation}\label{eqn:U_semisimple}
U_\alpha \mathcal{L}(w^C s_\alpha, \lambda,\eta) = \bigoplus_{D \leqslant C} \mathcal{L}(w^D,\lambda,\eta)^{\oplus c_D} \end{equation} (\cite[Lemma 17]{Romanov:Whittaker}; cf. \cite[Chapter 5 Lemma 2.7]{Milicic:Localization}). Then one shows that \begin{equation}\label{eqn:U_lifts_T}
(T_\alpha \circ \nu)(\mathcal{L}(w^C s_\alpha,\lambda,\eta)) = (\nu \circ U_\alpha)(\mathcal{L}(w^C s_\alpha,\lambda,\eta)). \end{equation}
In other words, $U_\alpha$ is the ``geometric lift'' of the combinatorially defined operator $T_\alpha$. Once this equality is established, (\ref{eqn:U_semisimple}) and (\ref{eqn:U_lifts_T}) together lead to (\ref{eqn:T_condition_rephrased_geom}), which proves \ref{thm:KL_alg}(2). Therefore it remains to prove the equality (\ref{eqn:U_lifts_T}). To do this, first notice that $T_\alpha$ restricts to an operation on the $\mathbb{Z}[q^{\pm1}]$-submodule spanned by $\delta_D$ and $\delta_{D s_\alpha}$, and that $U_\alpha$ operates on the abelian subgroup of $K \Mod_{coh}(\mathcal{D}_\lambda,N,\eta)$ spanned by the classes of $\mathcal{I}(w^D,\lambda,\eta)$ and $\mathcal{I}(w^D s_\alpha,\lambda,\eta)$ (in fact one can easily show that $T_\alpha \circ \nu$ and $\nu \circ U_\alpha$ agree on standard modules). Therefore it is natural to restrict $U_\alpha \mathcal{L}(w^C s_\alpha,\lambda,\eta)$ to the subvariety $X_O := C(w^D) \cup C(w^D s_\alpha)$. Since $X_O$ is the union of two Schubert cells, one can apply the distinguished triangle for local cohomology to $U_\alpha \mathcal{L}(w^C s_\alpha,\lambda,\eta)|_{X_O}$ and take the long exact sequence on cohomologies. Using a parity degree vanishing property of Whittaker Kazhdan-Lusztig polynomials \begin{equation*}
P_{CD} \in \mathbb{Z}[q^{\pm 2}] \cdot q^{\ell(w^C) - \ell(w^D)} \end{equation*}
(\cite[Lemma 15]{Romanov:Whittaker}), this long exact sequence splits into short exact sequences, producing a description of $U_\alpha \mathcal{L}(w^C s_\alpha,\lambda,\eta)|_{C(w^D)}$ in terms of the ranks of $\mathcal{L}(w^C s_\alpha,\lambda,\eta)|_{C(w^D)}$. Equation (\ref{eqn:U_lifts_T}) is then proven by plugging these descriptions into the definition of $(\nu \circ U_\alpha)(\mathcal{L}(w^C s_\alpha,\lambda,\eta))$ and comparing with the left-hand side.
Now let $\lambda$ be general, and let $\alpha \in \Pi \cap \Pi_\lambda$ so that $C s_\alpha < C$. In this case, the $U_\alpha$ functor still exists. In more detail, the definition of $U_\alpha$ requires the existence of a twisted sheaf of differential operators $\mathcal{D}_{X_\alpha,\lambda}$ on $X_\alpha$ whose pullback to $X$ is $\mathcal{D}_\lambda$. Such a sheaf exists if and only if $\alpha^\vee(\lambda) = -1$. Since $\alpha$ is assumed to be integral to $\lambda$, the condition $\alpha^\vee(\lambda) = -1$ can be achieved by twisting $\lambda$ by an integral weight, which can be done by twisting $\mathcal{D}$-modules by a line bundle.
Roughly the same argument applies to this situation, except now \ref{thm:KL_alg}(2) becomes \begin{equation*}
(T_\alpha \circ \nu)(\mathcal{L}(w^C s_\alpha,\lambda,\eta)) = \sum_{\substack{D \in W_\Theta \backslash W_\Theta u W_\lambda \\ D \leqslant_{u,\lambda} C}} c_D \; \nu(\mathcal{L}(w^D,\lambda,\eta)) \end{equation*} which has fewer terms on the right side compared to (\ref{eqn:T_condition_rephrased_geom}). Namely, the summand $\nu(\mathcal{L}(w^D,\lambda,\eta))$ does not appear if either $D$ is not in the same double coset as $C$, or $D \not\leqslant_{u,\lambda} C$. To get this restricted sum, one uses the induction assumption \ref{thm:KL_alg}(1) on $\nu(\mathcal{L}(w^C s_\alpha,\lambda,\eta))$ and shows that \begin{equation}\label{eqn:U_semisimple_restricted}
U_\alpha \mathcal{L}(w^C s_\alpha, \lambda,\eta) = \bigoplus_{\substack{D \in W_\Theta \backslash W_\Theta u W_\lambda \\ D \leqslant_{u,\lambda} C}} \mathcal{L}(w^D,\lambda,\eta)^{\oplus c_D} \end{equation} during the course of proving the lifting (\ref{eqn:U_lifts_T}). Thus \begin{align*}
T_\alpha( \varphi_\lambda(C s_\alpha))
&= \nu( U_\alpha \mathcal{L}(w^C s_\alpha,\lambda,\eta) )\\
&= \nu\Big( \bigoplus_{\substack{D \in W_\Theta \backslash W_\Theta u W_\lambda\\ D \leqslant_{u,\lambda} C}} \mathcal{L}(w^D, \lambda,\eta)^{\oplus c_D} \Big) \\
&= \sum_{\substack{D \in W_\Theta \backslash W_\Theta u W_\lambda\\ D \leqslant_{u,\lambda} C}} c_D \nu (\mathcal{L}(w^D, \lambda,\eta)) \\
&= \sum_{\substack{D \in W_\Theta \backslash W_\Theta u W_\lambda\\ D \leqslant_{u,\lambda} C}} c_D \varphi_\lambda(D). \end{align*} and \ref{thm:KL_alg}(2) is verified for $C$.
\subsection{Verification of \ref{thm:KL_alg}(1) for $\ell(w^C) = k$}\label{subsec:(1)}
The idea of this proof (and the proof of \ref{thm:KL_alg}(4)) is to find a simple reflection $s$ so that $Cs < C$ and deduce information about $\mathcal{L}(w^C,\lambda,\eta)$ from that of $\mathcal{L}(w^{Cs},\lambda,\eta)$. If $s$ is non-integral, we can use the non-integral intertwining functor $I_s$ to translate properties of $C s$ to $C$. If $s$ is integral, we then use information about the $U$-functor.
Suppose there exists $\beta \in \Pi - \Pi_\lambda$ such that $C s_\beta < C$. Then the non-integral intertwining functor $I_{s_\beta}$ sends $\mathcal{L}(w^C,\lambda,\eta)$ to $\mathcal{L}(w^{C s_\beta},s_\beta\lambda,\eta)$, allowing us to translate the induction assumption for the latter module to the former. Since the induction hypothesis applies for $C s_\beta$, from \ref{thm:KL_alg}(1) for $C s_\beta$ and $s_\beta \lambda$, we obtain \begin{equation*}
\varphi_{s_\beta \lambda} (C s_\beta)
= \delta_{C s_\beta}
+ \sum_{\substack{ D \in W_\Theta \backslash W_\Theta r W_{s_\beta \lambda}\\ D <_{r,s_\beta \lambda} C s_\beta}}
Q_{D} \delta_D, \end{equation*} for some polynomials $Q_{D} \in q \mathbb{Z}[q]$, where $r$ is the unique element in $A_{\Theta,s_\beta \lambda}$ such that $C s_\beta \in W_\Theta \backslash W_\Theta r W_{s_\beta \lambda}$. Applying \ref{thm:KL_alg}(3) for $C$ (which was already proven), \begin{equation*}
\varphi_\lambda(C)
= \varphi_{s_\beta \lambda} (C s_\beta) s_\beta
= \delta_C
+ \sum_{\substack{ D \in W_\Theta \backslash W_\Theta r W_{s_\beta \lambda}\\ D <_{r,s_\beta \lambda} C s_\beta}}
Q_{D} \delta_{D s_\beta}. \end{equation*} We want to rewrite the subscript of the sum. By Lemma \ref{lem:Is_pres_lowest_db_coset} and its corollary, there exists $w \in W_\Theta$ with $w r = u s_\beta$. Hence \begin{align*}
W_\Theta r W_{s_\beta \lambda}
&= W_\Theta w r s_\beta W_\lambda s_\beta\\
&= W_\Theta u s_\beta s_\beta W_\lambda s_\beta\\
&= W_\Theta u W_\lambda s_\beta, \end{align*} and we see that $D \in W_\Theta \backslash W_\Theta r W_{s_\beta \lambda}$ if and only if $D s_\beta \in W_\Theta \backslash W_\Theta u W_\lambda$. By Proposition \ref{lem:Is_right_coset}, \begin{equation*}
D <_{r,s_\beta \lambda} C s_\beta \iff D s_\beta <_{u,\lambda} C. \end{equation*} Hence \begin{align*}
\varphi_\lambda(C) &= \delta_C
+ \sum_{\substack{ D s_\beta \in W_\Theta \backslash W_\Theta u W_\lambda \\ D s_\beta <_{u,\lambda}C}}
Q_{D} \delta_{D s_\beta}\\
&= \delta_C
+ \sum_{\substack{ E \in W_\Theta \backslash W_\Theta u W_\lambda \\ E <_{u,\lambda}C}}
Q_{E s_\beta} \delta_E \end{align*} where we substituted $E = D s_\beta$. Since each $Q_{E s_\beta}$ lies in $q \mathbb{Z}[q]$, \ref{thm:KL_alg}(1) holds for $C$ in this case.
If such $\beta$ does not exist, then there exists a simple integral root $\alpha$ with $C s_\alpha < C$. From (\ref{eqn:U_semisimple_restricted}), we know that $U_\alpha \mathcal{L}(w^C s_\alpha,\lambda,\eta)$ is a direct sum of various irreducible modules supported in the closure of $C(w^C)$. Moreover, since the support of $U_\alpha \mathcal{L}(w^C s_\alpha,\lambda,\eta)$ is the closure of ${p_\alpha^{-1}(p_\alpha(C(w^{Cs_\alpha})))}$, which is \textit{equal} to the closure of $C(w^C)$, $\mathcal{L}(w^C,\lambda,\eta)$ has to be a direct summand of $U_\alpha \mathcal{L}(w^C s_\alpha,\lambda,\eta)$. So the coefficients of the polynomial $\chi_q i_{w^D}^! \mathcal{L}(w^C,\lambda,\eta)$ (which are non-negative integers) must be dominated by those of $\chi_q i_{w^D}^! U_\alpha \mathcal{L}(w^C s_\alpha,\lambda,\eta)$. On the other hand, we know from (\ref{eqn:U_semisimple_restricted}) that the latter polynomial vanishes if $D$ is not in $W_\Theta u W_\lambda$ or if $D \not\leqslant_{u,\lambda} C$. So the polynomial $\chi_q i_{w^D}^! \mathcal{L}(w^C,\lambda,\eta)$ also vanishes for those $D$. Hence \begin{align*}
\varphi_\lambda(C)
&= \sum_{D \in W_\Theta \backslash W} \big( \chi_q i_{w^D}^! \mathcal{L}(w^C,\lambda,\eta) \big) \delta_D\\
&= \sum_{\substack{ D \in W_\Theta \backslash W_\Theta u W_\lambda \\D \leqslant_{u,\lambda} C}} \big( \chi_q i_{w^D}^! \mathcal{L}(w^C,\lambda,\eta) \big) \delta_D. \end{align*}
It suffices to compute the remaining coefficients. Consider $D = C$. Since $\mathcal{I}(w^C,\lambda,\eta)$ is a direct image, it contains no section supported in $\partial C(w^C)$ except $0$. The same holds for $\mathcal{L}(w^C,\lambda,\eta)$, since it is a submodule of $\mathcal{I}(w^C,\lambda,\eta)$. Hence $\mathcal{L}(w^C,\lambda,\eta)|_{X - \partial C(w^C)}$ is a nonzero submodule of $\mathcal{I}(w^C,\lambda,\eta)|_{X - \partial C(w^C)}$. But $\mathcal{I}(w^C,\lambda,\eta)|_{X - \partial C(w^C)}$ is irreducible by Kashiwara's equivalence of categories for the closed immersion $C(w^C) \hookrightarrow X - \partial C(w^C)$, so $\mathcal{L}(w^C,\lambda,\eta)|_{X - \partial C(w^C)} = \mathcal{I}(w^C,\lambda,\eta)|_{X - \partial C(w^C)}$, and their further pullback to $C(w^C)$ is $\mathcal{O}_{C(w^C)}^\eta$. Hence the coefficient of $\delta_C$ is $1$. For $D < C$, we know $H^0 i_{w^D}^!$ takes sections supported in $C(w^D)$. Since $\mathcal{L}(w^C,\lambda,\eta)$ has no section supported in $\partial C(w^C) \supset C(w^D)$, $H^0 i_{w^D}^! \mathcal{L}(w^C,\lambda,\eta) = 0$ and the coefficient of $\delta_D$ has no constant term. Thus \ref{thm:KL_alg}(1) holds for $C$.
\subsection{Verification of \ref{thm:KL_alg}(4) for $\ell(w^C) = k$}\label{subsec:(4)}
Based on our definition of Whittaker Kazhdan-Lusztig polynomials \ref{def:parabolic_KL_poly}, we need to find $\alpha \in \Pi_\lambda$ such that $C s_\alpha <_{u,\lambda} C$ and equation (\ref{eqn:parabolic_KL_poly}) holds for the function \begin{equation*}
\psi_{u,\lambda}(C|_\lambda) := \varphi_\lambda(C)|_\lambda. \end{equation*} See \textsection\ref{subsec:lem_induction} for an explanation of the geometric idea behind this proof.
If $\alpha$ can be chosen to be in $\Pi \cap \Pi_\lambda$, then by the following lemma, (\ref{eqn:parabolic_KL_poly}) follows from \ref{thm:KL_alg}(2) for $C$.
\begin{lemma}
Let $\alpha \in \Pi \cap \Pi_\lambda$. Then for each $u \in A_{\Theta,\lambda}$
\begin{equation*}
(-)|_\lambda \circ T_\alpha = T_\alpha^{u,\lambda} \circ (-)|_\lambda
\end{equation*}
as maps from $\ind_\lambda \mathcal{H}_{\Theta(u,\lambda)} \subseteq \mathcal{H}_\Theta$ to $\mathcal{H}_{\Theta(u,\lambda)}$. In other words, the following diagram commutes
\begin{equation*}
\begin{tikzcd}[column sep=10ex]
\mathcal{H}_\Theta \ar[r, "T_\alpha"] \ar[d, "(-)|_\lambda"'] & \mathcal{H}_\Theta \ar[d, "(-)|_\lambda"]\\
{\displaystyle \bigoplus_{u \in A_{\Theta,\lambda}} \mathcal{H}_{\Theta(u,\lambda)}} \ar[r, "\bigoplus_u T_\alpha^{u,\lambda}"] & {\displaystyle \bigoplus_{u \in A_{\Theta,\lambda}} \mathcal{H}_{\Theta(u,\lambda)}}
\end{tikzcd}
\end{equation*}
(recall the definitions of $T_\alpha^{u,\lambda}$ and $(-)|_\lambda$ in \textsection \ref{subsec:KL_poly}). \end{lemma}
The proof is straightforward. It consists of unwrapping definitions and using the fact that $\ind_\lambda$ (defined in Corollary \ref{lem:right_coset_partition} and in \textsection \ref{subsec:KL_poly}) preserves partial orders on right cosets (\ref{thm:int_Whittaker_model}).
If such $\alpha$ cannot be found, we will need to use non-integral intertwining functors to move $\alpha$ to some simple root $s_{\beta_s} \cdots s_{\beta_1} \alpha = z^{-1} \alpha$ and move $\mathcal{L}(w^C,\lambda,\eta)$ to some irreducible module supported on a smaller orbit, and then translate the induction assumption back. The translation requires the following lemma. The proof is similar to that of the previous lemma, using \ref{lem:Is_right_coset} instead of \ref{thm:int_Whittaker_model}.
\begin{lemma}
Let $\alpha \in \Pi \cap \Pi_\lambda$, $\beta \in \Pi - \Pi_\lambda$. For any $u \in A_{\Theta,\lambda}$, let $r \in A_{\Theta,s_\beta\lambda}$ be the unique element such that $W_\Theta u s_\beta W_{s_\beta\lambda} = W_\Theta r W_{s_\beta \lambda}$. Then
\begin{equation*}
(s_\beta (-) s_\beta ) \circ T_\alpha^{u,\lambda} = T_{s_\beta \alpha}^{r,s_\beta\lambda} \circ (s_\beta (-) s_\beta)
\end{equation*}
as maps from $\mathcal{H}_{\Theta(u,\lambda)}$ to $\mathcal{H}_{\Theta(r,s_\beta\lambda)}$, where $s_\beta (-) s_\beta$ denotes conjugation by $s_\beta$. In other words, the following diagram commutes
\begin{equation*}
\begin{tikzcd}[column sep=10ex]
\mathcal{H}_{\Theta(u,\lambda)} \ar[r, "T_\alpha^{u,\lambda}"] \ar[d, "s_\beta(-)s_\beta"'] & \mathcal{H}_{\Theta(u,\lambda)} \ar[d, "s_\beta(-)s_\beta"]\\
\mathcal{H}_{\Theta(r,s_\beta\lambda)} \ar[r, "T_{s_\beta\alpha}^{r,s_\beta\lambda}"] & \mathcal{H}_{\Theta(r,s_\beta\lambda)}.
\end{tikzcd}
\end{equation*} \end{lemma}
Choose $\alpha \in \Pi_\lambda$, $s \geqslant 0$ and $\beta_1,\ldots,\beta_s \in \Pi$ such that if we write $z_0 = 1$, $z_i = s_{\beta_1} \cdots s_{\beta_i}$ and $z = z_s$, the following conditions hold: \begin{enumerate}[label=(\alph*)]
\item for any $0 \leqslant i \leqslant s-1$, $\beta_{i+1}$ is non-integral to $z_i^{-1} \lambda$;
\item $z^{-1} \alpha \in \Pi \cap \Pi_{z^{-1} \lambda}$;
\item $C s_\alpha <_{u,\lambda} C$;
\item if $s > 0$, $C z < C$;
\item $C s_\alpha z = C z s_{z^{-1} \alpha} < C z$. \end{enumerate} Such a choice exists by \ref{lem:decrease_of_length}. Combining the lemmas with the diagram (\ref{eqn:Is_right_coset}), we obtain a commutative diagram \begin{equation}\label{eqn:three_comm_diag}
\begin{tikzcd}[column sep=10ex]
& \mathcal{H}_\Theta \ar[r, "T_{z^{-1}\alpha}"] \ar[d, "(-)|_{z^{-1}\lambda}"'] \ar[dl, "(-)z^{-1}"']
& \mathcal{H}_\Theta \ar[d, "(-)|_{z^{-1}\lambda}"]\\[3ex]
\mathcal{H}_\Theta \ar[d, "(-)|_\lambda"']
& {\displaystyle \bigoplus_{r \in A_{\Theta,z^{-1}\lambda}} \mathcal{H}_{\Theta(r,z^{-1}\lambda)}} \ar[r, "{\bigoplus_{r} T_{z^{-1}\alpha}^{r,z^{-1}\lambda}}"] \ar[dl, "z(-)z^{-1}"']
& {\displaystyle \bigoplus_{r \in A_{\Theta,z^{-1}\lambda}} \mathcal{H}_{\Theta(r,z^{-1}\lambda)}} \ar[dl, "z(-)z^{-1}"]\\
{\displaystyle \bigoplus_{u \in A_{\Theta,\lambda}} \mathcal{H}_{\Theta(u,\lambda)}} \ar[r, "{\bigoplus_u T_\alpha^{u,\lambda}}"]
& {\displaystyle \bigoplus_{u \in A_{\Theta,\lambda}} \mathcal{H}_{\Theta(u,\lambda)}}
\end{tikzcd}. \end{equation}
Since $Cz < C$, the induction assumption applies to $C z$ for $z^{-1} \lambda$. In particular, if we apply \ref{thm:KL_alg}(2) to $C z s_{z^{-1}\alpha} < C z$ and $z^{-1} \lambda$, we obtain the equation \begin{equation}\label{eqn:(4)_(assumption)}
T_{z^{-1} \alpha} \big( \varphi_{z^{-1} \lambda}( C z s_{z^{-1} \alpha}) \big)
= \sum_{\substack{D\in W_\Theta \backslash W_\Theta r W_{z^{-1} \lambda}\\ D \leqslant_{r,z^{-1}\lambda} Cz}} c_D \varphi_{z^{-1} \lambda}(D) \end{equation}
where $r$ is the unique element in $A_{\Theta,z^{-1}\lambda}$ such that $Cz \in W_\Theta \backslash W_\Theta r W_{z^{-1} \lambda}$. We apply to both sides $(-)|_{z^{-1} \lambda}$ followed by $z(-)z^{-1}$.
If we view $\varphi_{z^{-1} \lambda}( C z s_{z^{-1} \alpha})$ as an element in the middle $\mathcal{H}_\Theta$ in the diagram, the left side of (\ref{eqn:(4)_(assumption)}) lands in $\mathcal{H}_{\Theta(u,\lambda)}$ in the bottom middle of the diagram through the rightmost path after applying $(-)|_{z^{-1} \lambda}$ and $z(-)z^{-1}$. Going through the leftmost path instead, this element in $\mathcal{H}_{\Theta(u,\lambda)}$ becomes \begin{equation*}
T_\alpha^{u,\lambda} \big( \varphi_{z^{-1} \lambda}( C z s_{z^{-1} \alpha} )z^{-1}|_\lambda \big). \end{equation*} Rewriting $Cz s_{z^{-1}\alpha} = Cs_\alpha z$ and using \ref{thm:KL_alg}(3) for $C s_\alpha$, the above quantity becomes \begin{equation*}
T_\alpha^{u,\lambda} \big( \varphi_\lambda( C s_\alpha)|_\lambda \big)
= T_\alpha^{u,\lambda} (\psi_{u,\lambda}(C|_\lambda) ). \end{equation*}
Viewing the right side of (\ref{eqn:(4)_(assumption)}) as an element in the middle $\mathcal{H}_\Theta$ in the diagram, $(-)|_{z^{-1} \lambda}$ and $z(-)z^{-1}$ send it to $\mathcal{H}_{\Theta(u,\lambda)}$ at the bottom-left along the middle path. Going through the leftmost path instead, this element becomes \begin{equation*}
\sum_{\substack{D\in W_\Theta \backslash W_\Theta r W_{z^{-1} \lambda}\\ D \leqslant_{r,z^{-1}\lambda} Cz}} c_D \varphi_\lambda(Dz^{-1})|_\lambda
= \sum_{\substack{D\in W_\Theta \backslash W_\Theta r W_{z^{-1} \lambda}\\ D \leqslant_{r,z^{-1}\lambda} Cz}} c_D \psi_{u,\lambda}( (D z^{-1})|_\lambda). \end{equation*} As in the first part of \textsection\ref{subsec:(1)}, we can rewrite the subscript of the sum. There is an element $w \in W_\Theta$ such that $wr = u z$ by \ref{lem:Is_pres_lowest_db_coset}. Hence \begin{align*}
W_\Theta r W_{z^{-1} \lambda}
&= W_\Theta wr z^{-1} W_\lambda z\\
&= W_\Theta uz z^{-1} W_\lambda z\\
&= W_\Theta u W_\lambda z, \end{align*} and $D \in W_\Theta \backslash W_\Theta r W_{z^{-1} \lambda}$ if and only if $D z^{-1} \in W_\Theta \backslash W_\Theta u W_\lambda$. Moreover, by \ref{lem:Is_right_coset}, \begin{equation*}
D \leqslant_{r,z^{-1}\lambda} Cz \iff D z^{-1} \leqslant_{u,\lambda} C. \end{equation*} Hence (\ref{eqn:(4)_(assumption)}) becomes \begin{equation*}
T_\alpha^{u,\lambda} (\psi_{u,\lambda}(C|_\lambda) )
= \sum_{\substack{ Dz^{-1} \in W_\Theta \backslash W_\Theta u W_\lambda\\ D z^{-1} \leqslant_{u,\lambda} C}} c_D \psi_{u,\lambda}( (D z^{-1})|_\lambda). \end{equation*} Therefore, $\alpha \in \Pi_\lambda$ is such that $C s_\alpha <_{u,\lambda} C$ and equation (\ref{eqn:parabolic_KL_poly}) holds for $C s_\alpha$. By \ref{def:parabolic_KL_poly}, the polynomials $P_{CD}^{u,\lambda}$ are parabolic Kazhdan-Lusztig polynomials for $(W_\lambda,\Pi_\lambda,\Theta(u,\lambda))$. Thus \ref{thm:KL_alg}(4) holds for $C$.
This completes the proof of the algorithm \ref{thm:KL_alg}.
\section{Character formula for irreducible modules}\label{sec:character_formula}
\subsection{Regular case}\label{subsec:character_formula}
By standard arguments, the algorithm \ref{thm:KL_alg} leads to a character formula for irreducible Whittaker modules for regular infinitesimal characters: one takes \ref{thm:KL_alg}(1)(4) for $-\lambda$ dominant regular (so that $\lambda$ is antidominant regular), precomposing with holonomic duality $\mathbb{D}$ (so that standard $\mathcal{D}$-modules $\mathcal{I}$ become costandard $\mathcal{D}$-modules $\mathcal{M}$), descending to the Grothendieck group by specializing at $q=-1$, passing through Beilinson-Bernstein localization, and applying the character map.
In more details, let $\lambda \in \mathfrak{h}^*$ be antidominant regular. As explained in \textsection\ref{subsec:geom_prelim}, Beilinson-Bernstein's localization and holonomic duality are (anti-)equivalences of categories which send Whittaker modules to $\mathcal{D}$-modules. Combined with the map $\nu$, we obtain the composition \begin{equation}\label{eqn:flowchart}
\arraycolsep=1.4pt
\begin{array}{ccccccccc}
\mathcal{N}_{\theta,\eta} & \xleftarrow{\Gamma(X,-)} &\Mod_{coh}(\mathcal{D}_\lambda,N,\eta) &\xrightarrow{\mathbb{D}} &\Mod_{coh}(\mathcal{D}_{-\lambda},N,\eta) &\xrightarrow{\nu}
&\mathcal{H}_\Theta &\xrightarrow{(-)|_{-\lambda}}
&{\displaystyle \bigoplus_{u \in A_{\Theta,-\lambda}} \mathcal{H}_{\Theta(u,-\lambda)}},\\
L(w^C\lambda,\eta) &\mapsfrom &\mathcal{L}(w^C,\lambda,\eta) &\mapsto &\mathcal{L}(w^C,-\lambda,\eta) &\mapsto &\varphi_{-\lambda}(C) &\mapsto &\varphi_\lambda(C)|_{-\lambda}\\
M(w^C\lambda,\eta) &\mapsfrom &\mathcal{M}(w^C,\lambda,\eta) &\mapsto &\mathcal{I}(w^C,-\lambda,\eta) &\mapsto &\delta_C &\mapsto &\delta_{C|_{-\lambda}}.
\end{array} \end{equation} At $q=-1$, the coefficients $\chi_q i_{w^D}^! \mathcal{F}$ in the definition of $\nu$ is additive with respect to short exact sequences. Hence $\nu$ factors through the Grothendieck group \begin{equation*}
\nu|_{q=-1}: K\Mod_{coh}(\mathcal{D}_{-\lambda},N,\eta) \xrightarrow{\;\;\;\;\;} \mathcal{H}_\Theta|_{q=-1} \end{equation*} which is an isomorphism by \ref{lem:nu_std}, since the classes of the standard modules form a $\mathbb{Z}$-basis of the Grothendieck group and are sent to the basis $\{\delta_C\}$. Therefore we have an isomorphism of abelian groups \begin{equation*}
\arraycolsep=1.4pt
\begin{array}{ccc}
K\mathcal{N}_{\theta,\eta} & \xrightarrow{\cong} & {\displaystyle \bigoplus_{u \in A_{\Theta,-\lambda}} \mathcal{H}_{\Theta(u,-\lambda)}|_{q=-1}}\\
{[L(w^C\lambda,\eta)]} & \mapsto & \varphi_\lambda(C)|_{-\lambda}|_{q=-1}\\
{[M(w^C\lambda,\eta)]} & \mapsto & \delta_{C|_{-\lambda}}|_{q=-1},
\end{array} \end{equation*} where $[-]$ takes the class in the Grothendieck group. Hence Theorem \ref{thm:KL_alg}(1) and (4) imply \begin{equation*}
[L(w^C\lambda,\eta)] =
\sum_{\substack{D \in W_\Theta \backslash W_\Theta u W_{-\lambda}\\ D \leqslant_{u,-\lambda} C}}
P_{CD}^{u,-\lambda}(-1) [M(w^D \lambda,\eta)] \end{equation*} in $K \mathcal{N}_{\theta,\eta}$. Note that $\Sigma_\lambda = \Sigma_{-\lambda}$ as subsets of $\Sigma$ and $W_\lambda = W_{-\lambda}$ as subgroups of $W$. Hence all the combinatorial structures defined based on $\lambda$ and $-\lambda$ are canonically identified. Further applying the character map, we thus obtain
\begin{theorem}[Character formula: Regular case] \label{thm:multiplicity}
Let $\lambda \in \mathfrak{h}^*$ be antidominant and regular. Let $\eta: \mathfrak{n} \to \mathbb{C}$ be any character. For any $C \in W_\Theta \backslash W$, let $u \in A_{\Theta,\lambda}$ be the unique element such that $C \subseteq W_\Theta u W_\lambda$. Then
\begin{equation}
\ch L(w^C\lambda,\eta) =
\sum_{\substack{D \in W_\Theta \backslash W_\Theta u W_\lambda\\ D \leqslant_{u,\lambda} C}}
P_{CD}^{u,\lambda}(-1) \ch M(w^D \lambda,\eta),
\end{equation}
where the polynomials $P_{CD}^{u,\lambda}$ are Whittaker Kazhdan-Lusztig polynomials for $(W_\lambda,\Pi_\lambda,\Theta(u,\lambda))$ as defined in \ref{def:parabolic_KL_poly} and $P_{CC}^{u,\lambda} = 1$. \end{theorem}
When $\lambda$ is integral, we have a simpler description, which we state separately.
\begin{corollary}[Character formula: Regular integral case]
Let $\lambda \in \mathfrak{h}^*$ be antidominant, regular, and integral. Let $\eta: \mathfrak{n} \to \mathbb{C}$ be any character. For any $C \in W_\Theta \backslash W$,
\begin{equation*}
\ch L(w^C\lambda,\eta) =
\sum_{\substack{D \in W_\Theta \backslash W\\ D \leqslant C}}
P_{CD}(-1) \ch M(w^D \lambda,\eta),
\end{equation*}
where the polynomials $P_{CD}$ are Whittaker Kazhdan-Lusztig polynomials for $(W,\Pi,\Theta)$ as defined in \ref{def:parabolic_KL_poly_Theta} and $P_{CC}=1$. \end{corollary}
Inverting the matrix $(P_{CD}(-1))_{C,D}$, we recover the description in \cite{Milicic-Soergel:Whittaker_algebraic} and \cite{Romanov:Whittaker} of multiplicities of irreducible Whittaker modules in standard Whittaker modules with antidominant regular integral infinitesimal characters.
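To make the inversion explicit: since $P_{CC} = 1$ and $P_{CD} \in q\mathbb{Z}[q]$ for $D < C$, the matrix $\big(P_{CD}(-1)\big)_{C,D}$ is unitriangular with respect to $\leqslant$ and hence invertible over $\mathbb{Z}$. Writing $\big(m_{CD}\big)_{C,D}$ for its inverse, we obtain in $K\mathcal{N}_{\theta,\eta}$
\begin{equation*}
[M(w^C \lambda,\eta)] =
\sum_{\substack{D \in W_\Theta \backslash W\\ D \leqslant C}}
m_{CD}\, [L(w^D \lambda,\eta)],
\end{equation*}
so the integers $m_{CD}$ are the multiplicities $[M(w^C \lambda,\eta) : L(w^D \lambda,\eta)]$.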
At another extreme, when $\eta = 0$ (i.e. $\Theta = \varnothing$), we recover the well-known non-integral Kazhdan-Lusztig conjecture for highest weight modules (see, for example, \cite[Chapter 1]{Lusztig:Char_finite_field}, \cite[\textsection2.5 Theorem 11]{Soergel:V}, \cite[Theorem 0.1]{Kashiwara-Tanisaki:Non-int_KL}).
\begin{corollary}[Kazhdan-Lusztig conjecture for Verma modules]
Let $\lambda \in \mathfrak{h}^*$ be antidominant and regular. For any $w \in W$, let $u \in A_\lambda$ be the unique element so that $w \in u W_\lambda$ (\textsection \ref{subsec:Bruhat_W/Wlambda}). For any $v \in u W_\lambda$, we write $v \leqslant_\lambda w$ if $u^{-1} v = v|_\lambda \leqslant_\lambda w|_\lambda = u^{-1} w$. Then
\begin{equation*}
\ch L(w\lambda) =
\sum_{\substack{v \in u W_\lambda\\ v \leqslant_\lambda w}} P_{wv}^\lambda(-1) \ch M(v\lambda),
\end{equation*}
where the polynomials $P_{wv}^\lambda$ are Kazhdan-Lusztig polynomials for $(W_\lambda,\Pi_\lambda, \varnothing)$ as defined in \ref{def:parabolic_KL_poly}, $P_{ww}=1$, $M(v\lambda)$ is the Verma module of highest weight $v\lambda - \rho$, and $L(w \lambda)$ is the unique irreducible quotient of $M(w \lambda)$ (recall that $\rho$ is the half sum of roots in $\mathfrak{n}$). \end{corollary}
As we have remarked at the end of \textsection \ref{subsec:KL_poly}, our (ordinary) Kazhdan-Lusztig polynomials $P_{wv}$ are related to the ones $P_{v,w}$ defined in \cite{Kazhdan-Lusztig:Hecke_Alg} by $P_{wv}(q) = q^{\ell(w)-\ell(v)} P_{v,w}(q^{-2})$. Therefore, if we write $P_{v,w}^\lambda$ for the polynomials in \cite{Kazhdan-Lusztig:Hecke_Alg} defined for $(W_\lambda,\Pi_\lambda)$, the coefficients $P_{wv}^\lambda(-1)$ in the above corollary become $(-1)^{\ell_\lambda(w) - \ell_\lambda(v)} P_{v,w}^\lambda(1)$, which agree with the coefficients appearing in \cite[Theorem 0.1]{Kashiwara-Tanisaki:Non-int_KL}.
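For example, in the smallest case $\mathfrak{g} = \mathfrak{sl}_2$ with $W = \{1,s\}$ and $\lambda$ antidominant regular integral, the only nontrivial instance of the corollary is $w = s$: the only relevant polynomial is $P_{wv}^\lambda$ with $w = s$ and $v = 1$, which equals $q$, so the formula reads
\begin{equation*}
\ch L(s\lambda) = \ch M(s\lambda) - \ch M(\lambda),
\end{equation*}
expressing the character of the finite-dimensional module of highest weight $s\lambda - \rho$ as a difference of two Verma characters, as in the Weyl character formula. (For $w = 1$ the corollary simply says that $M(\lambda)$ is irreducible, as it must be for antidominant $\lambda$.)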
\begin{remark}\label{rmk:Verma_case_old_proofs}
As is mentioned in the introduction, our argument provides a new proof of the non-integral Kazhdan-Lusztig conjecture for Verma modules. Here let us briefly recall the classical approaches to this conjecture.
After their resolution of the integral Kazhdan-Lusztig conjecture, Beilinson and Bernstein also treated the case where the infinitesimal character is rational (unpublished). They interpreted the multiplicities of irreducible modules in Verma modules topologically in terms of local intersection cohomologies of line bundles over Schubert varieties. Based on this interpretation, Lusztig \cite[Chapter 1]{Lusztig:Char_finite_field} gave explicit formulas for these intersection cohomology groups in positive characteristic. Since these groups can be identified with the corresponding groups in characteristic $0$, Lusztig's formulas resolve the rational Kazhdan-Lusztig conjecture. Once the rational case is treated, the general (regular) case follows by a Zariski density argument.
Soergel gave an alternative proof of the conjecture by showing that the multiplicities we care about depend only on the integral Weyl group $W_\lambda$ \cite[Theorem 11]{Soergel:V}. In particular, the non-integral Kazhdan-Lusztig conjecture is reduced to the integral conjecture for $W_\lambda$. His proof involves the study of coinvariant algebras and Soergel modules, which again goes through the study of intersection cohomology complexes of Schubert varieties.
In comparison, our approach (by using intertwining operators) is uniform for any (possibly non-integral) regular infinitesimal character (in that it does not require first treating the rational case) and is a $\mathcal{D}$-module theoretic argument. This is important for us: localizations of Whittaker modules have irregular singularities and do not correspond to perverse sheaves (an example of such a module is contained in \cite{Milicic-Soergel:Whittaker_geometric} at the end of \textsection 4), so existing methods do not apply to our situation. Moreover, since non-integral intertwining functors are equivalences of categories of all quasi-coherent $\mathcal{D}$-modules (Theorem \ref{lem:non-int_I}), an argument similar to ours should work for other non-integral Kazhdan-Lusztig problems for other categories of Lie algebra representations, provided that the integral situation is already treated and that there is an expected answer for the non-integral case. \end{remark}
\subsection{Singular case}\label{subsec:characer_formula_singular}
The singular case can be deduced from the regular case easily.
Let $\lambda \in \mathfrak{h}^*$ be antidominant and singular. We still have the maps (\ref{eqn:flowchart}), but the exact functor $\Gamma(X,-)$ is no longer an equivalence of categories and only descends to a surjection $K \Mod_{coh}(\mathcal{D}_\lambda,N,\eta) \twoheadrightarrow K \mathcal{N}_{\theta,\eta}$ on Grothendieck groups. However, the identification $\Gamma(X,\mathcal{M}(w^D,\lambda,\eta)) = M(w^D \lambda,\eta)$ still holds. Therefore, the argument for the regular case produces the equality \begin{equation}\label{eqn:character_formula_pre}
\ch \Gamma(X,\mathcal{L}(w^C,\lambda,\eta)) =
\sum_{\substack{D \in W_\Theta \backslash W_\Theta u W_\lambda\\%
D \leqslant_{u,\lambda} C}}
P_{CD}^{u,\lambda}(-1) \ch M(w^D \lambda,\eta). \end{equation} However, $\Gamma(X,\mathcal{L}(w^C,\lambda,\eta))$ could be zero, and the standard modules $M(w^D \lambda,\eta)$ could coincide for different $D$. Therefore, it suffices to describe which $M(w^D \lambda,\eta)$ coincide and which $\Gamma(X,\mathcal{L}(w^C,\lambda,\eta))$ are zero.
The first question has an easy answer. Recall from \textsection\ref{subsec:Wh_prelim} that for $C,D \in W_\Theta \backslash W$, $M(w^D \lambda,\eta) = M(w^C \lambda,\eta)$ if and only if $W_\Theta w^D \lambda = W_\Theta w^C \lambda$. Let $W^\lambda$ be the stabilizer of $\lambda$ in $W$. Then the above condition is equivalent to $W_\Theta w^D W^\lambda = W_\Theta w^C W^\lambda$, i.e. that $C$ and $D$ are in the same double $(W_\Theta,W^\lambda)$-coset.
\begin{lemma}\label{lem:std_mods_coincide}
Let $\lambda \in \mathfrak{h}^*$ be antidominant and let $\eta :\mathfrak{n} \to \mathbb{C}$ be a character. The following are equivalent:
\begin{enumerate}[label=(\alph*)]
\item $M(w^C \lambda,\eta) = M(w^D \lambda,\eta)$;
\item $\Gamma(X,\mathcal{M}(w^C,\lambda,\eta)) = \Gamma(X,\mathcal{M}(w^D,\lambda,\eta))$;
\item $C$ and $D$ are in the same double $(W_\Theta,W^\lambda)$-coset.
\end{enumerate} \end{lemma}
Therefore, for a fixed standard Whittaker module $M$, there is a unique double coset $W_\Theta v W^\lambda$ such that $\Gamma(X,\mathcal{M}(w^D,\lambda,\eta)) = M$ for all $D \in W_\Theta \backslash W_\Theta v W^\lambda$.
The following proposition answers the second question.
\begin{proposition}\label{lem:irred_vanishing_singular}
Let $\lambda \in \mathfrak{h}^*$ be antidominant and let $\eta :\mathfrak{n} \to \mathbb{C}$ be a character. Let $v \in W$. Then the set $W_\Theta \backslash W_\Theta v W^\lambda$ of right $W_\Theta$-cosets contains a unique smallest element $C$. Furthermore,
\begin{enumerate}[label=(\alph*)]
\item $\Gamma(X, \mathcal{L}(w^C,\lambda,\eta)) = L(w^C \lambda,\eta) \neq 0$; and
\item $\Gamma(X,\mathcal{L}(w^D,\lambda,\eta)) = 0$ for any $D \in W_\Theta \backslash W_\Theta v W^\lambda$ not equal to $C$.
\end{enumerate} \end{proposition}
In other words, for a fixed standard Whittaker module $M$, among all the costandard $\mathcal{D}_\lambda$-modules that realize $M$, the irreducible quotient of the one with the smallest support realizes the unique irreducible quotient of $M$.
\begin{proof}
Write $M = M(v\lambda,\eta)$ and $L = L(v\lambda,\eta)$.
First, there is one and only one $D$ in $W_\Theta \backslash W$ with $\Gamma(X,\mathcal{L}(w^D,\lambda,\eta)) = L$. This is because there is a unique irreducible $\mathcal{D}_\lambda$-module $\mathcal{V}$ with $\Gamma(X,\mathcal{V}) = L$ (see \cite[Chapter 3 \textsection 5 Proposition 5.2]{Milicic:Localization}; in fact, $\mathcal{V}$ is the unique irreducible quotient of $\mathcal{D}_\lambda \dotimes_{\mathcal{U}_\theta} L$). By the classification of irreducible twisted Harish-Chandra sheaves, $\mathcal{V}$ is isomorphic to $\mathcal{L}(w^D,\lambda,\eta)$ for a single $D \in W_\Theta \backslash W$.
Since $\mathcal{L}(w^D,\lambda,\eta)$ is the unique irreducible quotient of $\mathcal{M}(w^D,\lambda,\eta)$ and $\Gamma(X,-)$ is exact on $\mathcal{D}_\lambda$-modules, $\Gamma(X,\mathcal{L}(w^D,\lambda,\eta))$ is equal to the unique irreducible quotient $L(w^D\lambda,\eta)$ of $M(w^D\lambda,\eta)$. Therefore, the equality $L(v \lambda,\eta) = \Gamma(X,\mathcal{V}) = \Gamma(X,\mathcal{L}(w^D,\lambda,\eta)) = L(w^D \lambda,\eta)$ implies $M(v \lambda,\eta) = M(w^D \lambda,\eta)$, which forces our $D$ to be in the double coset $W_\Theta v W^\lambda$.
It remains to show that such a $D$ is the minimum element of $W_\Theta \backslash W_\Theta v W^\lambda$. Let $C$ be a minimal element in $W_\Theta \backslash W_\Theta v W^\lambda$. The composition factors of $\mathcal{M}(w^C,\lambda,\eta)$ consist of some $\mathcal{L}(w^E,\lambda,\eta)$'s with $E \leqslant C$. Taking global sections, we see that the composition factors of $M = \Gamma(X,\mathcal{M}(w^C,\lambda,\eta))$ consist of those $\Gamma(X,\mathcal{L}(w^E,\lambda,\eta))$ that are nonzero, with $E \leqslant C$. On the other hand, $L = \Gamma(X,\mathcal{L}(w^D,\lambda,\eta))$ is a composition factor of $M$. Hence $\Gamma(X, \mathcal{L}(w^D,\lambda,\eta)) = \Gamma(X,\mathcal{L}(w^E,\lambda,\eta))$ for some $E \leqslant C$. By the uniqueness statement from the preceding paragraph, $\mathcal{L}(w^D,\lambda,\eta) = \mathcal{L}(w^E,\lambda,\eta)$ and hence $D = E \leqslant C$. By the minimality of $C$, $D = C$. Thus $C=D$ is the minimum element in $W_\Theta \backslash W_\Theta v W^\lambda$ and $\Gamma(X,\mathcal{L}(w^C,\lambda,\eta)) = L$. \end{proof}
There exists a number $c \in \mathbb{C}$ so that $W^\lambda = W_{c\lambda}$. Hence by \ref{thm:cross-section_db_coset}, the set \begin{equation*}
A_\Theta^\lambda := A_{c\lambda} \cap (w_\Theta {}^\Theta W) \end{equation*} is a cross-section of $W_\Theta \backslash W / W^\lambda$ consisting of the unique shortest element of each double coset. \ref{lem:irred_vanishing_singular} can be rephrased as follows.
\begin{corollary}\label{lem:irred_vanishing_singular'}
Let $\lambda \in \mathfrak{h}^*$ be antidominant and let $\eta :\mathfrak{n} \to \mathbb{C}$ be a character. Let $C \in W_\Theta \backslash W$. The following are equivalent:
\begin{enumerate}[label=(\alph*)]
\item $C = W_\Theta v$ for some $v \in A_\Theta^\lambda$;
\item $\Gamma(X,\mathcal{L}(w^C,\lambda,\eta)) \neq 0$;
\item $\Gamma(X,\mathcal{L}(w^C,\lambda,\eta)) = L(w^C \lambda,\eta)$.
\end{enumerate} \end{corollary}
Using these observations, we can write down a character formula for general infinitesimal characters.
\begin{theorem}[Character formula: General case] \label{thm:multiplicity_singular}
Let $\lambda \in \mathfrak{h}^*$ be antidominant. Let $\eta: \mathfrak{n} \to \mathbb{C}$ be any character. For any $v \in A_\Theta^\lambda$, let $C = W_\Theta v$, and let $u \in A_{\Theta,\lambda}$ be the unique element such that $C \subseteq W_\Theta u W_\lambda$. Then
\begin{equation}
\ch L(v\lambda,\eta) = \ch L(w^C\lambda,\eta) =
\sum_{z \in A_\Theta^\lambda \cap (W_\Theta u W_\lambda)}
\left(
\sum_{\substack{
D \in W_\Theta \backslash W_\Theta z W^\lambda\\%
D \leqslant_{u,\lambda} C}}
P_{CD}^{u,\lambda}(-1)
\right)
\ch M(z \lambda,\eta),
\end{equation}
where the polynomials $P_{CD}^{u,\lambda}$ are Whittaker Kazhdan-Lusztig polynomials for $(W_\lambda,\Pi_\lambda,\Theta(u,\lambda))$ as defined in \ref{def:parabolic_KL_poly}. As $v$ ranges over $A_\Theta^\lambda$, $L(v \lambda,\eta)$ exhausts all irreducible objects in $\mathcal{N}_{\theta,\eta}$. \end{theorem}
\begin{proof}
The right hand side is obtained by grouping the right side of (\ref{eqn:character_formula_pre}) based on \ref{lem:std_mods_coincide}. In more detail, the cosets $W_\Theta v W^\lambda$ that are contained in $W_\Theta u W_\lambda$ partition $W_\Theta u W_\lambda$, and $A_\Theta^\lambda \cap (W_\Theta u W_\lambda)$ is a cross-section for this partition. We are simply grouping those standard modules within the same $(W_\Theta,W^\lambda)$-cosets together. The left hand side and the last statement (that those $L(v\lambda,\eta)$'s exhaust all irreducibles) follow from \ref{lem:irred_vanishing_singular'} and the exactness of $\Gamma(X,-)$. \end{proof}
\section{An example in $A_3$}\label{sec:examples}
\allowdisplaybreaks The $A_3$ root system (pictured below) is the smallest example in which all nontrivial phenomena appear. To make the picture more readable, only the positive roots are connected to the origin. Here $\lambda$ can be chosen to be $\lambda = -m \rho + c(-\alpha + 2\beta + \gamma)$ for any nonzero number $c$ transcendental over $\mathbb{Q}$ and any large enough integer $m$ so that $\lambda$ is antidominant regular ($-\alpha + 2\beta +\gamma$ is a vector perpendicular to the plane spanned by $\alpha+\beta$ and $\gamma$).
\begin{equation*}
\begin{tikzcd}[start anchor = real center, end anchor = real center, column sep = 2ex, row sep = 1ex]
&[-1.5ex] &[-1.5ex] \phantom{\bullet} \ar[rrrrrr, dotted, no head] \ar[ddll, dotted, no head] \ar[dddddd, dotted, no head] & & & \bullet \ar[loop, phantom, "\beta + \gamma", distance = 1cm] & &[-1.5ex] &[-1.5ex] \phantom{\bullet} \ar[ddll, dotted, no head] \ar[dddddd, dotted, no head]\\
& \bullet \ar[loop, phantom, "\beta", distance = 1cm] & & & & & & \bigodot \ar[loop, phantom, "\alpha + \beta + \gamma", distance = 1cm]\\
\phantom{\bullet} \ar[rrrrrr, dotted, no head] \ar[dddddd, dotted, no head] & & & \bigodot \ar[loop, phantom, "\alpha + \beta", distance = 1cm] & & & \phantom{\bullet} \ar[dddddd, dotted, no head]\\
& & \bullet & & & & & & \bigodot \ar[loop, phantom, "\gamma", distance = 1cm]\\
& & & & \phantom{\bullet} \ar[llluuu, dash, equal] \ar[luu, dash] \ar[ruuuu, dash] \ar[rrruuu, dash] \ar[rrrru, dash, thick] \ar[drr, dash, equal]\\
\bigodot & & & & & & \bullet \ar[loop, phantom, "\alpha", distance = 1cm]\\
& & \phantom{\bullet} \ar[rrrrrr, dotted, no head] \ar[ddll, dotted, no head] & & & \bigodot & & & \phantom{\bullet} \ar[ddll, dotted, no head]\\
& \bigodot & & & & & & \bullet\\
\phantom{\bullet} \ar[rrrrrr, dotted, no head] & & & \bullet & & & \phantom{\bullet}
\end{tikzcd} \end{equation*}
In the above diagram, $\{\alpha,\beta,\gamma\}$ are the simple roots, the elements of $\Theta = \{\alpha,\beta\}$ are indicated by double lines in the picture, roots in $\Sigma_\lambda$ are marked by $\bigodot$, and those not in $\Sigma_\lambda$ are marked by $\bullet$.
Below is a diagram of the Weyl group, arranged so that elements in the same right $W_\Theta$-coset are grouped together and are connected by double lines. Elements surrounded by shapes are the longest elements in their right $W_\Theta$-cosets. Elements that are crossed out are those in $W_\lambda$. There are two double $(W_\Theta,W_\lambda)$-cosets: elements in $W_\Theta s_\gamma s_\beta W_\lambda$ are underlined; elements in $W_\Theta W_\lambda$ are not underlined.
\begin{equation}\label{diag:W(A3)}
\begin{tikzpicture}
\matrix[column sep = 1ex, row sep = 2ex]{
&[-3ex] \node[ellipse, draw] (abarba) {$w_0$}; &[-3ex] &[-5ex] &[-3ex] &[-2ex] &[-5ex] &[-5ex] &[-2ex] &[-5ex] &[-5ex]\\
\node (abrba) {$s_\alpha s_\beta s_\gamma s_\beta s_\alpha$}; \node[cross out, draw] {\phantom{sss}}; & &
\node (barba) {$s_\beta s_\alpha s_\gamma s_\beta s_\alpha$}; &&&&&&&
\node[rectangle, draw] (abarb) {$\uline{s_\alpha s_\beta s_\alpha s_\gamma s_\beta}$};\\
\node (arba) {$s_\alpha s_\gamma s_\beta s_\alpha$}; \node[cross out, draw] {\phantom{sss}}; & &
\node (brba) {$s_\beta s_\gamma s_\beta s_\alpha$}; & & & &
\node[trapezium, trapezium left angle=70, trapezium right angle=-70, draw] (abar) {$s_\alpha s_\beta s_\alpha s_\gamma$}; \node[cross out, draw] {\phantom{sss}}; & &
\node (abrb) {$\uline{s_\alpha s_\beta s_\gamma s_\beta}$}; & &
\node (barb) {$\uline{s_\beta s_\alpha s_\gamma s_\beta}$};\\
& \node (rba) {$s_\gamma s_\beta s_\alpha$}; & &
\node[rectangle split, rectangle split horizontal, rectangle split parts=3, draw, minimum height=1.2\baselineskip, inner sep=1pt] (aba) {\nodepart{two}$s_\alpha s_\beta s_\alpha$}; \node[cross out, draw] {\phantom{sss}}; & &
\node (abr) {$s_\alpha s_\beta s_\gamma$}; & &
\node (bar) {$s_\beta s_\alpha s_\gamma$}; &
\node (arb) {$\uline{s_\alpha s_\gamma s_\beta}$}; & &
\node (brb) {$\uline{s_\beta s_\gamma s_\beta}$};\\
& & \node (ab) {$s_\alpha s_\beta$}; & &
\node (ba) {$s_\beta s_\alpha$}; &
\node (ar) {$s_\alpha s_\gamma$}; & &
\node (br) {$s_\beta s_\gamma$}; & &
\node (rb) {$\uline{s_\gamma s_\beta}$};\\
& & \node (a) {$s_\alpha$}; & &
\node (b) {$s_\beta$}; & &
\node (r) {$s_\gamma$}; \node[cross out, draw] {\phantom{sss}};\\
& & & \node (1) {$1$}; \node[cross out, draw] {\phantom{sss}};\\
};
\draw [double equal sign distance] (1) to (a);
\draw [double equal sign distance] (1) to (b);
\draw [double equal sign distance] (a) to (ab);
\draw [double equal sign distance] (b) to (ba);
\draw [double equal sign distance] (a) to (ba);
\draw [double equal sign distance] (b) to (ab);
\draw [double equal sign distance] (ab) to (aba);
\draw [double equal sign distance] (ba) to (aba);
\draw [double equal sign distance] (r) to (ar);
\draw [double equal sign distance] (r) to (br);
\draw [double equal sign distance] (ar) to (abr);
\draw [double equal sign distance] (br) to (bar);
\draw [double equal sign distance] (ar) to (bar);
\draw [double equal sign distance] (br) to (abr);
\draw [double equal sign distance] (abr) to (abar);
\draw [double equal sign distance] (bar) to (abar);
\draw [double equal sign distance] (rb) to (arb);
\draw [double equal sign distance] (rb) to (brb);
\draw [double equal sign distance] (arb) to (abrb);
\draw [double equal sign distance] (brb) to (barb);
\draw [double equal sign distance] (arb) to (barb);
\draw [double equal sign distance] (brb) to (abrb);
\draw [double equal sign distance] (abrb) to (abarb);
\draw [double equal sign distance] (barb) to (abarb);
\draw [double equal sign distance] (rba) to (arba);
\draw [double equal sign distance] (rba) to (brba);
\draw [double equal sign distance] (arba) to (abrba);
\draw [double equal sign distance] (brba) to (barba);
\draw [double equal sign distance] (arba) to (barba);
\draw [double equal sign distance] (brba) to (abrba);
\draw [double equal sign distance] (abrba) to (abarba);
\draw [double equal sign distance] (barba) to (abarba);
\end{tikzpicture} \end{equation}
Let's first look at the double coset $W_\Theta s_\gamma s_\beta W_\lambda$, with $u = s_\gamma s_\beta \in A_{\Theta,\lambda}$. It is simultaneously a single left $W_\lambda$-coset, $s_\gamma s_\beta W_\lambda$, and a single right $W_\Theta$-coset, $W_\Theta s_\gamma s_\beta$. Hence \begin{equation*}
\Theta(s_\gamma s_\beta,\lambda) = \Pi_\lambda,\quad
W_{\lambda,\Theta(s_\gamma s_\beta,\lambda)} = W_\lambda, \end{equation*}
and $(W_\Theta s_\gamma s_\beta)|_\lambda = W_\lambda 1$, the unique right $W_\lambda$-coset in $W_\lambda$. Therefore \begin{align*}
\varphi_\lambda(W_\Theta s_\gamma s_\beta) &= \delta_{W_\Theta s_\gamma s_\beta},\\
\ch L(s_\gamma s_\beta \lambda,\eta) &= \ch M(s_\gamma s_\beta \lambda,\eta). \end{align*}
Now let's look at the other double coset $W_\Theta W_\lambda$, with $u = 1$ and \begin{equation*}
\Theta(1,\lambda) = \{\alpha+\beta\}, \quad
W_{\lambda, \Theta(1,\lambda)} = \{1, s_{\alpha+\beta}\}. \end{equation*} For convenience, we write \begin{equation*}
W_\bullet := W_{\lambda, \Theta(1,\lambda)}. \end{equation*} The root system $\Sigma_\lambda$ and a diagram for $(W_\lambda,\Pi_\lambda,\Theta_\lambda^1)$ are \begin{equation*}
\begin{tikzcd}[start anchor = real center, end anchor = real center, column sep = 1ex, row sep = 5ex]
& \bigodot \ar[dr, equal] \ar[loop, phantom, "\alpha+\beta", distance = 1cm] & & \bigodot \ar[loop, phantom, "\alpha+\beta+\gamma", distance = 1cm] \ar[dl, dash]\\
\bigodot \ar[rr, dash] && \phantom{\bigodot} \ar[dl, dash] \ar[dr, dash] \ar[rr, dash] && \bigodot \ar[loop, phantom, "\gamma", distance = 1cm]\\
& \bigodot & & \bigodot
\end{tikzcd}\qquad
\begin{adjustbox}{trim = 0 2.3cm 0 0}
\begin{tikzpicture}
\matrix[row sep = 1cm,column sep = 0.2cm]{
& \node[ellipse, draw] (abr) {$s_{\alpha+\beta+\gamma}$};\\[-4ex]
\node[trapezium, trapezium left angle=70, trapezium right angle=-70, draw] (ab-r) {$s_{\alpha+\beta} s_\gamma$}; &&
\node (r-ab) {$s_\gamma s_{\alpha+\beta}$};\\
\node[rectangle split, rectangle split horizontal, rectangle split parts=3, draw, minimum height=1.2\baselineskip, inner sep=1pt] (ab) {\nodepart{two}$s_{\alpha+\beta}$}; &&
\node (r) {$s_\gamma$};\\[-3ex]
& \node (1) {$1$};\\
};
\draw [double equal sign distance] (abr) to (r-ab) ;
\draw [double equal sign distance] (ab-r) to (r) ;
\draw [double equal sign distance] (ab) to (1) ;
\draw (abr) to (ab-r) ;
\draw (ab-r) to (ab);
\end{tikzpicture}.
\end{adjustbox} \end{equation*}
The map $(-)|_\lambda$ restricted to $W_\Theta \backslash W_\Theta W_\lambda$ can be visualized as
\begin{equation}\label{diag:(-)|_lambda_A3}
\begin{tikzpicture}
\matrix[row sep = 1cm,column sep = 0.2cm]{
& \node[ellipse, draw] (abr) {$s_{\alpha+\beta+\gamma}$};\\[-4ex]
\node[trapezium, trapezium left angle=70, trapezium right angle=-70, draw] (ab-r) {$s_{\alpha+\beta} s_\gamma$}; &&
\node (r-ab) {$s_\gamma s_{\alpha+\beta}$};\\
\node[rectangle split, rectangle split horizontal, rectangle split parts=3, draw, minimum height=1.2\baselineskip, inner sep=1pt] (ab) {\nodepart{two}$s_{\alpha+\beta}$}; &&
\node (r) {$s_\gamma$};\\[-3ex]
& \node (1) {$1$};\\
};
\draw [double equal sign distance] (abr) to (r-ab) ;
\draw [double equal sign distance] (ab-r) to (r) ;
\draw [double equal sign distance] (ab) to (1) ;
\draw (abr) to (ab-r) ;
\draw (ab-r) to (ab);
\end{tikzpicture}
\raisebox{10ex}{$\xleftarrow{\quad (-)|_\lambda \quad}$}
\begin{adjustbox}{trim = 0 1cm 0 0}
\begin{tikzpicture}
\matrix[row sep = 1cm,column sep = 0.2cm]{
&[-5ex] \node[ellipse, draw] (abarba) {$W_\Theta w_0$};\\[-2ex]
\node[trapezium, trapezium left angle=70, trapezium right angle=-70, draw] (abar) {$W_\Theta s_\alpha s_\beta s_\alpha s_\gamma$};\\
\node[rectangle split, rectangle split horizontal, rectangle split parts=3, draw, minimum height=1.2\baselineskip, inner sep=1pt] (aba) {\nodepart{two}$W_\Theta s_\alpha s_\beta s_\alpha$};\\
\node {};\\
};
\end{tikzpicture}
\end{adjustbox} \end{equation} where a coset on the right hand side is sent to the coset on the left with the same shape. The Whittaker Kazhdan-Lusztig polynomials for $(W_\lambda,\Pi_\lambda,\Theta_\lambda^1)$ are \begin{equation*}
\begin{tabu}{c|ccc}
P_{EF}^{1,\lambda} & W_\bullet s_{\alpha+\beta} & W_\bullet s_{\alpha+\beta} s_\gamma & W_\bullet s_{\alpha+\beta+\gamma}\\ \hline
W_\bullet s_{\alpha+\beta} & 1 & 0 & 0\\
W_\bullet s_{\alpha+\beta} s_\gamma & q & 1 & 0\\
W_\bullet s_{\alpha+\beta+\gamma} & 0 & q & 1
\end{tabu} \end{equation*} Hence \begin{align*}
\varphi_\lambda(W_\Theta s_\alpha s_\beta s_\alpha)
&= P_{(W_\bullet s_{\alpha+\beta}), (W_\bullet s_{\alpha+\beta})}^{1,\lambda} \delta_{W_\Theta s_\alpha s_\beta s_\alpha}\\
&\qquad + P_{(W_\bullet s_{\alpha+\beta}), (W_\bullet s_{\alpha+\beta} s_\gamma)}^{1,\lambda} \delta_{W_\Theta s_\alpha s_\beta s_\alpha s_\gamma}\\
&\qquad + P_{(W_\bullet s_{\alpha+\beta}), (W_\bullet s_{\alpha+\beta+\gamma})}^{1,\lambda} \delta_{W_\Theta w_0}\\
&= \delta_{W_\Theta s_\alpha s_\beta s_\alpha},\\
\varphi_\lambda(W_\Theta s_\alpha s_\beta s_\alpha s_\gamma)
&= P_{(W_\bullet s_{\alpha+\beta} s_\gamma), (W_\bullet s_{\alpha+\beta})}^{1,\lambda} \delta_{W_\Theta s_\alpha s_\beta s_\alpha}\\
&\qquad + P_{(W_\bullet s_{\alpha+\beta} s_\gamma), (W_\bullet s_{\alpha+\beta} s_\gamma)}^{1,\lambda} \delta_{W_\Theta s_\alpha s_\beta s_\alpha s_\gamma}\\
&\qquad + P_{(W_\bullet s_{\alpha+\beta} s_\gamma), (W_\bullet s_{\alpha+\beta+\gamma})}^{1,\lambda} \delta_{W_\Theta w_0}\\
&= q \delta_{W_\Theta s_\alpha s_\beta s_\alpha} + \delta_{W_\Theta s_\alpha s_\beta s_\alpha s_\gamma},\\
\varphi_\lambda(W_\Theta w_0)
&= P_{(W_\bullet s_{\alpha+\beta+\gamma}), (W_\bullet s_{\alpha+\beta})}^{1,\lambda} \delta_{W_\Theta s_\alpha s_\beta s_\alpha}\\
&\qquad + P_{(W_\bullet s_{\alpha+\beta+\gamma}), (W_\bullet s_{\alpha+\beta} s_\gamma)}^{1,\lambda} \delta_{W_\Theta s_\alpha s_\beta s_\alpha s_\gamma}\\
&\qquad + P_{(W_\bullet s_{\alpha+\beta+\gamma}), (W_\bullet s_{\alpha+\beta+\gamma})}^{1,\lambda} \delta_{W_\Theta w_0}\\
&= q \delta_{W_\Theta s_\alpha s_\beta s_\alpha s_\gamma} + \delta_{W_\Theta w_0}. \end{align*} Specializing to $q = -1$, we get \begin{align*}
\ch L(s_\alpha s_\beta s_\alpha \lambda,\eta)
&= \ch M(s_\alpha s_\beta s_\alpha \lambda,\eta),\\
\ch L( s_\alpha s_\beta s_\alpha s_\gamma \lambda,\eta)
&= -\ch M(s_\alpha s_\beta s_\alpha \lambda,\eta) + \ch M( s_\alpha s_\beta s_\alpha s_\gamma \lambda,\eta),\\
\ch L(w_0 \lambda,\eta)
&= -\ch M( s_\alpha s_\beta s_\alpha s_\gamma \lambda,\eta) + \ch M(w_0 \lambda,\eta). \end{align*}
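Inverting these relations expresses the standard characters in this double coset in terms of irreducible ones: \begin{align*}
\ch M(s_\alpha s_\beta s_\alpha \lambda,\eta) &= \ch L(s_\alpha s_\beta s_\alpha \lambda,\eta),\\
\ch M(s_\alpha s_\beta s_\alpha s_\gamma \lambda,\eta) &= \ch L(s_\alpha s_\beta s_\alpha \lambda,\eta) + \ch L(s_\alpha s_\beta s_\alpha s_\gamma \lambda,\eta),\\
\ch M(w_0 \lambda,\eta) &= \ch L(s_\alpha s_\beta s_\alpha \lambda,\eta) + \ch L(s_\alpha s_\beta s_\alpha s_\gamma \lambda,\eta) + \ch L(w_0 \lambda,\eta), \end{align*} so every composition multiplicity of an irreducible in one of these standard modules is either $0$ or $1$.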
\printbibliography
\end{document}
\begin{document}
\title{A probabilistic approach to some binomial identities}
\author[]{Christophe Vignat} \address{Information Theory Laboratory, E.P.F.L., 1015 Lausanne, Switzerland} \email{[email protected]}
\author[]{Victor H. Moll} \address{Department of Mathematics, Tulane University, New Orleans, LA 70118} \email{[email protected]}
\subjclass{Primary 05A10, Secondary 33B15, 60C99}
\date{\today}
\keywords{binomial sums, gamma distributed random variables, Vandermonde identity, orthogonal polynomials}
\begin{abstract} Classical binomial identities are established by giving probabilistic interpretations to the summands. The examples include Vandermonde identity and some generalizations. \end{abstract}
\maketitle
\vskip 20pt
\section{Introduction} \label{S:intro}
The evaluation of finite sums involving binomial coefficients appears throughout the undergraduate curriculum. Students are often exposed to the identity \begin{equation} \sum_{k=0}^{n} \binom{n}{k} = 2^{n}. \label{two-bin} \end{equation} \noindent Elementary proofs abound: simply choose $x=y=1$ in the binomial expansion of $(x+y)^{n}$. The reader is surely aware of many other proofs, including some combinatorial in nature.
At the end of the previous century, the evaluation of these sums was trivialized by the work of H. Wilf and D. Zeilberger \cite{aequalsb}. In the preface to the charming book \cite{aequalsb}, the authors begin with the phrase \begin{center} \texttt{You've been up all night working on your new theory, you found the answer, and it is in the form that involves factorials, binomial coefficients, and so on, ...} \end{center} \noindent and then proceed to introduce the method of {\em creative telescoping}. This technique provides an automatic tool for the verification of this type of identities.
Even in the presence of a powerful technique, such as the WZ-method, it is often a good pedagogical idea to present a simple identity from many different points of view. The reader will find in \cite{amdeberhan-2012a} this approach with the example \begin{equation} \sum_{k=0}^{m} 2^{-2k} \binom{2k}{k} \binom{2m-k}{m} = \sum_{k=0}^{m} 2^{-2k} \binom{2k}{k} \binom{2m+1}{2k}. \label{pretty-1} \end{equation}
The current paper presents probabilistic arguments for the evaluation of certain binomial sums. The background required is minimal. The continuous random variables $X$ considered here have a probability density function. This is a nonnegative function $f_{X}(x)$, such that \begin{equation} \Pr(X< x) = \int_{-\infty}^{x} f_{X}(y) \, dy. \end{equation} \noindent In particular, $f_{X}$ must have total mass $1$. Thus, all computations are reduced to the evaluation of integrals. For instance, the expectation of a function of the random variable $X$ is computed as \begin{equation} \mathbb{E} g(X) = \int_{-\infty}^{\infty} g(y) f_{X}(y) \, dy. \end{equation} \noindent In elementary courses, the reader has been exposed to normal random variables, written as $X \sim N(0,1)$, with density \begin{equation} f_{X}(x) = \frac{1}{\sqrt{2 \pi}} e^{-x^{2}/2}, \end{equation} and exponential random variables, with probability density function \begin{equation} f(x;\lambda) = \begin{cases}
\lambda e^{- \lambda x} & \text{ for } x \geq 0; \\
0 & \text{ otherwise.} \end{cases} \end{equation}
The examples employed in the arguments presented here have a gamma distribution with shape parameter $k$ and scale parameter $\theta$, written as $X \sim \Gamma(k, \theta)$. These are defined by the density function \begin{equation} f(x;k, \theta) = \begin{cases} x^{k-1} e^{-x/\theta}/(\theta^{k} \, \Gamma(k)), & \quad \text{ for } x \geq 0; \\ 0 & \quad \text{ otherwise}. \end{cases} \end{equation} \noindent Here $\Gamma(s)$ is the classical gamma function, defined by \begin{equation} \Gamma(s) = \int_{0}^{\infty} x^{s-1}e^{-x} \, dx \end{equation} \noindent for $\mathop{\rm Re}\nolimits{s} > 0$. Observe that if $X \sim \Gamma(a,\theta)$, then $X = \theta Y$ where $Y \sim \Gamma(a,1)$. Moreover $\mathbb{E} X^{n} = \theta^{n} (a)_{n}$, where \begin{equation} (a)_{n} = \frac{\Gamma(a+n)}{\Gamma(a)} = a(a+1) \cdots (a+n-1) \end{equation} \noindent is the Pochhammer symbol. The main property of these random variables employed in this paper is the following: if $X_{i} \sim \Gamma(k_{i},\theta)$, $1 \leq i \leq n$, are independent, then \begin{equation} X_{1} + \cdots + X_{n} \sim \Gamma(k_{1} + \cdots + k_{n}, \theta). \end{equation} This follows from the fact that the probability density function for the sum of two independent random variables is the convolution of the individual densities.
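The moment formula follows directly from the definition of the gamma function: for $X \sim \Gamma(a,\theta)$, the substitution $x = \theta u$ gives \begin{equation*}
\mathbb{E} X^{n} = \int_{0}^{\infty} x^{n} \, \frac{x^{a-1} e^{-x/\theta}}{\theta^{a} \Gamma(a)} \, dx = \frac{\theta^{n}}{\Gamma(a)} \int_{0}^{\infty} u^{a+n-1} e^{-u} \, du = \theta^{n} \, \frac{\Gamma(a+n)}{\Gamma(a)} = \theta^{n} (a)_{n}. \end{equation*}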
Related random variables include those with a beta distribution \begin{equation} f_{a,b}(x) = \begin{cases} x^{a-1}(1-x)^{b-1}/B(a,b) & \quad \text{ for } 0 \leq x \leq 1; \\ 0 & \quad \text{ otherwise}. \end{cases} \end{equation} \noindent Here $B(a,b)$ is the beta function defined by \begin{equation} B(a,b) = \int_{0}^{1} x^{a-1}(1-x)^{b-1} \, dx \end{equation} \noindent and also the symmetric beta distributed random variable $Z_{c}$, with density proportional to $(1-x^{2})^{c-1}$ for $-1 \leq x \leq 1.$ The first class of random variables can be generated as \begin{equation} B_{a,b} = \frac{\Gamma_{a}}{\Gamma_{a} + \Gamma_{b}}, \label{fun-1} \end{equation} \noindent where $\Gamma_{a}$ and $\Gamma_{b}$ are independent gamma distributed with shape parameters $a$ and $b$, respectively and the second type is distributed as $1 - 2B_{c,c}$, that is, \begin{equation} Z_{c} = 1 - \frac{2 \Gamma_{c}}{\Gamma_{c} + \Gamma'_{c}} = \frac{\Gamma_{c} - \Gamma'_{c}}{\Gamma_{c} + \Gamma'_{c}}, \label{fun-2} \end{equation} \noindent where $\Gamma_{c}$ and $\Gamma'_{c}$ are independent gamma distributed with shape parameter $c$. A well-known result is that $B_{a,b}$ and $\Gamma_{a}+\Gamma_{b}$ are independent in \eqref{fun-1}; similarly, $\Gamma_{c} + \Gamma'_{c}$ and $Z_{c}$ are independent in \eqref{fun-2}.
\section{A sum involving central binomial coefficients} \label{S:bincoeff}
Many finite sums may be evaluated via the generating function of terms appearing in them. For instance, a sum of the form \begin{equation} S_{2}(n) = \sum_{i+j=n} a_{i}a_{j} \end{equation} \noindent is recognized as the coefficient of $x^{n}$ in the expansion of $f(x)^{2}$, where \begin{equation} f(x) = \sum_{j=0}^{\infty} a_{j}x^{j} \end{equation} \noindent is the generating function of the sequence $\{ a_{i} \}$. Similarly, \begin{equation} S_{m}(n) = \sum_{k_{1} + \cdots + k_{m} =n} a_{k_{1}} \cdots a_{k_{m}} \end{equation} \noindent is given by the coefficient of $x^{n}$ in $f(x)^{m}$. The classical example \begin{equation} \frac{1}{\sqrt{1-4x}} = \sum_{j=0}^{\infty} \binom{2j}{j} x^{j} \label{bin-exp1} \end{equation} \noindent gives the sums \begin{equation} \sum_{i=0}^{n} \binom{2i}{i} \binom{2n-2i}{n-i} = 4^{n} \label{mult-identity-2} \end{equation} \noindent and \begin{equation} \label{mult-identity} \sum_{k_{1}+\cdots + k_{m} = n} \binom{2k_{1}}{k_{1}} \cdots \binom{2k_{m}}{k_{m}} = \frac{2^{2n}}{n!} \frac{\Gamma( \tfrac{m}{2} + n )}{\Gamma(\tfrac{m}{2})}. \end{equation} \noindent The powers of $(1-4x)^{-1/2}$ are obtained from the binomial expansion \begin{equation} (1 - 4x)^{-a} = \sum_{j=0}^{\infty} \frac{(a)_{j}}{j!}(4x)^{j}, \end{equation} \noindent where $(a)_{j}$ is the Pochhammer symbol.
The identity \eqref{mult-identity-2} is elementary and there are many proofs in the literature. A nice combinatorial proof of \eqref{mult-identity} appeared in $2006$ in this journal \cite{valerio-2006a}. In a more recent contribution, G. Chang and C. Xu \cite{chang-xu-2011a} present a probabilistic proof of these identities. Their approach is elementary: take $m$ independent Gamma random variables $X_{i} \sim \Gamma(\tfrac{1}{2},1)$ and write \begin{equation} \mathbb{E} \left( \sum_{i=1}^{m} X_{i} \right)^{n} = \sum_{k_{1}+\cdots+k_{m} = n} \binom{n}{k_{1}, \cdots, k_{m}} \mathbb{E}X_{1}^{k_{1}} \cdots \mathbb{E}X_{m}^{k_{m}} \label{identity-0} \end{equation} where $\mathbb{E}$ denotes the expectation operator. For each random variable $X_{i}$, the moments are given by \begin{equation} \mathbb{E} X_{i}^{k_{i}} = \frac{\Gamma( k_{i} + \tfrac{1}{2})}{\Gamma( \tfrac{1}{2} )} = 2^{-2k_{i}} \frac{(2k_{i})!}{k_{i}!} = \frac{k_{i}!}{2^{2k_{i}}} \binom{2k_{i}}{k_{i}}, \label{moments} \end{equation} \noindent using Euler's duplication formula for the gamma function \begin{equation} \Gamma(2z) = \frac{1}{\sqrt{\pi}} 2^{2z-1} \Gamma(z) \Gamma(z+ \tfrac{1}{2}) \end{equation} (see \cite{nist}, $5.5.5$) to obtain the second form. The expression \begin{equation} \binom{n}{k_{1}, \cdots, k_{m}} = \frac{n!}{k_{1}! \, k_{2}! \, \cdots \, k_{m}!} \end{equation} \noindent for the multinomial coefficients shows that the right-hand side of \eqref{identity-0} is \begin{equation} \frac{n!}{2^{2n}} \sum_{k_{1} + \cdots + k_{m}=n} \binom{2k_{1}}{k_{1}} \cdots \binom{2k_{m}}{k_{m}}. \end{equation} \noindent To evaluate the left-hand side of \eqref{identity-0}, recall that the sum of $m$ independent $\Gamma \left(\tfrac{1}{2},1 \right)$ has a distribution of $\Gamma(\tfrac{m}{2},1)$. Therefore, the left-hand side of \eqref{identity-0} is \begin{equation} \frac{\Gamma( \tfrac{m}{2} + n )}{\Gamma( \tfrac{m}{2} )}. \end{equation} This gives \eqref{mult-identity}. The special case $m=2$ produces \eqref{mult-identity-2}.
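As a quick check of \eqref{mult-identity-2}, take $n=2$: the left-hand side is \begin{equation*}
\binom{0}{0}\binom{4}{2} + \binom{2}{1}\binom{2}{1} + \binom{4}{2}\binom{0}{0} = 6 + 4 + 6 = 16 = 4^{2}. \end{equation*}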
\section{More sums involving central binomial coefficients} \label{S:second}
The next example deals with the identity \begin{equation} \sum_{k=0}^{n} \binom{4k}{2k} \binom{4n-4k}{2n-2k} = 2^{4n-1} + 2^{2n-1} \binom{2n}{n} \label{sum-000} \end{equation} \noindent that appears as entry $4.2.5.74$ in \cite{brychkov}. The proof presented here employs the famous dissection technique, first introduced by Simpson \cite{simpson-1759} in the simplification of \begin{equation} \frac{1}{2} \left( \mathbb{E}(X_{1}+X_{2})^{2n} + \mathbb{E}(X_{1}-X_{2})^{2n} \right), \end{equation} \noindent where $X_{1}, \, X_{2}$ are independent random variables distributed as $\Gamma \left( \tfrac{1}{2}, 1 \right)$.
This quantity is evaluated by expanding the binomials to obtain \begin{multline} \frac{1}{2} ( \mathbb{E}(X_{1}+X_{2})^{2n} + \mathbb{E}(X_{1}-X_{2})^{2n} ) = \\ \frac{1}{2}\sum_{k=0}^{2n} \binom{2n}{k} \mathbb{E} X_{1}^{k} \, \mathbb{E} X_{2}^{2n-k} + \frac{1}{2} \sum_{k=0}^{2n} (-1)^{k} \binom{2n}{k} \mathbb{E} X_{1}^{k} \, \mathbb{E} X_{2}^{2n-k}. \nonumber \end{multline} \noindent This gives \begin{equation} \frac{1}{2} ( \mathbb{E}(X_{1}+X_{2})^{2n} + \mathbb{E}(X_{1}-X_{2})^{2n} ) = \sum_{k=0}^{n} \binom{2n}{2k} \mathbb{E} X_{1}^{2k} \, \mathbb{E} X_{2}^{2n-2k}. \label{nice-sum1} \end{equation} Using \eqref{moments}, this reduces to \begin{equation} \frac{1}{2} \left( \mathbb{E}(X_{1}+X_{2})^{2n} + \mathbb{E}(X_{1}-X_{2})^{2n} \right) = \frac{(2n)!}{2^{4n}} \sum_{k=0}^{n} \binom{4k}{2k} \binom{4n-4k}{2n-2k}. \label{nice-sum1a} \end{equation}
The random variable $X_{1}+X_{2}$ is $\Gamma(1,1)$ distributed, so \begin{equation} \mathbb{E} (X_{1}+X_{2})^{2n} = (2n)!, \end{equation} \noindent and the random variable $X_{1}-X_{2}$ is distributed as $(X_{1}+X_{2})Z_{1/2}$, where $Z_{1/2}$ is independent of $X_{1}+X_{2}$ and has a symmetric beta distribution with density $f_{Z_{1/2}}(z) = \frac{1}{\pi \sqrt{1-z^{2}}}$ on $-1 \leq z \leq 1$. In particular, the even moments are given by \begin{equation} \label{even-mom} \frac{1}{\pi} \int_{-1}^{1} \frac{z^{2n} \, dz}{\sqrt{1-z^{2}}} = \frac{1}{2^{2n}} \binom{2n}{n}. \end{equation} \noindent Therefore, \begin{equation} \mathbb{E} (X_{1}-X_{2})^{2n} = \mathbb{E} (X_{1}+X_{2})^{2n} \, \mathbb{E} Z^{2n}_{1/2} = \frac{(2n)!}{2^{2n}} \binom{2n}{n}. \end{equation} \noindent It follows that \begin{equation} \mathbb{E} (X_{1}+X_{2})^{2n} + \mathbb{E} (X_{1}-X_{2})^{2n} = (2n)! + \frac{(2n)!}{2^{2n}} \binom{2n}{n}. \label{nice-sum2} \end{equation}
The evaluations \eqref{nice-sum1} and \eqref{nice-sum2} imply \eqref{sum-000}.
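For instance, when $n=1$ both sides of \eqref{sum-000} equal \begin{equation*}
\binom{0}{0}\binom{4}{2} + \binom{4}{2}\binom{0}{0} = 12 = 2^{3} + 2^{1} \binom{2}{1}. \end{equation*}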
\section{An extension related to Legendre polynomials} \label{S:extension}
A key point in the evaluation given in the previous section is the elementary identity \begin{equation} \label{reduc-1} 1 + (-1)^{k} = \begin{cases}
2 & \text{ if } k \text{ is even}; \\
0 & \text{ otherwise. } \end{cases} \end{equation} \noindent This reduces the number of terms in the sum \eqref{nice-sum1} from $2n$ to $n$. A similar cancellation occurs for any $p \in \mathbb{N}$. Indeed, the natural extension of \eqref{reduc-1} is given by \begin{equation} \label{reduc-2} \sum_{j=0}^{p-1} \omega^{jr} = \begin{cases} p & \text{ if } r \equiv 0 \pmod p; \\ 0 & \text{ otherwise}; \end{cases} \end{equation} \noindent Here $\omega = e^{2 \pi i /p}$ is a complex $p$-th root of unity. Observe that \eqref{reduc-2} reduces to \eqref{reduc-1} when $p=2$.
The goal of this section is to discuss the extension of \eqref{sum-000}. The main result is given in the next theorem; the Legendre polynomials appearing in it are defined by \begin{equation} P_{n}(x) = \frac{1}{2^{n} \, n!} \left( \frac{d}{dx} \right)^{n} (x^{2}-1)^{n}. \label{legen-def} \end{equation}
\begin{theorem} \label{thm-leg} Let $n, \, p$ be positive integers. Then \begin{equation} \sum_{k=0}^{n} \binom{2kp}{kp} \binom{2(n-k)p}{(n-k)p} = \frac{2^{2np}}{p} \sum_{\ell = 0}^{p-1} e^{i \pi \ell n} P_{np} \left( \cos \left( \frac{\pi \ell}{p} \right) \right). \end{equation} \end{theorem} \begin{proof} Replace the random variable $X_{1} - X_{2}$ considered in the previous section by $X_{1} + WX_{2}$, where $W$ is a complex random variable with uniform distribution among the $p$-th roots of unity. That is, \begin{equation} \text{Pr} \left\{ W = \omega^{\ell} \right\} = \frac{1}{p}, \quad \text{ for } 0 \leq \ell \leq p-1. \end{equation} \noindent The identity \eqref{reduc-2} gives \begin{equation} \mathbb{E} W^{r} = \begin{cases} 1 & \text{ if } r \equiv 0 \pmod p; \\ 0 & \text{ otherwise.} \end{cases} \end{equation} \noindent This is the cancellation alluded to above.
Now proceed as in the previous section to obtain the moments \begin{eqnarray} \mathbb{E}(X_{1} + W X_{2})^{np} & = & \sum_{k=0}^{n} \binom{np}{kp} \mathbb{E} X_{1}^{(n-k)p} \, \mathbb{E} X_{2}^{kp} \label{sum-3}\\
& = & \frac{(np)!}{2^{2np}} \sum_{k=0}^{n} \binom{2kp}{kp} \binom{2(n-k)p}{(n-k)p}. \nonumber \end{eqnarray}
A second expression for $\mathbb{E}(X_{1} + W X_{2})^{np}$ employs an alternative form of the Legendre polynomial $P_{n}(x)$ defined in \eqref{legen-def}.
\begin{prop} \label{legen-1} The Legendre polynomial is given by \begin{equation} P_{n}(x) = \frac{1}{n!} \mathbb{E} \left[ (x + \sqrt{x^{2}-1}) X_{1} + (x - \sqrt{x^{2}-1}) X_{2} \right]^{n}, \end{equation} where $X_{1}$ and $X_{2}$ are independent $\Gamma\left(\tfrac{1}{2},1\right)$ random variables. \end{prop} \begin{proof} The proof is based on moment generating functions. Compute the product \begin{multline} \label{charac-1} \mathbb{E} e^{t (x + \sqrt{x^{2}-1}) \, X_{1}} \, \mathbb{E} e^{t (x - \sqrt{x^{2}-1}) \, X_{2}} = \\ \sum_{n=0}^{\infty} \frac{t^{n}}{n!} \, \mathbb{E} \left[ (x + \sqrt{x^{2}-1} ) \, X_{1} + (x - \sqrt{x^{2}-1}) \, X_{2} \right]^{n}. \end{multline} The moment generating function for a $\Gamma \left( \tfrac{1}{2}, 1 \right)$ random variable is \begin{equation} \mathbb{E} e^{t X} = ( 1 - t)^{-1/2}. \end{equation} \noindent This reduces \eqref{charac-1} to \begin{equation*} \left( 1 - t ( x + \sqrt{x^{2}-1}) \right)^{-1/2} \left( 1 - t ( x - \sqrt{x^{2}-1}) \right)^{-1/2} = (1 - 2tx + t^{2})^{-1/2} \end{equation*} \noindent which is the generating function of the Legendre polynomials. \end{proof}
\noindent This concludes the proof of Theorem \ref{thm-leg}. \end{proof}
\begin{corollary} Let $x$ be a variable and let $\Gamma_{1}, \, \Gamma_{2}$ be independent $\Gamma\left(\tfrac{1}{2},1\right)$ random variables as before. Then \begin{equation} \mathbb{E} (\Gamma_{1} + x^{2} \Gamma_{2})^{n} = n!\,x^{n} P_{n} \left( \tfrac{1}{2}(x + x^{-1}) \right). \label{jou-1} \end{equation} \end{corollary} \begin{proof} This result follows from Proposition \ref{legen-1} and the change of variables $x \mapsto \tfrac{1}{2}(x+x^{-1})$, known as the Joukowsky transform. \end{proof}
Replacing $x$ by $W^{1/2}$ in \eqref{jou-1} and averaging over the values of $W$ gives the second expression for $\mathbb{E}( X_{1} + W X_{2})^{np}$. The proof of Theorem \ref{thm-leg} is complete.
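Two special cases serve as consistency checks. For $p=1$ the sum on the right-hand side has the single term $2^{2n} P_{n}(1) = 2^{2n}$, so the theorem reduces to \eqref{mult-identity-2}. For $p=2$, using $P_{2n}(1) = 1$ and $P_{2n}(0) = (-1)^{n} 2^{-2n} \binom{2n}{n}$, the right-hand side becomes \begin{equation*}
2^{4n-1} \left( P_{2n}(1) + (-1)^{n} P_{2n}(0) \right) = 2^{4n-1} + 2^{2n-1} \binom{2n}{n}, \end{equation*} recovering \eqref{sum-000}.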
\section{Chu-Vandermonde} \label{S:chu}
The arguments presented here to prove \eqref{mult-identity-2} can be generalized by replacing the random variables $\Gamma \left( \tfrac{1}{2},1 \right)$ by two random variables $\Gamma(a_{i},1)$ with shape parameters $a_{1}$ and $a_{2}$, respectively. The resulting identity is the Chu-Vandermonde theorem.
\begin{theorem} Let $a_{1}$ and $a_{2}$ be positive real numbers. Then \begin{equation} \sum_{k=0}^{n} \frac{(a_{1})_{k}}{k!} \, \frac{(a_{2})_{n-k}}{(n-k)!} = \frac{(a_{1}+a_{2})_{n}}{n!}. \end{equation} \end{theorem}
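Note that the choice $a_{1} = a_{2} = \tfrac{1}{2}$ recovers \eqref{mult-identity-2}: by \eqref{moments}, $(\tfrac{1}{2})_{k}/k! = 2^{-2k} \binom{2k}{k}$ and $(1)_{n}/n! = 1$, so the identity becomes $2^{-2n} \sum_{k=0}^{n} \binom{2k}{k} \binom{2n-2k}{n-k} = 1$.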
The reader will find in \cite{andrews3} a more traditional proof. The paper \cite{zeilberger-1995} describes how to find and prove this identity in automatic form.
Exactly the same argument for \eqref{mult-identity} provides a multivariable generalization of the Chu-Vandermonde identity.
\begin{theorem} Let $\{ a_{i} \}_{1 \leq i \leq m}$ be a collection of $m$ positive real numbers. Then \begin{equation} \sum_{k_{1}+\cdots+k_{m} = n} \frac{(a_{1})_{k_{1}}}{k_{1}!} \cdots \frac{(a_{m})_{k_{m}}}{k_{m}!} = \frac{1}{n!} (a_{1} + \cdots + a_{m})_{n}. \end{equation} \end{theorem}
The next result presents a generalization of Theorem \ref{thm-leg}.
\begin{theorem} Let $n, \, p \in \mathbb{N}$, $a \in \mathbb{R}^{+}$, and $\omega = e^{i \pi/p}$. Then \begin{equation} \label{gegen-1} \sum_{k=0}^{n} \frac{(a)_{kp}}{(kp)!} \, \frac{(a)_{(n-k)p}}{((n-k)p)!} z^{2kp} = \frac{1}{p} \sum_{\ell=0}^{p-1} e^{i \pi \ell n} z^{np} C_{np}^{(a)} \left( \tfrac{1}{2}( z \omega^{\ell} + z^{-1} \omega^{-\ell} ) \right). \end{equation} \noindent Here $C_{n}^{(a)}(x)$ is the Gegenbauer polynomial of degree $n$ and parameter $a$. \end{theorem} \begin{proof} Start with the moment representation for the Gegenbauer polynomials \begin{equation} \label{mom-geg} C_{n}^{(a)}(x) = \frac{1}{n!} \mathbb{E}_{U,V} \left( U (x+\sqrt{x^{2}-1}) + V ( x - \sqrt{x^{2}-1}) \right)^{n} \end{equation} \noindent with $U$ and $V$ independent $\Gamma(a,1)$ random variables. This representation is proved in the same way as the one for the Legendre polynomials, replacing the exponent $-1/2$ by the exponent $-a$. Note that the Legendre polynomials are Gegenbauer polynomials with parameter $a = \tfrac{1}{2}$. This result can also be found in Theorem 3 of \cite{sun-p-2007a}. The identity then follows exactly as in the proof of Theorem \ref{thm-leg}, with $X_{1}, X_{2}$ replaced by $U, V$. \end{proof}
\begin{note} The value $z=1$ in \eqref{gegen-1} gives \begin{equation} \sum_{k=0}^{n} \frac{(a)_{kp}}{(kp)!} \, \frac{(a)_{(n-k)p}}{((n-k)p)!} = \frac{1}{p} \sum_{\ell=0}^{p-1} e^{i \pi \ell n} C_{np}^{(a)} \left( \cos \left( \frac{\pi \ell}{p} \right) \right). \end{equation} \noindent This is a generalization of Chu-Vandermonde. \end{note}
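For $p=1$ this is precisely the Chu-Vandermonde identity with $a_{1} = a_{2} = a$, since $C_{n}^{(a)}(1) = (2a)_{n}/n!$.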
The techniques presented here may be extended to a variety of situations. Two examples illustrate the type of identities that may be proven. The first involves the Hermite polynomials, defined by \begin{equation} H_{n}(x) = (-1)^{n} e^{x^{2}} \left( \frac{d}{dx} \right)^{n} e^{-x^{2}}. \end{equation}
\begin{theorem} Let $n, m \in \mathbb{N}$. The Hermite polynomials satisfy \begin{equation} \label{multiH} \frac{1}{n!} H_{n} \left( \frac{x_{1} + \cdots + x_{m}}{\sqrt{m}} \right) = m^{-n/2} \sum_{k_{1}+\cdots + k_{m} = n} \frac{H_{k_{1}}(x_{1})}{k_{1}!} \cdots \frac{H_{k_{m}}(x_{m})}{k_{m}!}. \end{equation} \end{theorem} \begin{proof} Start with the moment representation for the Hermite polynomials \begin{equation} H_{n}(x) = 2^{n} \mathbb{E}(x + i N)^{n}, \end{equation} \noindent where $N$ is normal with mean $0$ and variance $\tfrac{1}{2}$. The details are left to the reader. \end{proof}
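For example, when $n=2$ the moment representation gives $2^{2} \, \mathbb{E}(x+iN)^{2} = 4 \left( x^{2} + 2ix \, \mathbb{E}N - \mathbb{E}N^{2} \right) = 4x^{2} - 2 = H_{2}(x)$, since $\mathbb{E}N = 0$ and $\mathbb{E}N^{2} = \tfrac{1}{2}$.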
The moment representation for the Gegenbauer polynomials \eqref{mom-geg} yields the final result presented here.
\begin{theorem} Let $m \in \mathbb{N}$. The Gegenbauer polynomials $C_{n}^{(a)}(x)$ satisfy \begin{equation} \label{multiG} C_{n}^{(a_{1}+\cdots+a_{m})}(x) = \sum_{k_{1}+\cdots + k_{m}=n} C_{k_{1}}^{(a_{1})}(x) \cdots C_{k_{m}}^{(a_{m})}(x). \end{equation} \end{theorem}
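Equivalently, \eqref{multiG} can be read off from the generating function $\sum_{n \geq 0} C_{n}^{(a)}(x) t^{n} = (1 - 2xt + t^{2})^{-a}$: comparing coefficients of $t^{n}$ in $(1 - 2xt + t^{2})^{-(a_{1} + \cdots + a_{m})} = \prod_{i=1}^{m} (1 - 2xt + t^{2})^{-a_{i}}$ gives the identity.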
\begin{remark} A relation between Gegenbauer and Hermite polynomials is given by \begin{equation} \lim\limits_{a \to \infty} \frac{1}{a^{n/2}} C_{n}^{(a)} \left( \frac{x}{\sqrt{a}} \right) = \frac{1}{n!} H_{n}(x). \end{equation} This relation allows one to recover identity \eqref{multiH} easily from identity \eqref{multiG}. \end{remark}
The examples presented here show that many of the classical identities for special functions may be established by probabilistic methods. The reader is encouraged to try this method on his/her favorite identity.
\noindent \textbf{Acknowledgements}. The work of the second author was partially supported by NSF-DMS 0070567.
\end{document}